https://en.wikipedia.org/wiki/Model%20theory
Model theory
In mathematical logic, model theory is the study of the relationship between formal theories (a collection of sentences in a formal language expressing statements about a mathematical structure), and their models (those structures in which the statements of the theory hold). The aspects investigated include the number and size of models of a theory, the relationship of different models to each other, and their interaction with the formal language itself. In particular, model theorists also investigate the sets that can be defined in a model of a theory, and the relationship of such definable sets to each other. As a separate discipline, model theory goes back to Alfred Tarski, who first used the term "Theory of Models" in publication in 1954. Since the 1970s, the subject has been shaped decisively by Saharon Shelah's stability theory.

Compared to other areas of mathematical logic such as proof theory, model theory is often less concerned with formal rigour and closer in spirit to classical mathematics. This has prompted the comment that "if proof theory is about the sacred, then model theory is about the profane". The applications of model theory to algebraic and Diophantine geometry reflect this proximity to classical mathematics, as they often involve an integration of algebraic and model-theoretic results and techniques. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature. The most prominent scholarly organization in the field of model theory is the Association for Symbolic Logic.

Overview

This page focuses on finitary first-order model theory of infinite structures. The relative emphasis placed on the class of models of a theory as opposed to the class of definable sets within a model fluctuated in the history of the subject, and the two directions are summarised by the pithy characterisations from 1973 and 1997 respectively:

model theory = universal algebra + logic,

where universal algebra stands for mathematical structures and logic for logical theories; and

model theory = algebraic geometry − fields,

where logical formulas are to definable sets what equations are to varieties over a field. Nonetheless, the interplay of classes of models and the sets definable in them has been crucial to the development of model theory throughout its history. For instance, while stability was originally introduced to classify theories by their numbers of models in a given cardinality, stability theory proved crucial to understanding the geometry of definable sets.

Fundamental notions of first-order model theory

First-order logic

A first-order formula is built out of atomic formulas such as R(f(x,y),z) or y = x + 1 by means of the Boolean connectives ¬, ∧, ∨, → and prefixing of quantifiers ∀v or ∃v. A sentence is a formula in which each occurrence of a variable is in the scope of a corresponding quantifier. Examples of formulas are φ (or φ(x), to indicate that x is the unbound variable in φ) and ψ (or ψ(x)), defined as follows:

φ = ∀u ∀v (∃w (x × w = u × v) → (∃w (x × w = u) ∨ ∃w (x × w = v))) ∧ x ≠ 0 ∧ x ≠ 1,
ψ = ∀u ∀v ((u × v = x) → (u = x) ∨ (v = x)) ∧ x ≠ 0 ∧ x ≠ 1.

(Note that the equality symbol has a double meaning here.) It is intuitively clear how to translate such formulas into mathematical meaning. In the semiring of natural numbers 𝒩 = (ℕ, +, ×, 0, 1), viewed as a structure with binary functions for addition and multiplication and constants for 0 and 1 of the natural numbers, for example, an element n satisfies the formula φ if and only if n is a prime number. The formula ψ similarly defines irreducibility.
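As a small illustration of the difference between formulas and sentences (a standard exercise, not part of the original text), Euclid's theorem that there are infinitely many primes can be expressed by quantifying away the unbound variable of φ:

∀y ∃x (∃z (z ≠ 0 ∧ y + z = x) ∧ φ(x)).

Here the subformula ∃z (z ≠ 0 ∧ y + z = x) expresses y < x in a signature that has no order symbol; the whole formula contains no unbound variables, so it is a sentence, and it is true in 𝒩.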
Tarski gave a rigorous definition, sometimes called "Tarski's definition of truth", for the satisfaction relation ⊨, so that one easily proves:

𝒩 ⊨ φ(n) if and only if n is a prime number,
𝒩 ⊨ ψ(n) if and only if n is irreducible.

A set T of sentences is called a (first-order) theory, which takes the sentences in the set as its axioms. A theory is satisfiable if it has a model ℳ ⊨ T, i.e. a structure (of the appropriate signature) which satisfies all the sentences in the set T. A complete theory is a theory that contains every sentence or its negation. The complete theory of all sentences satisfied by a structure is also called the theory of that structure. It is a consequence of Gödel's completeness theorem (not to be confused with his incompleteness theorems) that a theory has a model if and only if it is consistent, i.e. no contradiction is proved by the theory. Therefore, model theorists often use "consistent" as a synonym for "satisfiable".

Basic model-theoretic concepts

A signature or language is a set of non-logical symbols such that each symbol is either a constant symbol, or a function or relation symbol with a specified arity. Note that in some literature, constant symbols are considered as function symbols with zero arity, and hence are omitted. A structure is a set M together with interpretations of each of the symbols of the signature as relations and functions on M (not to be confused with the formal notion of an "interpretation" of one structure in another).

Example: A common signature for ordered rings is (0, 1, +, ×, −, <), where 0 and 1 are 0-ary function symbols (also known as constant symbols), + and × are binary (= 2-ary) function symbols, − is a unary (= 1-ary) function symbol, and < is a binary relation symbol. Then, when these symbols are interpreted to correspond with their usual meaning on ℚ (so that e.g. + is a function from ℚ² to ℚ and < is a subset of ℚ²), one obtains a structure (ℚ, 0, 1, +, ×, −, <).

A structure ℳ is said to model a set of first-order sentences T in the given language if each sentence in T is true in ℳ with respect to the interpretation of the signature previously specified for ℳ. (Again, not to be confused with the formal notion of an "interpretation" of one structure in another.) A model of T is a structure that models T.

A substructure 𝒜 of a σ-structure ℬ is a subset of its domain, closed under all functions in its signature σ, which is regarded as a σ-structure by restricting all functions and relations in σ to the subset. This generalises the analogous concepts from algebra; for instance, a subgroup is a substructure in the signature with multiplication and inverse. A substructure is said to be elementary if for any first-order formula φ and any elements a1, ..., an of 𝒜, 𝒜 ⊨ φ(a1, ..., an) if and only if ℬ ⊨ φ(a1, ..., an). In particular, if φ is a sentence and 𝒜 an elementary substructure of ℬ, then 𝒜 ⊨ φ if and only if ℬ ⊨ φ. Thus, an elementary substructure is a model of a theory exactly when the superstructure is a model.

Example: While the field of algebraic numbers is an elementary substructure of the field of complex numbers ℂ, the rational field ℚ is not, as we can express "There is a square root of 2" as a first-order sentence, ∃x (x × x = 1 + 1), satisfied by ℂ but not by ℚ.

An embedding of a σ-structure 𝒜 into another σ-structure ℬ is a map f: A → B between the domains which can be written as an isomorphism of 𝒜 with a substructure of ℬ. If it can be written as an isomorphism with an elementary substructure, it is called an elementary embedding. Every embedding is an injective homomorphism, but the converse holds only if the signature contains no relation symbols, such as in groups or fields.
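A standard example (not from the text) shows that a substructure, even one isomorphic to the whole structure, need not be elementary: in the signature {+, 0}, the even integers 2ℤ form a substructure of ℤ, and the map x ↦ 2x is an isomorphism of ℤ with 2ℤ, yet with the parameter 2,

2ℤ ⊭ ∃y (y + y = 2), while ℤ ⊨ ∃y (y + y = 2),

so the inclusion of 2ℤ in ℤ is an embedding but not an elementary embedding.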
A field or a vector space can be regarded as a (commutative) group by simply ignoring some of its structure. The corresponding notion in model theory is that of a reduct of a structure to a subset of the original signature. The opposite relation is called an expansion - e.g. the (additive) group of the rational numbers, regarded as a structure in the signature {+,0}, can be expanded to a field with the signature {×,+,1,0} or to an ordered group with the signature {+,0,<}. Similarly, if σ' is a signature that extends another signature σ, then a complete σ'-theory can be restricted to σ by intersecting the set of its sentences with the set of σ-formulas. Conversely, a complete σ-theory can be regarded as a σ'-theory, and one can extend it (in more than one way) to a complete σ'-theory. The terms reduct and expansion are sometimes applied to this relation as well.

Compactness and the Löwenheim–Skolem theorem

The compactness theorem states that a set of sentences S is satisfiable if every finite subset of S is satisfiable. The analogous statement with consistent instead of satisfiable is trivial, since every proof can have only a finite number of antecedents used in the proof. The completeness theorem allows us to transfer this to satisfiability. However, there are also several direct (semantic) proofs of the compactness theorem. As a corollary (i.e., its contrapositive), the compactness theorem says that every unsatisfiable first-order theory has a finite unsatisfiable subset. This theorem is of central importance in model theory, where the words "by compactness" are commonplace.

Another cornerstone of first-order model theory is the Löwenheim–Skolem theorem. According to the Löwenheim–Skolem theorem, every infinite structure in a countable signature has a countable elementary substructure. Conversely, for any infinite cardinal κ, every infinite structure in a countable signature that is of cardinality less than κ can be elementarily embedded in another structure of cardinality κ (there is a straightforward generalisation to uncountable signatures). In particular, the Löwenheim–Skolem theorem implies that any theory in a countable signature with infinite models has a countable model as well as arbitrarily large models. In a certain sense made precise by Lindström's theorem, first-order logic is the most expressive logic for which both the Löwenheim–Skolem theorem and the compactness theorem hold.

Definability

Definable sets

In model theory, definable sets are important objects of study. For instance, in 𝒩 the formula φ above defines the subset of prime numbers, while the formula ∃y (y + y = x) defines the subset of even numbers. In a similar way, formulas with n free variables define subsets of Mⁿ. For example, in a field, the formula y = x × x defines the curve of all (x, y) such that y = x × x. Both of the definitions mentioned here are parameter-free, that is, the defining formulas don't mention any fixed domain elements. However, one can also consider definitions with parameters from the model. For instance, in ℝ, the formula y = x × x + π uses the parameter π from ℝ to define a curve.

Eliminating quantifiers

In general, definable sets without quantifiers are easy to describe, while definable sets involving possibly nested quantifiers can be much more complicated. This makes quantifier elimination a crucial tool for analysing definable sets: A theory T has quantifier elimination if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to a first-order formula ψ(x1, ..., xn) without quantifiers, i.e. ∀x1 ... ∀xn (φ(x1, ..., xn) ↔ ψ(x1, ..., xn)) holds in all models of T.
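To give a flavour of quantifier elimination (a small worked instance, not taken from the text), consider the theory of algebraically closed fields discussed below. Modulo that theory, the assertion that a quadratic polynomial has a root loses its quantifier:

∃x (a × x × x + b × x + c = 0) ↔ (a ≠ 0 ∨ b ≠ 0 ∨ c = 0),

because in an algebraically closed field every non-constant polynomial has a root, while the constant polynomial c has one exactly when c = 0.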
If the theory of a structure has quantifier elimination, every set definable in a structure is definable by a quantifier-free formula over the same parameters as the original definition. For example, the theory of algebraically closed fields in the signature σring = (×,+,−,0,1) has quantifier elimination. This means that in an algebraically closed field, every formula is equivalent to a Boolean combination of equations between polynomials.

If a theory does not have quantifier elimination, one can add additional symbols to its signature so that it does. Axiomatisability and quantifier elimination results for specific theories, especially in algebra, were among the early landmark results of model theory. But often instead of quantifier elimination a weaker property suffices: A theory T is called model-complete if every substructure of a model of T which is itself a model of T is an elementary substructure. There is a useful criterion for testing whether a substructure is an elementary substructure, called the Tarski–Vaught test. It follows from this criterion that a theory T is model-complete if and only if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to an existential first-order formula, i.e. a formula of the form ∃v1 ... ∃vm ψ(x1, ..., xn, v1, ..., vm), where ψ is quantifier-free. A theory that is not model-complete may have a model completion, which is a related model-complete theory that is not, in general, an extension of the original theory. A more general notion is that of a model companion.

Minimality

In every structure, every finite subset {a1, ..., an} is definable with parameters: simply use the formula x = a1 ∨ ... ∨ x = an. Since we can negate this formula, every cofinite subset (which includes all but finitely many elements of the domain) is also always definable. This leads to the concept of a minimal structure. A structure ℳ is called minimal if every subset definable with parameters from ℳ is either finite or cofinite. The corresponding concept at the level of theories is called strong minimality: A theory T is called strongly minimal if every model of T is minimal. A structure is called strongly minimal if the theory of that structure is strongly minimal. Equivalently, a structure is strongly minimal if every elementary extension is minimal.

Since the theory of algebraically closed fields has quantifier elimination, every definable subset of an algebraically closed field is definable by a quantifier-free formula in one variable. Quantifier-free formulas in one variable express Boolean combinations of polynomial equations in one variable, and since a nontrivial polynomial equation in one variable has only a finite number of solutions, the theory of algebraically closed fields is strongly minimal.

On the other hand, the field ℝ of real numbers is not minimal: consider, for instance, the set defined by the formula ∃y (y × y = x). This defines the subset of non-negative real numbers, which is neither finite nor cofinite. One can in fact use this formula to define arbitrary intervals on the real number line. It turns out that these suffice to represent every definable subset of ℝ. This generalisation of minimality has been very useful in the model theory of ordered structures. A densely totally ordered structure ℳ in a signature including a symbol for the order relation is called o-minimal if every subset definable with parameters from ℳ is a finite union of points and intervals.
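The claim that ∃y (y × y = x) suffices to define arbitrary intervals can be made concrete (a routine verification, assuming the ring signature with subtraction): for parameters a < b, the open interval (a, b) is defined by

∃y (y ≠ 0 ∧ y × y = x − a) ∧ ∃z (z ≠ 0 ∧ z × z = b − x),

since a real number is a non-zero square exactly when it is strictly positive. Such a set is neither finite nor cofinite, witnessing again that ℝ is not minimal, while o-minimality asserts that finite unions of points and intervals are the only definable subsets.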
Definable and interpretable structures

Particularly important are those definable sets that are also substructures, i.e. contain all constants and are closed under function application. For instance, one can study the definable subgroups of a certain group. However, there is no need to limit oneself to substructures in the same signature. Since formulas with n free variables define subsets of Mⁿ, n-ary relations can also be definable. Functions are definable if the function graph is a definable relation, and a constant a is definable if there is a formula φ(x) such that a is the only element of ℳ for which φ(a) is true. In this way, one can study definable groups and fields in general structures, for instance, which has been important in geometric stability theory.

One can even go one step further, and move beyond immediate substructures. Given a mathematical structure, there are very often associated structures which can be constructed as a quotient of part of the original structure via an equivalence relation. An important example is a quotient group of a group. One might say that to understand the full structure one must understand these quotients. When the equivalence relation is definable, we can give the previous sentence a precise meaning. We say that these structures are interpretable. A key fact is that one can translate sentences from the language of the interpreted structures to the language of the original structure. Thus one can show that if a structure ℳ interprets another whose theory is undecidable, then ℳ itself is undecidable.

Types

Basic notions

For a sequence of elements a1, ..., an of a structure ℳ and a subset A of ℳ, one can consider the set of all first-order formulas φ(x1, ..., xn) with parameters in A that are satisfied by a1, ..., an. This is called the complete (n-)type realised by a1, ..., an over A. If there is an automorphism of ℳ that is constant on A and sends a1, ..., an to b1, ..., bn respectively, then a1, ..., an and b1, ..., bn realise the same complete type over A.

The real number line ℝ, viewed as a structure with only the order relation {<}, will serve as a running example in this section. Every element a ∈ ℝ satisfies the same 1-type over the empty set. This is clear since any two real numbers a and b are connected by the order automorphism that shifts all numbers by b−a. The complete 2-type over the empty set realised by a pair of numbers a, b depends on their order: either a < b, a = b, or b < a. Over the subset ℤ of integers, the 1-type of a non-integer real number a depends on its value rounded down to the nearest integer.

More generally, whenever ℳ is a structure and A a subset of ℳ, a (partial) n-type over A is a set of formulas p with at most n free variables that are realised in an elementary extension 𝒩 of ℳ. If p contains every such formula or its negation, then p is complete. The set of complete n-types over A is often written as Sn(A). If A is the empty set, then the type space only depends on the theory T of ℳ. The notation Sn(T) is commonly used for the set of types over the empty set consistent with T. If there is a single formula φ such that the theory of ℳ implies φ → ψ for every formula ψ in p, then p is called isolated.

Since the real numbers ℝ are Archimedean, there is no real number larger than every integer. However, a compactness argument shows that there is an elementary extension of the real number line in which there is an element larger than any integer. Therefore, the set of formulas {n < x : n ∈ ℤ} is a 1-type over ℤ ⊆ ℝ that is not realised in the real number line ℝ.

A subset of Mⁿ that can be expressed as exactly those elements of Mⁿ realising a certain type over A is called type-definable over A.
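Returning to the running example (ℝ, <), complete types over the dense parameter set ℚ can be described concretely (a standard computation, using the quantifier elimination of dense linear orders without endpoints): the complete 1-type of an irrational number such as π over ℚ is determined by the cut it induces,

{ q < x : q ∈ ℚ, q < π } ∪ { x < q : q ∈ ℚ, π < q },

and two reals realise the same complete type over ℚ exactly when they induce the same cut. Since ℚ is dense in ℝ, distinct irrationals induce distinct cuts and hence distinct types.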
For an algebraic example, suppose K is an algebraically closed field. The theory has quantifier elimination. This allows us to show that a type is determined exactly by the polynomial equations it contains. Thus the set of complete n-types over a subfield k corresponds to the set of prime ideals of the polynomial ring k[x1, ..., xn], and the type-definable sets are exactly the affine varieties.

Structures and types

While not every type is realised in every structure, every structure realises its isolated types. If the only types over the empty set that are realised in a structure are the isolated types, then the structure is called atomic. On the other hand, no structure realises every type over every parameter set; if one takes all of ℳ as the parameter set, then every 1-type over ℳ realised in ℳ is isolated by a formula of the form a = x for an a ∈ ℳ. However, any proper elementary extension of ℳ contains an element that is not in ℳ. Therefore, a weaker notion has been introduced that captures the idea of a structure realising all types it could be expected to realise. A structure ℳ is called saturated if it realises every type over a parameter set A ⊆ ℳ that is of smaller cardinality than ℳ itself.

While an automorphism that is constant on A will always preserve types over A, it is generally not true that any two sequences a1, ..., an and b1, ..., bn that satisfy the same type over A can be mapped to each other by such an automorphism. A structure ℳ in which this converse does hold for all A of smaller cardinality than ℳ is called homogeneous.

The real number line ℝ is atomic in the language that contains only the order {<}, since all n-types over the empty set realised by a1, ..., an in ℝ are isolated by the order relations between the a1, ..., an. It is not saturated, however, since it does not realise any 1-type over the countable set ℤ that implies x to be larger than any integer. The rational number line ℚ is saturated, in contrast, since ℚ is itself countable and therefore only has to realise types over finite subsets to be saturated.

Stone spaces

The set of definable subsets of Mⁿ over some parameters A is a Boolean algebra. By Stone's representation theorem for Boolean algebras there is a natural dual topological space, which consists exactly of the complete n-types over A. The topology is generated by sets of the form {p : φ ∈ p} for single formulas φ. This is called the Stone space of n-types over A. This topology explains some of the terminology used in model theory: The compactness theorem says that the Stone space is a compact topological space, and a type p is isolated if and only if p is an isolated point in the Stone topology.

While types in algebraically closed fields correspond to the spectrum of the polynomial ring, the topology on the type space is the constructible topology: a set of types is basic open iff it is of the form {p : f = 0 ∈ p} or of the form {p : f ≠ 0 ∈ p} for some polynomial f. This is finer than the Zariski topology.

Constructing models

Realising and omitting types

Constructing models that realise certain types and do not realise others is an important task in model theory. Not realising a type is referred to as omitting it, and is generally possible by the (countable) omitting types theorem: Let T be a theory in a countable signature and let Φ be a countable set of non-isolated types over the empty set. Then there is a model of T which omits every type in Φ. This implies that if a theory in a countable signature has only countably many types over the empty set, then this theory has an atomic model.
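For the running example, the Stone spaces can be computed outright (a short exercise, using the quantifier elimination of dense linear orders without endpoints): every formula in two variables is equivalent modulo the theory to a Boolean combination of x1 < x2, x1 = x2 and x2 < x1, so

S2(Th(ℚ, <)) = { ⟨x1 < x2⟩, ⟨x1 = x2⟩, ⟨x2 < x1⟩ },

where ⟨χ⟩ denotes the unique complete 2-type containing χ. This is a discrete three-point space in which every type is isolated, and similarly every Sn is finite; in particular there are no non-isolated types over the empty set to omit, and every countable model of this theory is atomic.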
On the other hand, there is always an elementary extension in which any set of types over a fixed parameter set is realised: Let ℳ be a structure and let Φ be a set of complete types over a given parameter set A ⊆ ℳ. Then there is an elementary extension 𝒩 of ℳ which realises every type in Φ. However, since the parameter set is fixed and there is no mention here of the cardinality of 𝒩, this does not imply that every theory has a saturated model. In fact, whether every theory has a saturated model is independent of the Zermelo-Fraenkel axioms of set theory, and is true if the generalised continuum hypothesis holds.

Ultraproducts

Ultraproducts are used as a general technique for constructing models that realise certain types. An ultraproduct is obtained from the direct product of a set of structures over an index set I by identifying those tuples that agree on almost all entries, where almost all is made precise by an ultrafilter U on I. An ultraproduct of copies of the same structure is known as an ultrapower. The key to using ultraproducts in model theory is Łoś's theorem: Let Mi (for i ∈ I) be a set of σ-structures indexed by an index set I and U an ultrafilter on I. Then any σ-formula is true in the ultraproduct of the Mi by U if and only if the set of all i for which it is true in Mi lies in U. In particular, any ultraproduct of models of a theory is itself a model of that theory, and thus if two models have isomorphic ultrapowers, they are elementarily equivalent. The Keisler-Shelah theorem provides a converse: If ℳ and 𝒩 are elementarily equivalent, then there is a set I and an ultrafilter U on I such that the ultrapowers by U of ℳ and 𝒩 are isomorphic. Therefore, ultraproducts provide a way to talk about elementary equivalence that avoids mentioning first-order theories at all. Basic theorems of model theory such as the compactness theorem have alternative proofs using ultraproducts, and they can be used to construct saturated elementary extensions if they exist.
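A classical application of Łoś's theorem (a standard construction, sketched here assuming a non-principal ultrafilter U on the set P of prime numbers) is the ultraproduct of the finite prime fields:

F = ∏ p∈P 𝔽p / U.

For each fixed prime q, the sentence 1 + ... + 1 ≠ 0 (with q summands) is true in every 𝔽p with p ≠ q, hence on a set of indices belonging to U, so by Łoś's theorem it is true in F. The ultraproduct of the fields of prime characteristic is therefore a field of characteristic 0; constructions of this kind underlie Ax's work on pseudofinite fields mentioned below.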
Categoricity

A theory was originally called categorical if it determines a structure up to isomorphism. It turns out that this definition is not useful, due to serious restrictions in the expressivity of first-order logic. The Löwenheim–Skolem theorem implies that if a theory T has an infinite model for some infinite cardinal number, then it has a model of size κ for any sufficiently large cardinal number κ. Since two models of different sizes cannot possibly be isomorphic, only finite structures can be described by a categorical theory.

However, the weaker notion of κ-categoricity for a cardinal κ has become a key concept in model theory. A theory T is called κ-categorical if any two models of T that are of cardinality κ are isomorphic. It turns out that the question of κ-categoricity depends critically on whether κ is bigger than the cardinality of the language (i.e. |σ| + ℵ0, where |σ| is the cardinality of the signature). For finite or countable signatures this means that there is a fundamental difference between ω-categoricity and κ-categoricity for uncountable κ.

ω-categoricity

ω-categorical theories can be characterised by properties of their type space: For a complete first-order theory T in a finite or countable signature the following conditions are equivalent:
(1) T is ω-categorical.
(2) Every type in Sn(T) is isolated.
(3) For every natural number n, Sn(T) is finite.
(4) For every natural number n, the number of formulas φ(x1, ..., xn) in n free variables, up to equivalence modulo T, is finite.

The theory of (ℚ, <), which is also the theory of (ℝ, <), is ω-categorical, as every n-type over the empty set is isolated by the pairwise order relations between the variables x1, ..., xn. This means that every countable dense linear order without endpoints is order-isomorphic to the rational number line. On the other hand, the theories of ℚ, ℝ and ℂ as fields are not ω-categorical. This follows from the fact that in all those fields, any of the infinitely many natural numbers can be defined by a formula of the form x = 1 + ... + 1.

ω-categorical theories and their countable models also have strong ties with oligomorphic groups: A complete first-order theory T in a finite or countable signature is ω-categorical if and only if the automorphism group of its countable model is oligomorphic. The equivalent characterisations of this subsection, due independently to Engeler, Ryll-Nardzewski and Svenonius, are sometimes referred to as the Ryll-Nardzewski theorem. In combinatorial signatures, a common source of ω-categorical theories are Fraïssé limits, which are obtained as the limit of amalgamating all possible configurations of a class of finite relational structures.

Uncountable categoricity

Michael Morley showed in 1963 that there is only one notion of uncountable categoricity for theories in countable languages.

Morley's categoricity theorem: If a first-order theory T in a finite or countable signature is κ-categorical for some uncountable cardinal κ, then T is κ-categorical for all uncountable cardinals κ.

Morley's proof revealed deep connections between uncountable categoricity and the internal structure of the models, which became the starting point of classification theory and stability theory. Uncountably categorical theories are from many points of view the most well-behaved theories. In particular, complete strongly minimal theories are uncountably categorical. This shows that the theory of algebraically closed fields of a given characteristic is uncountably categorical, with the transcendence degree of the field determining its isomorphism type. A theory that is both ω-categorical and uncountably categorical is called totally categorical.

Stability theory

A key factor in the structure of the class of models of a first-order theory is its place in the stability hierarchy. A complete theory T is called λ-stable for a cardinal λ if for any model ℳ of T and any parameter set A ⊆ ℳ of cardinality not exceeding λ, there are at most λ complete types over A. A theory is called stable if it is λ-stable for some infinite cardinal λ. Traditionally, theories that are ℵ0-stable are called ω-stable.

The stability hierarchy

A fundamental result in stability theory is the stability spectrum theorem, which implies that every complete theory T in a countable signature falls in one of the following classes:
(1) There are no cardinals λ such that T is λ-stable.
(2) T is λ-stable if and only if λ^ℵ0 = λ (see Cardinal exponentiation for an explanation of λ^ℵ0).
(3) T is λ-stable for any λ ≥ 2^ℵ0 (where 2^ℵ0 is the cardinality of the continuum).

A theory of the first type is called unstable, a theory of the second type is called strictly stable and a theory of the third type is called superstable. Furthermore, if a theory is ω-stable, it is stable in every infinite cardinal, so ω-stability is stronger than superstability. Many constructions in model theory are easier when restricted to stable theories; for instance, every model of a stable theory has a saturated elementary extension, regardless of whether the generalised continuum hypothesis is true.

Shelah's original motivation for studying stable theories was to decide how many models a countable theory has of any uncountable cardinality. If a theory is uncountably categorical, then it is ω-stable.
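A concrete example of a totally categorical theory (a standard one, not worked out in the original text) is the theory of infinite vector spaces over a fixed finite field 𝔽q, in the signature with +, 0 and a unary function symbol for each scalar. A model is determined up to isomorphism by its dimension, and since a finite-dimensional space over 𝔽q is finite,

|V| infinite ⟹ |V| = dim V,

so any two models of the same infinite cardinality have the same dimension and are therefore isomorphic. This gives κ-categoricity for every infinite cardinal κ, and such theories are also ω-stable, in line with the statement above.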
More generally, the Main gap theorem implies that if there is an uncountable cardinal λ such that a theory T has less than 2^λ models of cardinality λ, then T is superstable.

Geometric stability theory

The stability hierarchy is also crucial for analysing the geometry of definable sets within a model of a theory. In ω-stable theories, Morley rank is an important dimension notion for definable sets S within a model. It is defined by transfinite induction:
(1) The Morley rank is at least 0 if S is non-empty.
(2) For α a successor ordinal, the Morley rank is at least α if in some elementary extension N of M, the set S has infinitely many disjoint definable subsets, each of rank at least α − 1.
(3) For α a non-zero limit ordinal, the Morley rank is at least α if it is at least β for all β less than α.

A theory T in which every definable set has well-defined Morley rank is called totally transcendental; if T is countable, then T is totally transcendental if and only if T is ω-stable. Morley rank can be extended to types by setting the Morley rank of a type to be the minimum of the Morley ranks of the formulas in the type. Thus, one can also speak of the Morley rank of an element a over a parameter set A, defined as the Morley rank of the type of a over A. There are also analogues of Morley rank which are well-defined if and only if a theory is superstable (U-rank) or merely stable (Shelah's ∞-rank). Those dimension notions can be used to define notions of independence and of generic extensions.
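Unwinding the definition in a model K of the theory of algebraically closed fields (a routine computation under the clauses above): a non-empty finite definable set admits no infinite family of pairwise disjoint non-empty definable subsets, so its Morley rank is exactly 0, while K itself contains infinitely many disjoint singletons {a}, each of rank 0, giving

MR(S) = 0 for S finite and non-empty, MR(K) = 1, and in general MR(Kⁿ) = n.

That MR(K) is not at least 2 follows from strong minimality: any two cofinite subsets of K intersect, so K cannot contain infinitely many pairwise disjoint infinite definable subsets. The rank of a definable set in an algebraically closed field agrees with the dimension of its Zariski closure, which is one sense in which Morley rank behaves as a dimension.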
More recently, stability has been decomposed into simplicity and "not the independence property" (NIP). Simple theories are those theories in which a well-behaved notion of independence can be defined, while NIP theories generalise o-minimal structures. They are related to stability since a theory is stable if and only if it is NIP and simple, and various aspects of stability theory have been generalised to theories in one of these classes.

Non-elementary model theory

Model-theoretic results have been generalised beyond elementary classes, that is, classes axiomatisable by a first-order theory. Model theory in higher-order logics or infinitary logics is hampered by the fact that completeness and compactness do not in general hold for these logics. This is made concrete by Lindström's theorem, stating roughly that first-order logic is essentially the strongest logic in which both the Löwenheim–Skolem theorems and compactness hold. However, model-theoretic techniques have been developed extensively for these logics too. It turns out, however, that much of the model theory of more expressive logical languages is independent of Zermelo-Fraenkel set theory.

More recently, alongside the shift in focus to complete stable and categorical theories, there has been work on classes of models defined semantically rather than axiomatised by a logical theory. One example is homogeneous model theory, which studies the class of substructures of arbitrarily large homogeneous models. Fundamental results of stability theory and geometric stability theory generalise to this setting. As a generalisation of strongly minimal theories, quasiminimally excellent classes are those in which every definable set is either countable or co-countable. They are key to the model theory of the complex exponential function. The most general semantic framework in which stability is studied is that of abstract elementary classes, which are defined by a strong substructure relation generalising that of an elementary substructure. Even though its definition is purely semantic, every abstract elementary class can be presented as the models of a first-order theory which omit certain types. Generalising stability-theoretic notions to abstract elementary classes is an ongoing research program.

Selected applications

Among the early successes of model theory are Tarski's proofs of quantifier elimination for various algebraically interesting classes, such as the real closed fields, Boolean algebras and algebraically closed fields of a given characteristic. Quantifier elimination allowed Tarski to show that the first-order theories of real-closed and algebraically closed fields as well as the first-order theory of Boolean algebras are decidable, classify the Boolean algebras up to elementary equivalence and show that the theories of real-closed fields and of algebraically closed fields of a given characteristic are complete. Furthermore, quantifier elimination provided a precise description of definable relations on algebraically closed fields as algebraic varieties and of the definable relations on real-closed fields as semialgebraic sets.

In the 1960s, the introduction of the ultraproduct construction led to new applications in algebra. This includes Ax's work on pseudofinite fields, proving that the theory of finite fields is decidable, and Ax and Kochen's proof of a special case of Artin's conjecture on Diophantine equations, the Ax-Kochen theorem. The ultraproduct construction also led to Abraham Robinson's development of nonstandard analysis, which aims to provide a rigorous calculus of infinitesimals. More recently, the connection between stability and the geometry of definable sets led to several applications from algebraic and Diophantine geometry, including Ehud Hrushovski's 1996 proof of the geometric Mordell-Lang conjecture in all characteristics. In 2001, similar methods were used to prove a generalisation of the Manin-Mumford conjecture. In 2011, Jonathan Pila applied techniques around o-minimality to prove the André-Oort conjecture for products of modular curves.

In a separate strand of inquiries that also grew around stable theories, Laskowski showed in 1992 that NIP theories describe exactly those definable classes that are PAC-learnable in machine learning theory. This has led to several interactions between these separate areas. In 2018, the correspondence was extended as Hunter Chase and James Freitag showed that stable theories correspond to online learnable classes.

History

Model theory as a subject has existed since approximately the middle of the 20th century, and the name was coined by Alfred Tarski, a member of the Lwów–Warsaw school, in 1954. However, some earlier research, especially in mathematical logic, is often regarded as being of a model-theoretical nature in retrospect. The first significant result in what is now model theory was a special case of the downward Löwenheim–Skolem theorem, published by Leopold Löwenheim in 1915. The compactness theorem was implicit in work by Thoralf Skolem, but it was first published in 1930, as a lemma in Kurt Gödel's proof of his completeness theorem. The Löwenheim–Skolem theorem and the compactness theorem received their respective general forms in 1936 and 1941 from Anatoly Maltsev. The development of model theory as an independent discipline was brought on by Alfred Tarski during the interbellum. Tarski's work included logical consequence, deductive systems, the algebra of logic, the theory of definability, and the semantic definition of truth, among other topics.
His semantic methods culminated in the model theory he and a number of his Berkeley students developed in the 1950s and '60s. In the further history of the discipline, different strands began to emerge, and the focus of the subject shifted. In the 1960s, techniques around ultraproducts became a popular tool in model theory. At the same time, researchers such as James Ax were investigating the first-order model theory of various algebraic classes, and others such as H. Jerome Keisler were extending the concepts and results of first-order model theory to other logical systems. Then, inspired by Morley's problem, Shelah developed stability theory. His work around stability changed the complexion of model theory, giving rise to a whole new class of concepts. This is known as the paradigm shift. Over the next decades, it became clear that the resulting stability hierarchy is closely connected to the geometry of sets that are definable in those models; this gave rise to the subdiscipline now known as geometric stability theory. An example of an influential proof from geometric model theory is Hrushovski's proof of the Mordell–Lang conjecture for function fields.

Connections to related branches of mathematical logic

Finite model theory

Finite model theory, which concentrates on finite structures, diverges significantly from the study of infinite structures in both the problems studied and the techniques used. In particular, many central results of classical model theory fail when restricted to finite structures. This includes the compactness theorem, Gödel's completeness theorem, and the method of ultraproducts for first-order logic. At the interface of finite and infinite model theory are algorithmic or computable model theory and the study of 0-1 laws, where the infinite models of a generic theory of a class of structures provide information on the distribution of finite models. Prominent application areas of finite model theory are descriptive complexity theory, database theory and formal language theory.

Set theory

Any set theory (which is expressed in a countable language), if it is consistent, has a countable model; this is known as Skolem's paradox, since there are sentences in set theory which postulate the existence of uncountable sets and yet these sentences are true in our countable model. In particular, the proof of the independence of the continuum hypothesis requires considering sets in models which appear to be uncountable when viewed from within the model, but are countable to someone outside the model.

The model-theoretic viewpoint has been useful in set theory; for example in Kurt Gödel's work on the constructible universe, which, along with the method of forcing developed by Paul Cohen, can be used to prove the (again philosophically interesting) independence of the axiom of choice and the continuum hypothesis from the other axioms of set theory.

In the other direction, model theory is itself formalised within Zermelo-Fraenkel set theory. For instance, the development of the fundamentals of model theory (such as the compactness theorem) relies on the axiom of choice, and is in fact equivalent over Zermelo-Fraenkel set theory without choice to the Boolean prime ideal theorem. Other results in model theory depend on set-theoretic axioms beyond the standard ZFC framework. For example, if the Continuum Hypothesis holds then every countable model has an ultrapower which is saturated (in its own cardinality).
Similarly, if the Generalized Continuum Hypothesis holds then every model has a saturated elementary extension. Neither of these results is provable in ZFC alone. Finally, some questions arising from model theory (such as compactness for infinitary logics) have been shown to be equivalent to large cardinal axioms.
https://en.wikipedia.org/wiki/Massachusetts%20Bay%20Transportation%20Authority
Massachusetts Bay Transportation Authority
The Massachusetts Bay Transportation Authority (abbreviated MBTA and known colloquially as "the T") is the public agency responsible for operating most public transportation services in Greater Boston, Massachusetts. The MBTA transit network includes the MBTA subway with three metro lines (the Blue, Orange, and Red lines), two light rail lines (the Green and Mattapan lines), and a five-line bus rapid transit system (the Silver Line); MBTA bus local and express service; the twelve-line MBTA Commuter Rail system; and several ferry routes. By ridership, the rapid transit lines form the fourth-busiest rapid transit system in the United States and the light rail lines the third-busiest light rail system; the commuter rail system is the fifth-busiest commuter rail system in the U.S.

The MBTA is the successor of several previous public and private operators. Privately operated transit in Boston began with commuter rail in 1834 and horsecar lines in 1856. The various horsecar companies were consolidated under the West End Street Railway in the 1880s and electrified over the next decade. The Boston Elevated Railway (BERy) succeeded the West End in 1897; over the next several decades, the BERy built a partially-publicly owned rapid transit system, beginning with the Tremont Street subway in 1897. The BERy came under the control of public trustees in 1919, and was subsumed into the fully-publicly owned Metropolitan Transit Authority (MTA) in 1947. The MTA was in turn succeeded in 1964 by the MBTA, with an expanded funding district to fund declining suburban commuter rail service. In its first two decades, the MBTA took over the commuter rail system from the private operators and continued expansion of the rapid transit system. Originally established as an individual department within the Commonwealth of Massachusetts, the MBTA became a division of the Massachusetts Department of Transportation (MassDOT) in 2009.

History

Mass transportation in Boston was provided by private companies, often granted charters by the state legislature for limited monopolies, with powers of eminent domain to establish a right-of-way, until the creation of the MTA in 1947. Development of mass transportation both followed and shaped economic and population patterns.

Railways

Shortly after the steam locomotive became practical for mass transportation, the private Boston and Lowell Railroad was chartered in 1830. The line, one of the oldest railroads in North America, opened in 1835 and connected Boston to Lowell, a major mill town in northeast Massachusetts' Merrimack Valley. This marked the beginning of the development of American intercity railroads, which in Massachusetts would later become the MBTA Commuter Rail system and the Green Line D branch.

Streetcars

Starting with the opening of the Cambridge Railroad on March 26, 1856, a profusion of streetcar lines appeared in Boston under chartered companies. Despite the changes of companies, Boston is the city with the oldest continuously working streetcar system in the world. Many of these companies consolidated, and animal-drawn vehicles were converted to electric propulsion.

Subways and elevated railways

Streetcar congestion in downtown Boston led to the construction of subways in 1897 and elevated rail in 1901. The Tremont Street subway was the first rapid transit tunnel in the United States.
Grade separation added capacity and avoided delays caused by cross streets. The first elevated railway and the first rapid transit line in Boston were built three years before the first underground line of the New York City Subway, but 34 years after the first London Underground lines, and long after the first elevated railway in New York City; its Ninth Avenue El started operations on July 1, 1868, in Manhattan as an elevated cable car line. Various extensions and branches were added at both ends, bypassing more surface tracks. As grade-separated lines were extended, street-running lines were cut back for faster downtown service. The last elevated heavy rail or "El" segments in Boston were at the extremities of the Orange Line: its northern end was relocated in 1975 from Everett to Malden, Massachusetts, and its southern end was relocated into the Southwest Corridor in 1987. However, the Green Line's Causeway Street Elevated remained in service until 2004, when it was relocated into a tunnel with an incline to reconnect to the Lechmere Viaduct. The Lechmere Viaduct and a short section of steel-framed elevated at its northern end remain in service, though the elevated section was cut back slightly and connected to a northwards viaduct extension as part of the Green Line Extension.

Public enterprise

The old elevated railways proved to be an eyesore and required several sharp curves in Boston's twisty streets. The Atlantic Avenue Elevated was closed in 1938 amidst declining ridership and was demolished in 1942. As rail passenger service became increasingly unprofitable, largely due to rising automobile ownership, government takeover prevented abandonment and dismantlement. The MTA purchased and took over subway, elevated, streetcar, and bus operations from the Boston Elevated Railway in 1947. In the 1950s, the MTA opened new subway extensions, while the last two streetcar lines running into the Pleasant Street Portal of the Tremont Street Subway were replaced with buses in 1953 and 1962. In 1958, the MTA purchased the Highland branch from the Boston and Albany Railroad, reopening it a year later as a rapid transit line (now the Green Line D branch).

While the operations of the MTA were relatively stable by the early 1960s, the privately operated commuter rail lines were in freefall. The New Haven Railroad, New York Central Railroad, and Boston and Maine Railroad were all financially struggling; deferred maintenance was hurting the mainlines while most branch lines had been discontinued. The 1945 Coolidge Commission plan assumed that most of the commuter rail lines would be replaced by shorter rapid transit extensions, or simply feed into them at reduced service levels. Passenger service on the entire Old Colony Railroad system serving the southeastern part of the state was abandoned by the New Haven Railroad in 1959, triggering calls for state intervention. Between January 1963 and March 1964, the Mass Transportation Commission tested different fare and service levels on the B&M and New Haven systems. Determining that commuter rail operations were important but could not be financially self-sustaining, the MTC recommended an expansion of the MTA to commuter rail territory. On August 3, 1964, the MBTA succeeded the MTA, with an enlarged service area intended to fund continued commuter rail operations. The original 14-municipality MTA district was expanded to 78 cities and towns.
Several lines were briefly cut back while contracts with out-of-district towns were reached. Except for the outer portions of the Central Mass branch (cut back from Hudson to South Sudbury), the West Medway branch (cut back from West Medway to Millis), the Blackstone Line (cut back from Blackstone to Franklin), and the B&M New Hampshire services (cut back from Portsmouth to Newburyport), these cuts were temporary. However, service on three branch lines, each with only one daily round trip (one morning rush-hour trip into Boston and one evening rush-hour trip back out to the suburbs), was dropped permanently between 1965 and 1976: the Millis branch (the new name of the truncated West Medway branch) and the Dedham Branch were discontinued in 1967, while the Central Mass branch was abandoned in 1971. The MBTA bought the Penn Central (New York Central and New Haven) commuter rail lines in January 1973, Penn Central equipment in April 1976, and all B&M commuter assets in December 1976; these purchases served to make the system state-owned, with the private railroads retained solely as operators. Only two branch lines were abandoned after 1976: service on the Lexington branch (also with only one round trip daily) was discontinued in January 1977 after a snowstorm blocked the line, while the Lowell Line's full-service Woburn branch was eliminated in January 1981 due to poor track conditions.

The MBTA assigned colors to its four rapid transit lines in 1965, and lettered the branches of the Green Line from north to south. Shortages of streetcars, among other factors, caused the replacement of rail service with buses ("bustitution") on two branches of the Green Line. The A branch ceased operating entirely in 1969 and was replaced by the 57 bus, while the E branch was truncated from Arborway to Heath Street in 1985, with the section between Heath Street and Arborway being replaced by the 39 bus. The MBTA purchased bus routes in the outer suburbs to the north and south from the Eastern Massachusetts Street Railway in 1968. As with the commuter rail system, many of the outlying routes were dropped shortly before or after the takeover due to low ridership and high operating costs. The MBTA started subsidizing the Middlesex and Boston Street Railway in 1964, and acquired it in 1972, creating its 5xx bus routes.

In the 1970s, the MBTA received a boost from the Boston Transportation Planning Review, an area-wide re-evaluation of the role of mass transit relative to highways. The review produced a moratorium on highway construction inside Route 128, and numerous mass transit lines were planned for expansion by the Voorhees-Skidmore, Owings and Merrill-ESL consulting team. The removal of elevated lines continued, and the closure of the Washington Street Elevated in 1987 brought the end of rapid transit service to the Roxbury neighborhood. Between 1971 and 1985, the Red Line was extended both north and south, providing not only additional subway system coverage, but also major parking structures at several of the terminal and intermediate stations.

In 1981, seventeen people and one corporation were indicted for their roles in a number of kickback schemes at the MBTA. Massachusetts Secretary of Transportation and MBTA Chairman Barry Locke was convicted of five counts of bribery and sentenced to 7 to 10 years in prison.
21st century

By 1999, the district was expanded further to 175 cities and towns, adding most that were served by or adjacent to commuter rail lines, though the MBTA did not assume responsibility for local service in those communities adjacent to or served by commuter rail. In 2016, the Town of Bourne voted to join the MBTA district, bringing the number of MBTA communities to 176.

Prior to July 1, 2000, the MBTA was reimbursed by the Commonwealth of Massachusetts for all costs above revenue collected (net cost of service). "Forward funding" introduced at that time consists of a dedicated revenue stream from assessments on served cities and towns, along with a 20% portion of the 5% state sales tax. The Commonwealth assigned to the MBTA responsibility for increasing public transit to compensate for increased automobile pollution from the Big Dig. However, these projects have strained the MBTA's limited resources, since the Big Dig project did not include funding for these improvements. Since 1988, the MBTA has been the fastest expanding transit system in the country, even as Greater Boston has been one of the slowest growing metropolitan areas in the United States. The MBTA subsequently went into debt, and fares underwent an appreciable hike on January 1, 2007.

In 2006, the creation of the MetroWest Regional Transit Authority saw several towns subtract their MWRTA assessment from their MBTA assessment, though the amount of funding the MBTA received remained the same. The next year, the MBTA started commuter rail service to the Greenbush section of Scituate, the third branch of the Old Colony service. Rhode Island also paid for extensions of the Providence/Stoughton Line to T.F. Green Airport in 2010 and Wickford Junction in 2012. A new station on the Fairmount Line, the Talbot Avenue station, opened in November 2012.

On June 26, 2009, Governor Deval Patrick signed a law to place the MBTA along with other state transportation agencies within the administrative authority of the Massachusetts Department of Transportation (MassDOT), with the MBTA now part of the Mass Transit division (MassTrans). The 2009 transportation law continued the MBTA corporate structure and changed the MBTA board membership to the five Governor-appointed members of the MassDOT Board.

In February 2015, record-breaking snowfall in Boston during the 2014–15 North American winter caused lengthy closures of portions of the MBTA subway system, and many long-term operational and financial problems with the entire MBTA system came under greater public attention. Massachusetts Governor Charlie Baker subsequently announced the formation of a special advisory panel to diagnose the MBTA's problems and write a report recommending proposals to address them. The panel released its report in April 2015.

On March 19, 2015, using a grassroots tool called GovOnTheT, Steve Kropper and Michele Rapp enlisted 65 Massachusetts General Court legislators to ride the T to the State House, pairing them with 85 TV, radio, electronic, and print reporters. The event responded to widespread anger directed at the governor, state legislators, and MBTA management. The pairings helped to raise awareness of the problems with the T and contributed to its restructuring and refinancing.
The next month, Baker appointed a new MassDOT Board of Directors and proposed a five-year winter resiliency plan, with $83 million to be spent to update infrastructure, purchase new equipment, and improve operations during severe weather. A new state law established the MBTA Fiscal and Management Control Board, effective July 17, 2015, with expanded powers to reform the agency over five years. Its term was extended by another year in 2020. Construction of the Green Line Extension, the first expansion of the rail rapid transit system since 1987, began in 2018. In April 2018, the MBTA Silver Line began operating a route from Chelsea to South Station.

A June 2019 Red Line derailment resulted in train delays for several months, which brought more attention to capital maintenance problems at the T. After complaints from many riders and business groups, the governor proposed adding $50 million for an independent team to speed up inspections and capital projects, and general efforts to speed up existing capital spending from $1 billion to $1.5 billion per year. Replacement of the Red Line signal system was accelerated, including equipment that was damaged in the derailment. Baker proposed allocating to the MBTA $2.7 billion from the state's five-year transportation bond bill plus more money from the proposed multi-state Transportation and Climate Initiative. A December 2019 report by the MBTA's Fiscal and Management Control Board panel found that "safety is not the priority at the T, but it must be." The report said, "There is a general feeling that fiscal controls over the years may have gone too far, which coupled with staff cutting has resulted in the inability to accomplish required maintenance and inspections, or has hampered work keeping legacy system assets fully functional."

In June 2021, the Fiscal and Management Control Board was dissolved, and the following month, Baker signed into law a supplemental budget bill that included a provision creating a permanent MBTA Board of Directors; Baker appointed the new board the following October. In February 2022, MBTA staff reported to the MBTA Board of Directors safety subcommittee that of 61 recommendations made by the Fiscal and Management Control Board in 2019, two-thirds were complete and one-third were in progress or on hold (including all financial review recommendations). In April 2022, the Federal Transit Administration announced in a letter to MBTA General Manager Steve Poftak that it would assume an increased safety oversight role over the MBTA and would conduct a safety management inspection. As of 2022, the MBTA had reduced its greenhouse gas emissions by 47% from 2009 levels, and now buys or produces 100% renewable electricity.

Services

Subway

The subway system has three heavy rail rapid transit lines (the Red, Orange and Blue Lines), and two light rail lines (the Green Line and the Mattapan Line, the latter designated an extension of the Red Line). The system operates according to a spoke-hub distribution paradigm, with the lines running radially between central Boston and its environs. It is common usage in Boston to refer to all four of the color-coded rail lines which run underground as "the subway" or "the T", regardless of the actual railcar equipment used. All four subway lines cross downtown, forming a quadrilateral configuration, and the Orange and Green Lines (which run approximately parallel in that district) also connect directly at two stations just north of downtown.
The Red Line and Blue Line are the only pair of subway lines which do not have a direct transfer connection to each other. Because the various subway lines do not consistently run in any given compass direction, it is customary to refer to line directions as "inbound" or "outbound". Inbound trains travel towards the four downtown transfer stations, and outbound trains travel away from these hub stations.

The Green Line has four branches in the west: B (Boston College), C (Cleveland Circle), D (Riverside), and E (Heath Street). The A branch formerly went to Watertown, filling in the north-to-south letter assignment pattern, and the E branch formerly continued beyond Heath Street to Arborway. The Red Line has two branches in the south, Ashmont and Braintree, named after their terminal stations. The colors were assigned on August 26, 1965, in conjunction with design standards developed by Cambridge Seven Associates, and have served as the primary identifier for the lines since the 1964 reorganization of the MTA into the MBTA. The Orange Line is so named because it used to run along Orange Street (now lower Washington Street), as the former "Orange Street" also was the street that joined the city to the mainland through Boston Neck in colonial times; the Green Line because it runs adjacent to parts of the Emerald Necklace park system; the Blue Line because it runs under Boston Harbor; and the Red Line because its northernmost station was, at that time, at Harvard University, whose school color is crimson.

Opened in September 1897, the four-track-wide segment of the Green Line tunnel between Park Street and Boylston stations was the first subway in the United States, and has been designated a National Historic Landmark. The downtown portions of what are now the Green, Orange, Blue, and Red line tunnels were all in service by 1912. Additions to the rapid transit network occurred in most decades of the 1900s, and continue in the 2000s with the addition of Silver Line bus rapid transit and planned Green Line expansion. (See History and Future plans sections.)

Buses

The MBTA bus system, the nation's sixth largest by ridership, has 152 bus routes. Most routes provide local service in the urban core; smaller local networks are also centered around Waltham, Lynn, and Quincy. The system also includes longer routes serving less-dense suburbs, including several express routes. The buses are colored yellow on maps and in station decor. Most routes are directly operated by the MBTA, though several suburban routes are run by private operators under contract to the MBTA.

The Silver Line is also operated as part of the MBTA bus system. It is designated as bus rapid transit (BRT), even though it lacks some of the characteristics of bus rapid transit. Two routes run on Washington Street between Nubian station and downtown Boston. Three "waterfront" routes run in a dedicated tunnel in South Boston and on the surface elsewhere, including the SL1 route that serves Logan Airport. Washington Street service, a belated replacement for the Washington Street Elevated, began in 2002 and was expanded in 2009. Waterfront service began in 2004, with an expansion to Chelsea opened in 2018.

MBTA predecessors formerly operated a large trolleybus network, much of which replaced surface streetcar lines. Four lines based out of Harvard station lasted until 2022, when they were replaced with conventional buses.
Three Silver Line routes operated as trolleybuses in the Waterfront Tunnel using dual-mode buses until these were replaced with hybrid battery buses in 2023. Commuter rail The MBTA Commuter Rail system is a commuter rail network that reaches from Boston into the suburbs of eastern Massachusetts. The system consists of twelve main lines, three of which have two branches. The rail network operates according to a spoke-hub distribution paradigm, with the lines running radially outward from the city of Boston over hundreds of miles of revenue trackage. Eight of the lines converge at South Station, with four of these passing through Back Bay station. The other four converge at North Station. There is no passenger connection between the two sides; the Grand Junction Railroad is used for non-revenue equipment moves accessing the maintenance facility. The North–South Rail Link has been proposed to connect the two halves of the system; it would be constructed under the Central Artery tunnel of the Big Dig. Special MBTA trains are run over the Franklin/Foxboro Line and the Providence/Stoughton Line to Foxborough station for New England Patriots home games and other events at Gillette Stadium. The CapeFLYER intercity service, operated on summer weekends, uses MBTA equipment and operates over the Middleborough/Lakeville Line. Amtrak runs regularly scheduled intercity rail service over four lines: the Lake Shore Limited over the Framingham/Worcester Line, Acela Express and Northeast Regional services over the Providence/Stoughton Line, and the Downeaster over sections of the Lowell Line and Haverhill Line. Freight trains run by Pan Am Southern, Pan Am Railways, CSX Transportation, the Providence and Worcester Railroad, and the Fore River Railroad also use parts of the network. The first commuter rail service in the United States was operated over what is now the Framingham/Worcester Line beginning in 1834. Within the next several decades, Boston was the center of a massive rail network, with eight trunk lines and dozens of branches. By 1900, ownership was consolidated under the Boston and Maine Railroad to the north, the New York Central Railroad to the west, and the New York, New Haven and Hartford Railroad to the south. Most branches and one trunk line – the former Old Colony Railroad main – had their passenger services discontinued during the middle of the 20th century. In 1964, the MBTA was formed to fund the failing suburban railroad operations, with an eye towards converting many to extensions of the existing rapid transit system. The first unified branding of the system was applied on October 8, 1974, with "MBTA Commuter Rail" naming and purple coloration analogous to the four subway lines' colors. The system continued to shrink – mostly with the loss of marginal lines with one daily round trip – until 1981. The system has been expanded since, with four lines restored (the Fairmount Line in 1979, the Old Colony Lines in 1997, and the Greenbush Line in 2007), six extended, and a number of stations added and rebuilt, especially on the Fairmount Line. Each commuter rail line has up to eleven fare zones, numbered 1A and 1 through 10. Riders are charged based on the number of zones they travel through. Tickets can be purchased on the train, from ticket counters or machines in some rail stations, or with a mobile app called mTicket. If a local vendor or ticket machine is available, riders pay a surcharge for paying with cash on board.
Fares range from $2.40 to $13.25, with multi-ride and monthly passes available, as well as $10 unlimited weekend passes. In 2016, the system averaged 122,600 daily riders, making it the fourth-busiest commuter rail system in the nation. Ferries The MBTA boat system comprises several ferry routes via Boston Harbor. One of these is an inner harbor service, linking the downtown waterfront with the Boston Navy Yard in Charlestown. The other routes are commuter routes, linking downtown to Hingham, Hull, and Salem. Some commuter services operate via Logan International Airport. All boat services are operated by private sector companies under contract to the MBTA. In FY2005, the MBTA boat system carried 4,650 passengers (0.41% of total MBTA passengers) per weekday. The service is provided under contract to the MBTA by Boston Harbor Cruises (BHC). Paratransit The MBTA contracts out operation of "The Ride", a door-to-door service for people with disabilities. Paratransit services carry 5,400 passengers on a typical weekday, or 0.47% of the MBTA system ridership. The two private service providers under contractual agreement with the MBTA for The Ride are Veterans Transportation LLC and National Express Transit (NEXT). In September 2016, the MBTA announced that paratransit users would be able to get rides from Uber and Lyft. Riders would pay $2 for a pickup within a few minutes (more for longer trips worth more than $15) instead of $3.15 for a scheduled pickup the next day. The MBTA would pay $13 instead of $31 per ride ($46 per trip when fixed costs of The Ride are considered). Bicycles Conventional bicycles are generally allowed on MBTA commuter rail, commuter boat, and rapid transit lines during off-peak hours and all day on weekends and holidays. However, bicycles are not allowed at any time on the Green Line or the Mattapan Line segment of the Red Line. Buses equipped with bike racks at the front (including the Silver Line) can accommodate bicycles at all times, up to the capacity limit of the racks. The MBTA claims that 95% of its buses are now equipped with bike racks. Due to congestion and tight clearances, bicycles are banned from Park Street, Downtown Crossing, and Government Center stations at all times. However, compact folding bicycles are permitted on all MBTA vehicles at all times, provided that they are kept completely folded for the duration of the trip, including passage through faregates. Gasoline-powered vehicles, bike trailers, and Segways are prohibited. No special permit is required to take a bicycle onto an MBTA vehicle, but bicyclists are expected to follow the rules and hours of operation. Cyclists under 16 years old must be accompanied by a parent or legal guardian. Detailed rules, and an explanation of how to use front-of-bus bike racks and bike parking, are on the MBTA website. The MBTA says that over 95% of its stations are equipped with bike racks, many of them under cover from the weather. In addition, over a dozen stations are equipped with "Pedal & Park" fully enclosed areas protected with video surveillance and controlled door access, for improved security. To obtain access, a personally registered CharlieCard must be used. Registration is done online, and requires a valid email address and the serial number of the CharlieCard. All bike parking is free of charge. Parking The MBTA operates park and ride facilities at 103 locations with a total capacity of 55,000 automobiles, and is the owner of the largest number of off-street paid parking spaces in New England.
The number of spaces at stations with parking varies from a few dozen to over 2,500. The larger lots and garages are usually near a major highway exit, and most lots fill up during the morning rush hour. There are some 22,000 spaces on the southern portion of the commuter rail system, 9,400 on the northern portion and 14,600 at subway stations. The parking fee ranges from $4 to $7 per day, and overnight parking (maximum 7 days) is permitted at some stations. Management of a number of parking lots owned by the MBTA is handled by a private contractor. The 2012 contract with LAZ Parking (which was not its first) was terminated in 2017 after employees were discovered "skimming" revenue; the company paid $5.5 million to settle the case. A new contract with stronger performance incentives and anti-fraud penalties was then awarded to Republic Parking System of Tennessee. Customers parking in MBTA-owned and operated lots with existing cash "honor boxes" can pay for parking online or via phone while in their cars or once they board a train, bus, or commuter boat. The MBTA has switched from ParkMobile to PayByPhone as its provider for mobile parking payments by smartphone. Monthly parking permits are available, offering a modest discount. Detailed parking information by station is available online, including prices, estimated vacancy rate, and number of accessible and bicycle parking slots. The MBTA has a policy for electric vehicle charging stations in its parking spaces, but does not yet have such facilities available. From time to time the MBTA has made various agreements with companies that contribute to commuting options. One company the MBTA selected was Zipcar; the MBTA provides Zipcar with a limited number of parking spaces at various subway stations throughout the system. Hours of operation Traditionally, the MBTA has stopped running around 1 a.m. each day. Like many subways worldwide, the MBTA's subway does not have parallel express and local tracks, so much rail maintenance can only be done when the trains are not running. An MBTA spokesperson has said that "with a 109-year-old system you have to be out there every night" to do necessary maintenance. The MBTA did experiment with "Night Owl" substitute bus service from 2001 to 2005, but abandoned it because of insufficient ridership, citing a $7.53 per-rider cost to keep the service open, five times the cost per passenger of an average bus route. A modified form of the MBTA's previous "Night Owl" service was experimentally reinstated starting in the spring of 2014 – this time, all subway lines were proposed to run until 3 a.m. on weekends, along with the 15 most heavily used bus lines and the paratransit service "The Ride". Starting March 28, 2014, the late-night service began operation on a one-year trial basis, with service continuation depending on late-night ridership and on possible corporate sponsorship. Late-night ridership proved stable, and much higher than on the earlier failed experimental service, but it remained unclear whether and on what basis the program might be extended past its first year. The extended hours program was not implemented on MBTA commuter rail operations. In early 2016, the MBTA decided that late-night service would be canceled because of lack of funding; the last night of late-night service was March 19, 2016, with the final train leaving at 2 a.m. In 2018, the MBTA further tried "Early Morning and Late Night Bus Service Pilots".
In June 2019, a year after the trials, the board voted to make some changes to the schedule that would allow further late-night service to be incorporated in the long term. Funding Fares and fare collection The MBTA has various fare structures for its various types of service. The plastic CharlieCard electronic farecard is accepted only on the subway and bus systems. Subway and bus systems also directly accept contactless payment via contactless credit card, Apple Pay, or Google Pay using the Charlie system. Commuter rail and ferry accept paper CharlieTickets and the mTicket mobile app. Only buses, surface trolleys, and Commuter Rail accept cash on board, which is discouraged (with a $3 fee on Commuter Rail for stations with fare vending machines). Passengers pay for subway and bus rides at faregates in station entrances or fareboxes in the front of vehicles; MBTA employees manually check tickets on the commuter rail and ferries. For paratransit service, instead of physical fare media, passengers maintain an account to which funds can be added by web site, phone, mail, or in-person visit. Trips on The Ride are booked in advance online or by phone, or subsidized on-demand trips can be requested via Uber or Lyft on those companies' mobile apps. Starting June 22, 2020, the short, urban Fairmount Line was incorporated into the subway fare structure in a pilot program that also started running weekday trips every 45 minutes. In addition to the usual Commuter Rail fare media, CharlieCards are now accepted by tapping at fare vending machines and obtaining proof of payment. Since the 1980s, the MBTA has offered discounted monthly passes on all modes for the convenience of daily commuters and other frequent riders. As of March 2022, it also offers one-day and seven-day passes (often used by tourists) for subway, bus, inner-harbor ferry, and Commuter Rail Zone 1A. Only the CharlieTicket versions of these passes are accepted on all modes. Single-ride CharlieTickets, weekend passes, 5-ride passes, and the mobile app used for the ferries and commuter rail are not accepted for transfers to buses or subways. The MBTA has periodically raised fares to match inflation and keep the system financially solvent. A substantial increase effective July 2012 raised public ire, including an "Occupy the MBTA" protest. A transportation funding law passed in 2013 limits MBTA fare increases to 7% every two years. Subsequent fare increases took place in 2014, 2016, and 2019. Several local politicians, including Boston Mayor Michelle Wu, Representative Ayanna Pressley, and Senator Edward J. Markey, have proposed to eliminate MBTA fares. The ongoing "Fare Transformation" project adds contactless credit cards, Apple Pay, and Google Pay as payment methods for all subway and bus lines so passengers will not need to purchase a CharlieCard or CharlieTicket; this system was activated on August 1, 2024. It also adds all-door boarding on all buses and surface trolleys, using a proof-of-payment system. A new website is planned to allow passengers and employers to perform self-service CharlieCard transactions. "Fare Transformation", originally scheduled to be completed in 2021 under the name "AFC 2.0", is now expected to be completed in 2025. Subway and bus All subway trips (Green Line, Blue Line, Orange Line, Red Line, Mattapan Line, and the Waterfront section of the Silver Line) cost $2.40 for all users.
Local bus and trackless trolley fares (including the Washington Street section of the Silver Line) are $1.70 for all users. Paying directly with cash is only available on buses, Green Line surface stops, and the Mattapan Line; from 2007 to 2020, the higher CharlieTicket price was charged for cash payment. All transfers between subway lines are free with all fare media, without the need to pass through fare control (except when continuing in either direction at Ashmont station). Passengers using CharlieCards can transfer free from a subway to a bus, and from a bus to a subway for the difference in price ("step-up fare"). CharlieTicket holders can transfer free between buses, but not between subway and bus, except that free subway transfers are given from the Silver Line at Airport station and on the SL4/SL5 branches. The MBTA operates "Inner Express" and "Outer Express" buses to suburbs outside the subway system. Inner Express bus trips cost $4.25; Outer Express trips cost $5.25. Free transfers are available to the subway and local buses with a CharlieCard, and to local buses with a CharlieTicket. CharlieTickets are available from ticket vending machines in MBTA rapid transit stations. Following the installation of upgraded fare vending machines in July 2022, CharlieCards are now available for purchase at all subway and Silver Line stations. CharlieCards were not previously dispensed by the machines but were available free of charge on request at most MBTA Customer Service booths in stations, or at the CharlieCard Store at Downtown Crossing station. As given out, the CharlieCards are "empty", and must have value added at an MBTA ticket machine before they can be used. The fare system, including on-board and in-station fare vending machines, was purchased from the German firm Scheidt & Bachmann, which developed the technology. The CharlieCards were developed by Gemalto and later by Giesecke & Devrient. In 2006, electronic fares replaced metal tokens, which had been used on and off by transit systems in Boston for over a century. Upon introduction in 2007, fares for reloadable CharlieCard contactless smart cards were set substantially lower, to encourage riders to use them. The alternative magnetic-stripe CharlieTickets were not as durable (and so could only be loaded once), were slower to read, and required maintenance of machines with moving parts. In 2020, the MBTA started implementation of its "Fare Transformation" program, reducing cash-on-board and CharlieTicket prices to the CharlieCard level. In the fall of that year, the agency started upgrading a portion of faregates at all stations to accept only contactless cards, in anticipation of the phase-out of paper CharlieTickets, which occurred on March 31, 2022. The gates also feature an optical reader, which is currently unused but is capable of scanning QR codes or bar codes, such as those generated by the mTicket app. Installation of upgraded fare vending machines was completed in July 2022, allowing riders to purchase CharlieCards and the new tappable CharlieTickets at any rapid transit station. These also serve as fare validation points for proof of payment on the Green Line Extension. As of July 1, 2022, two free transfers are given to CharlieCard stored-value users for all combinations of subway, bus, and express bus rides. Subway and bus fare history One notable experiment was "Dime Time", a reduced fare program for all persons entering rapid transit stations between 10 a.m. and 1 p.m., Monday through Friday. Its hours were later extended, on weekdays to 2 p.m.
and to all day Sunday in 1974; the program ended on July 31, 1975. Until 2007, not all subway fares were identical – passengers were not charged for boarding outbound Green Line trains at surface stops, while double fares were charged at the outer ends of the Green Line D branch and the Red Line Braintree branch. As part of a general fare hike effective January 1, 2007, the MBTA eliminated these inconsistent fares. Because there was no farebox at the left-facing door, passengers on the 71 and 73 trolleybuses in Cambridge who boarded through that door underground at Harvard station instead paid the only remaining exit fare in the system. This was eliminated starting March 13, 2022, when the trackless trolleys were replaced by conventional buses to allow the Cambridge garage to be converted to service battery-electric buses. Commuter Rail Commuter rail fares are on a zone-based system, with fares dependent on the distance from downtown. Rides between Zone 1A stations – South Station, Back Bay, most of the Fairmount Line, and eight other stations within several miles of downtown – cost $2.40, the same as a subway fare with a CharlieCard. Fares for other stations range from $5.75 for Zone 1 (~5–10 miles from downtown) to $14.50 for Zone 10 (~60 miles). All Massachusetts stations are in Zone 8 or closer; only T.F. Green Airport and Wickford Junction in Rhode Island are in Zones 9 and 10. Interzone fares – for trips that do not go to Zone 1A – are offered at a substantial discount, to encourage riders to take the commuter rail for less common commuting patterns for which transit is not usually taken. Discounted monthly passes are available for all trips; 10-ride passes at full price are also available for trips to Zone 1A. All monthly passes include unlimited trips on the subway and local bus; some outer-zone monthlies also offer free use of express buses and ferries. A cash-on-board surcharge of $3.00 is added for trips originating from stations with fare vending machines. Starting in spring 2022, the MBTA began installing fare gates at North Station, South Station, and Back Bay station as part of its "Fare Transformation" project. These three stations are the start and end points of the vast majority of Commuter Rail trips, and the gates eliminate the possibility of passengers boarding without tickets or without having a single-use ticket invalidated (though conductors still manually verify that passengers leave the train in the zone they paid for). A common complaint from monthly pass holders was that on-board conductors would sometimes fail to check any tickets for their car, giving a free ride to single-ride and cash-on-board passengers. The new gates have scanners for bar codes on paper tickets, the mTicket app, Amtrak tickets, and military IDs. They also have a reader for tappable CharlieTickets (and CharlieCards, to prepare for potential future use on the Commuter Rail). MBTA boat The Inner Harbor Ferry costs $3.25 per ride, and is grouped with Zone 1A for monthly commuter rail passes. Single rides cost $8.50 from Hull or Hingham to Boston, $17.00 from Hull or Hingham to Logan Airport, and $13.75 from Boston to Logan Airport. The Ride Fares on The Ride, the MBTA's paratransit program, are structured differently from other modes. Passengers using The Ride must maintain an account with the MBTA in order to pay for service. Fares are $3.35 for "ADA trips" originating within three-quarters of a mile of fixed-route bus or subway service (the federally mandated paratransit area) and booked in advance, and $5.60 for "premium trips" outside the mandated area.
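The zone-fare and surcharge rules described above are simple enough to capture in a few lines of code. The sketch below is purely illustrative (the function and names are hypothetical, not an MBTA system), and it fills in only the fares this section actually quotes:

    # Hypothetical sketch of the zone-based Commuter Rail fares described above.
    # Only fares explicitly quoted in this section are filled in.
    ZONE_FARES = {
        "1A": 2.40,   # quoted: same as a CharlieCard subway fare
        "1": 5.75,    # quoted: ~5-10 miles from downtown
        "10": 14.50,  # quoted: ~60 miles (Wickford Junction, RI)
        # Zones 2-9 lie between these endpoints; their prices are not given here.
    }

    CASH_SURCHARGE = 3.00  # quoted: cash on board, boarding at a station with fare vending machines

    def one_way_fare(zone: str, cash_on_board: bool = False,
                     station_has_vending_machine: bool = True) -> float:
        """Return the one-way fare for a trip between the given zone and Zone 1A."""
        fare = ZONE_FARES[zone]  # raises KeyError for zones this sketch omits
        if cash_on_board and station_has_vending_machine:
            fare += CASH_SURCHARGE
        return fare

    # Example: a Zone 1 rider paying cash where ticket machines are available.
    print(one_way_fare("1", cash_on_board=True))  # prints 8.75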
Discounted fares, as well as discounted monthly local bus and subway passes, are available to seniors aged 65 and older and to passengers who are permanently disabled, using a special photo CharlieCard (called "Senior ID" and "Transportation Access Pass", respectively). Holders of these passes are also entitled to 50% off Commuter Rail fares. Passengers who are legally blind ride for free on all MBTA services (including express buses and the Commuter Rail) with a "Blind Access Card". Children under 12 ride for free with an adult (up to 2 per adult). Military personnel, state police officers, police officers and firefighters from the MBTA service area, and certain government officials (Commonwealth Department of Public Utilities employees and state elevator inspectors) ride at no charge upon presentation of proper ID, or if dressed in official work uniforms. Middle school and high school students receive the aforementioned discounts on fares. Student discounts require a "Student CharlieCard" or "S-Card" issued through the holder's school, which is valid year-round. College students are not generally eligible for reduced fares, but some colleges offer a "Semester Pass" program. A special "Youth Pass" program was introduced in 2017, allowing young adults under 25 years old who reside in participating cities or towns and are enrolled in specific low-income programs to pay reduced fares. Employer and college subsidized Federal law allows employers to deduct the cost of transit passes from wages on a pre-tax basis. Some employers and colleges also choose to subsidize the cost of these passes for employees or students. The MBTA has long had a program that facilitates these bulk purchases of monthly passes. In 2016, it began allowing MIT to subsidize on a per-ride basis, which is considerably cheaper for the institution; this expanded to other employers in 2022. Budget Since the "forward funding" reform in 2000, the MBTA has been funded primarily through a dedicated 16% share of the state sales tax, excluding the meals tax (with a minimum dollar amount guarantee). The sales tax is set at 6.25% statewide, so this share is equal to 1% of taxable non-meal purchases statewide. The authority is also funded by passenger fares and formula assessments of the cities and towns in its service area (excepting those which are assessed for the MetroWest Regional Transit Authority). Supplemental income is obtained from its paid parking lots, renting space to retail vendors in and around stations, rents from utility companies using MBTA rights of way, selling surplus land and movable property, advertising on vehicles and properties, and federal operating funding for special programs. A May 2019 report found the MBTA had a maintenance backlog of approximately $10 billion, which it hopes to clear by 2032 by increasing spending on capital projects. The Capital Investment Program is a rolling 5-year plan which programs capital expenses. The draft FY2009–2014 CIP allocates $3,795M, including $879M in projects funded from non-MBTA state sources (required for Clean Air Act compliance), and $299M in projects with one-time federal funding from the American Recovery and Reinvestment Act of 2009. Capital projects are paid for by federal grants, allocations from the general budget of the Commonwealth of Massachusetts (for legal commitments and expansion projects) and MBTA bonds (which are paid off through the operating budget). The FY2014 budget includes $1.422 billion for operating expenses and $443.8M in debt and lease payments.
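As a quick check of the funding arithmetic above (a restatement of figures already given, not additional data):

    16% × 6.25% = 0.16 × 0.0625 = 0.01 = 1%,

so the MBTA's dedicated share of the sales tax does equal 1% of taxable non-meal purchases statewide.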
The FY2010 budget was supplemented by $160 million in sales tax revenue when the statewide rate was raised from 5% to 6.25%, to avoid service cuts or a fare increase in a year when deferred debt payments were coming due. Capital improvements and planning process The Boston Metropolitan Planning Organization is responsible for overall regional surface transportation planning. As required by federal law for projects to be eligible for federal funding (except earmarks), the MPO maintains a fiscally constrained 20+ year Regional Transportation Plan for surface transportation expansion, the current edition of which is called Journey to 2030. The required 4-year MPO plan is called the Transportation Improvement Plan. The MBTA maintains its own 25-year capital planning document, called the Program for Mass Transportation, which is fiscally unconstrained. The agency's 4-year plan is called the Capital Improvement Plan; it is the primary mechanism by which money is actually allocated to capital projects. Major capital spending projects must be approved by the MBTA Board, and except for unexpected needs, are usually included in the initial CIP. In addition to federal funds programmed through the Boston MPO, and MBTA capital funds derived from fares, sales tax, municipal assessments, and other minor internal sources, the T receives funding from the Commonwealth of Massachusetts for certain projects. The state may fund items in the State Implementation Plan (SIP) – such as the Big Dig mitigation projects – which is the plan required under the Clean Air Act to reduce air pollution. (All of Massachusetts is designated as a clean air "non-attainment" zone.) Projects underway and future plans Blue Line There is a proposal to extend the Blue Line northward to Lynn, with two potential extension routes having been identified. One proposed path would run through marshland alongside the existing Newburyport/Rockport commuter rail line, while the other would extend the line along the remainder of the BRB&L right of way. In addition, the MBTA has committed to designing an extension of the line's southern terminus westward to Charles/MGH, where it would connect with the Red Line. This was one of the mitigation measures the Commonwealth of Massachusetts agreed to in order to offset increased automobile emissions from the Big Dig, but it was later replaced in this agreement by other projects. Orange and Red Lines In October 2013, MassDOT announced plans for a $1.3 billion subway car order for the Orange and Red Lines, which would replace and expand the existing car fleets and add more frequent service. The MassDOT Board awarded a $566.6 million contract to the China-based manufacturer CNR (which became part of CRRC the following year) to build 404 replacement railcars for the Orange Line and Red Line. The other bidders were Bombardier Transportation, Kawasaki Heavy Industries and Hyundai Rotem. The Board forwent federal funding to allow the contract to specify that the cars be built in Massachusetts, in order to create a local railcar manufacturing industry. CNR began assembling the cars at a new manufacturing plant in Springfield, Massachusetts, with initial deliveries expected in 2018 and all cars in service by 2023. However, by the beginning of 2023, only 78 of the promised 152 Orange Line cars had been delivered.
In a letter dated December 22, 2022, from the MBTA's Deputy Director of Vehicle Engineering to CRRC, the MBTA complained of "several significant lapses in overall quality management for the Red and Orange Line project" with "no meaningful progress...made by CRRC to address these concerns despite several commitments by CRRC's Management to address these over the period of the last several years". On January 6, 2023, the MBTA announced its intention to keep the older Orange Line cars as "a backup plan". In addition to the new rolling stock, the $1.3 billion allocated for the project will pay for testing, signal improvements and expanded maintenance facilities, as well as other related expenses. Sixty percent of the cars' components are sourced from the United States. Replacement of the signal systems, which will increase reliability and allow more frequent trains, was expected to be complete by 2022, with a total cost of $218 million for both lines. As of the end of 2022, the project was described by the MBTA as "48% complete". Commuter rail The first phase of the South Coast Rail project began construction in 2020 and is planned to open in May 2025. It will extend the Middleborough/Lakeville Line to Fall River and New Bedford. The second phase of the project, planned for 2030, will add a more direct routing via Stoughton. The MBTA plans to convert the system from diesel-powered commuter rail – which is primarily designed for Boston-centric trips at peak hours – to an electric regional rail system with frequent all-day service. In June 2022, the MBTA indicated plans to purchase battery electric multiple units, with catenary for charging on part of the network. Plans call for electric service on the Providence/Stoughton Line and Fairmount Line by 2028–29, followed by the Newburyport/Rockport Line in 2031; all lines would be electrified by 2050. No direct connection exists between the two downtown commuter rail terminals; passengers must use the MBTA subway or other modes to transfer between the two halves of the system. (For non-revenue transfers of equipment, the MBTA and Amtrak use the Grand Junction Branch.) The proposed North–South Rail Link would add a new rail tunnel under downtown Boston to allow through-running service, with new underground stations at South Station, North Station, and possibly a new Central Station. A feasibility study was conducted in 2018. Two other extensions of existing lines have been studied in the 2020s: an extension of the Middleborough/Lakeville Line toward Cape Cod, and an extension of the Lowell Line to New Hampshire. MBTA Massachusetts Realty Group As one of the largest landowners in the Commonwealth, the MBTA established a joint public-private management agency to manage its vast inventory of property holdings and land. This allows the transit authority to work with entities seeking right-of-way (ROW) grants on property which the MBTA administers. The agency assists with processing all ROW applications as efficiently and economically as possible, and authorizes these grants at the authorized officer's discretion. Generally, a ROW is granted to provide an additional stream of revenue to the MBTA outside of normal fare revenue. The agency additionally facilitates persons or organizations wanting to provide concessions or public advertising, and the awarding of property easements. Occasionally, surplus or under-utilized property under the MBTA real estate agency's responsibility is disposed of through bidding.
This may include lands formerly in use as part of the state's streetcar network, equipment depots, electric substations, former railroad lines and yards, or other properties. Given its extensive long-haul rail routes, the MBTA also works with long-distance telecom and utility providers, authorizing the use of pieces of public land for ROW projects, including renewable energy installations, electric power lines and energy corridors, fiber-optic lines, communications sites, and road, trail, canal, flume, pipeline or reservoir uses. Management and administration Structure In 2015, Massachusetts Governor Charlie Baker signed new legislation creating a financial control board to oversee the MBTA, replacing the Massachusetts Department of Transportation's Board of Directors in the role of overseeing the transit authority. The Fiscal and Management Control Board (FMCB) started meeting in July 2015 and was charged with bringing financial stability to the agency. It reported to Massachusetts Secretary of Transportation Stephanie Pollack. Three of the five members of the MBTA FMCB were also members of the MassDOT Board of Directors. The FMCB's term expired at the end of June 2021 and was not extended. It was dissolved and replaced by a new governing body known simply as the MBTA Board of Directors and consisting of seven members. The Massachusetts Secretary of Transportation leads the executive management team of MassDOT in addition to serving in the Governor's Cabinet. The MBTA's executive management team is led by its General Manager, who is currently also serving as the MassDOT Rail and Transit Administrator, overseeing all public transit in the state. The MBTA Advisory Board represents the cities and towns in the MBTA service district. The municipalities are assessed a total of $143M annually. In return, the advisory board has veto power over the MBTA operating and capital budgets, including the power to reduce the overall amount. The MBTA is headquartered in the State Transportation Building (10 Park Plaza) in Boston, with the operations control center at 45 High Street. The agency operates service from a number of bus garages, rail yards, and maintenance facilities. The MBTA maintains its own police force, the Massachusetts Bay Transportation Authority Police, which has jurisdiction in MBTA facilities and vehicles. Board of directors The seven members of the board are as follows: Thomas P. Glynn, Chair; Monica Tibbits-Nutt, Secretary of Transportation; Thomas P. Koch, Mayor of Quincy, Vice Chair; Robert Butler; Eric L. Goodwine; Thomas M. McGee; Charlie Sisitsky, Mayor of Framingham; Chanda Smart. MassDOT Board of Directors The eleven members of the committee are as follows: Monica Tibbits-Nutt, Chair, Secretary (head) of MassDOT; Joseph Beggan; Ilyas Bhatti; Richard A. Dimino; Lisa I. Iezzoni; Timothy King; Thomas Koch; Dean Mazzarella; Tom McGee; Vanessa Otero. General managers The list of MBTA general managers is as follows: Thomas McLernon: 1960–1965; Rush B. Lincoln Jr.: 1965–1967; Leo J. Cusick: 1967–1970; Joseph C. Kelly (acting): 1970; Joseph C. Kelly: 1970–1975; Bob Kiley: 1975–1979 (as chairman/CEO); Robert Foster: 1979–1980 (as chairman/CEO); Barry Locke: 1980–1981 (as chairman/CEO); James O'Leary: 1981–1989; Thomas P. Glynn: 1989–1991; John J. Haley Jr.: 1991–1995; Patrick Moynihan: 1995–1997; Robert H. Prince: 1997–2001; Michael H. Mulhern: 2002–2005; Daniel Grabauskas: 2005–2009; Richard A. Davey: 2010–2011; Jonathan Davis (interim): 2011–2012; Beverly A.
Scott: 2012–2015; Frank DePaola (interim): 2015–2016; Brian Shortsleeve (acting): 2016–2017; Steve Poftak (interim): 2017; Luis Manuel Ramírez: 2017–2018; Jeff Gonneville (interim): 2018; Steve Poftak: 2019–2022; Jeff Gonneville (interim): 2023; Phillip Eng: April 10, 2023–present. Employees and unions The MBTA employs 6,346 workers, of whom roughly 600 are in part-time jobs. Many MBTA employees are represented by unions, with a growing number of full-time non-union contractors. The largest union at the MBTA is the Carmen's Union (Local 589), representing bus and subway operators. This includes full- and part-time bus drivers, subway and streetcar motorpersons, full- and part-time train attendants, and Customer Service Agents (CSAs). Further unions include the Machinists Union, Local 264; Electrical Workers Union, Local 717; the Welder's Union, Local 651; the Executive Union; the Office and Professional Employees International Union, Local 453; the Professional and Technical Engineers Union, Local 105; and the Office and Professional Employees Union, Local 6. Within the authority, employees are ranked according to seniority (or "rating"), reflected in an employee's five- or six-digit badge number, though some of the longest-serving employees still have only three- or four-digit numbers. An employee's badge number indicates the relative length of employment with the MBTA; badges are issued in sequential order. The rating structure determines the order in which perks are offered to employees, such as the choice of quarter-annual route assignments ("picks"), overtime offerings, and even the order in which new hires are transferred from part-time roles to full-time roles. In popular culture In 1951, the growing subway network was the setting of "A Subway Named Mobius", a science fiction short story written by the American astronomer Armin Joseph Deutsch. The tale described a Boston subway train which accidentally became a "phantom" by becoming lost in the fourth dimension, analogous to a topological Möbius strip. In 2001, a half-century later, the story was nominated for a Retro Hugo Award for Best Short Story at the World Science Fiction Convention. In 1959, the satirical song "M.T.A." (informally known as "Charlie on the MTA") was a hit single, as performed by the folksingers the Kingston Trio. It tells the absurd story of a passenger named Charlie, who cannot pay a newly imposed 5-cent exit fare, and thus remains trapped in the subway system.
Technology
United States
null
19870
https://en.wikipedia.org/wiki/Meson
Meson
In particle physics, a meson is a type of hadronic subatomic particle composed of an equal number of quarks and antiquarks, usually one of each, bound together by the strong interaction. Because mesons are composed of quark subparticles, they have a meaningful physical size, a diameter of roughly one femtometre (10^−15 m), which is about 0.6 times the size of a proton or neutron. All mesons are unstable, with the longest-lived lasting for only a few hundredths of a microsecond. Heavier mesons decay to lighter mesons and ultimately to stable electrons, neutrinos and photons. Outside the nucleus, mesons appear in nature only as short-lived products of very high-energy collisions between particles made of quarks, such as cosmic rays (high-energy protons and neutrons) and baryonic matter. Mesons are routinely produced artificially in cyclotrons or other particle accelerators in the collisions of protons, antiprotons, or other particles. Higher-energy (more massive) mesons were created momentarily in the Big Bang, but are not thought to play a role in nature today. However, such heavy mesons are regularly created in particle accelerator experiments that explore the nature of the heavier quarks that compose the heavier mesons. Mesons are part of the hadron particle family, which are defined simply as particles composed of two or more quarks. The other members of the hadron family are the baryons: subatomic particles composed of odd numbers of valence quarks (at least three). Some experiments show evidence of exotic mesons, which do not have the conventional valence quark content of two quarks (one quark and one antiquark), but four or more. Because quarks have spin 1/2, the difference in quark number between mesons and baryons results in conventional two-quark mesons being bosons, whereas baryons are fermions. Each type of meson has a corresponding antiparticle (antimeson) in which quarks are replaced by their corresponding antiquarks and vice versa. For example, a positive pion (π+) is made of one up quark and one down antiquark; and its corresponding antiparticle, the negative pion (π−), is made of one up antiquark and one down quark. Because mesons are composed of quarks, they participate in both the weak interaction and the strong interaction. Mesons with net electric charge also participate in the electromagnetic interaction. Mesons are classified according to their quark content, total angular momentum, parity and various other properties, such as C-parity and G-parity. Although no meson is stable, those of lower mass are nonetheless more stable than the more massive, and hence are easier to observe and study in particle accelerators or in cosmic ray experiments. The lightest group of mesons is less massive than the lightest group of baryons, meaning that they are more easily produced in experiments, and thus exhibit certain higher-energy phenomena more readily than do baryons. But mesons can be quite massive: for example, the J/Psi meson (J/ψ), containing the charm quark and first seen in 1974, is about three times as massive as a proton, and the upsilon meson (ϒ), containing the bottom quark and first seen in 1977, is about ten times as massive as a proton. History From theoretical considerations, in 1934 Hideki Yukawa predicted the existence and the approximate mass of the "meson" as the carrier of the nuclear force that holds atomic nuclei together. If there were no nuclear force, all nuclei with two or more protons would fly apart due to electromagnetic repulsion.
Yukawa called his carrier particle the meson, from μέσος (mesos), the Greek word for "intermediate", because its predicted mass was between that of the electron and that of the proton, which has about 1,836 times the mass of the electron. The particle had originally been named the "mesotron" (a coinage attributed variously to Yukawa and to Carl David Anderson, who discovered the muon), but this was corrected by the physicist Werner Heisenberg (whose father was a professor of Greek at the University of Munich). Heisenberg pointed out that there is no "tr" in the Greek word "mesos". The first candidate for Yukawa's meson, in modern terminology known as the muon, was discovered in 1936 by Carl David Anderson and others in the decay products of cosmic ray interactions. The "mu meson" had about the right mass to be Yukawa's carrier of the strong nuclear force, but over the course of the next decade, it became evident that it was not the right particle. It was eventually found that the "mu meson" did not participate in the strong nuclear interaction at all, but rather behaved like a heavy version of the electron, and it was eventually classed as a lepton like the electron, rather than a meson. In making this choice, physicists decided that properties other than particle mass should control their classification. There were years of delays in subatomic particle research during World War II (1939–1945), with most physicists working in applied projects for wartime necessities. When the war ended in August 1945, many physicists gradually returned to peacetime research. The first true meson to be discovered was what would later be called the "pi meson" (or pion). During 1939–1942, Debendra Mohan Bose and Bibha Chowdhuri exposed Ilford half-tone photographic plates in the high-altitude mountainous regions of Darjeeling, and observed long curved ionizing tracks that appeared to be different from the tracks of alpha particles or protons. In a series of articles published in Nature, they identified a cosmic particle having an average mass close to 200 times the mass of the electron. The discovery of the pion was completed in 1947, with improved full-tone photographic emulsion plates, by Cecil Powell, Hugh Muirhead, César Lattes, and Giuseppe Occhialini, who were investigating cosmic ray products at the University of Bristol in England, based on photographic films placed in the Andes mountains. Some of those mesons had about the same mass as the already-known mu "meson", yet seemed to decay into it, leading physicist Robert Marshak to hypothesize in 1947 that it was actually a new and different meson. Over the next few years, more experiments showed that the pion was indeed involved in strong interactions. The pion (as a virtual particle) is also used as a force carrier to model the nuclear force in atomic nuclei (between protons and neutrons). This is an approximation, as the actual carrier of the strong force is believed to be the gluon, which is explicitly used to model the strong interaction between quarks. Other mesons, such as the virtual rho mesons, are used to model this force as well, but to a lesser extent. Following the discovery of the pion, Yukawa was awarded the 1949 Nobel Prize in Physics for his predictions. For a time, the word meson was sometimes used to mean any force carrier, such as "the Z meson", which is involved in mediating the weak interaction. However, this use has fallen out of favor, and mesons are now defined as particles composed of pairs of quarks and antiquarks.
Overview Spin, orbital angular momentum, and total angular momentum Spin (quantum number S) is a vector quantity that represents the "intrinsic" angular momentum of a particle. It comes in increments of 1/2 ħ. Quarks are fermions—specifically in this case, particles having spin 1/2 (S = 1/2). Because spin projections vary in increments of 1 (that is, 1 ħ), a single quark has a spin vector of length (√3/2) ħ, and has two spin projections, either Sz = +ħ/2 or Sz = −ħ/2. Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length S = √2 ħ with three possible spin projections (Sz = +1 ħ, Sz = 0, and Sz = −1 ħ), and their combination is called a vector meson or spin-1 triplet. If two quarks have oppositely aligned spins, the spin vectors add up to make a vector of length S = 0 with only one spin projection (Sz = 0), called a scalar meson or spin-0 singlet. Because mesons are made of one quark and one antiquark, they are found in triplet and singlet spin states. The latter are called scalar mesons or pseudoscalar mesons, depending on their parity (see below). There is another quantity of quantized angular momentum, called the orbital angular momentum (quantum number L), that is the angular momentum due to quarks orbiting each other, and it also comes in increments of 1 ħ. The total angular momentum (quantum number J) of a particle is the combination of the two intrinsic angular momenta (spin) and the orbital angular momentum. It can take any value from J = |L − S| up to J = L + S, in increments of 1. Particle physicists are most interested in mesons with no orbital angular momentum (L = 0); therefore the two groups of mesons most studied are the S = 1, L = 0 and S = 0, L = 0 mesons, which correspond to J = 1 and J = 0, although they are not the only ones. It is also possible to obtain J = 1 particles from S = 0 and L = 1. How to distinguish between the S = 1, L = 0 and S = 0, L = 1 mesons is an active area of research in meson spectroscopy. P-parity P-parity is left-right parity, or spatial parity, and was the first of several "parities" discovered, and so is often called just "parity". If the universe were reflected in a mirror, most laws of physics would be identical—things would behave the same way regardless of what we call "left" and what we call "right". This concept of mirror reflection is called parity (P). Gravity, the electromagnetic force, and the strong interaction all behave in the same way regardless of whether or not the universe is reflected in a mirror, and thus are said to conserve parity (P-symmetry). However, the weak interaction does distinguish "left" from "right", a phenomenon called parity violation (P-violation). Based on this, one might think that, if the wavefunction for each particle (more precisely, the quantum field for each particle type) were simultaneously mirror-reversed, then the new set of wavefunctions would perfectly satisfy the laws of physics (apart from the weak interaction). It turns out that this is not quite true: In order for the equations to be satisfied, the wavefunctions of certain types of particles have to be multiplied by −1, in addition to being mirror-reversed. Such particle types are said to have negative or odd parity (P = −1, or alternatively P = −), whereas the other particles are said to have positive or even parity (P = +1, or alternatively P = +). For mesons, parity is related to the orbital angular momentum by the relation P = (−1)^(L+1), where the (−1)^L factor is a result of the parity of the corresponding spherical harmonic of the wavefunction. The "+1" in the exponent comes from the fact that, according to the Dirac equation, a quark and an antiquark have opposite intrinsic parities.
Therefore, the intrinsic parity of a meson is the product of the intrinsic parities of the quark (+1) and antiquark (−1). As these are different, their product is −1, and so it contributes the "+1" that appears in the exponent. As a consequence, all mesons with no orbital angular momentum (L = 0) have odd parity (P = −1). C-parity C-parity is only defined for mesons that are their own antiparticle (i.e. neutral mesons). It represents whether or not the wavefunction of the meson remains the same under the interchange of their quark with their antiquark. If the wavefunction is unchanged under the interchange, the meson is "C even" (C = +1). On the other hand, if the wavefunction changes sign, the meson is "C odd" (C = −1). C-parity is rarely studied on its own, but more commonly in combination with P-parity into CP-parity. CP-parity was originally thought to be conserved, but was later found to be violated on rare occasions in weak interactions. G-parity G-parity is a generalization of the C-parity. Instead of simply comparing the wavefunction after exchanging quarks and antiquarks, it compares the wavefunction after exchanging the meson for the corresponding antimeson, regardless of quark content. If the wavefunction is unchanged under this exchange, the meson is "G even" (G = +1); if it changes sign, the meson is "G odd" (G = −1). Isospin and charge Original isospin model The concept of isospin was first proposed by Werner Heisenberg in 1932 to explain the similarities between protons and neutrons under the strong interaction. Although they had different electric charges, their masses were so similar that physicists believed they were actually the same particle. The different electric charges were explained as being the result of some unknown excitation similar to spin. This unknown excitation was later dubbed isospin by Eugene Wigner in 1937. When the first mesons were discovered, they too were seen through the eyes of isospin, and so the three pions were believed to be the same particle, but in different isospin states. The mathematics of isospin was modeled after the mathematics of spin. Isospin projections varied in increments of 1 just like those of spin, and to each projection was associated a "charged state". Because the "pion particle" had three "charged states", it was said to be of isospin I = 1. Its "charged states" π+, π0, and π− corresponded to the isospin projections I3 = +1, I3 = 0, and I3 = −1 respectively. Another example is the "rho particle", also with three charged states. Its "charged states" ρ+, ρ0, and ρ− corresponded to the isospin projections I3 = +1, I3 = 0, and I3 = −1 respectively. Replacement by the quark model This belief lasted until Murray Gell-Mann proposed the quark model in 1964 (containing originally only the u, d, and s quarks). The success of the isospin model is now understood to be an artifact of the similar masses of the u and d quarks. Because the u and d quarks have similar masses, particles made of the same number of them also have similar masses. The exact u and d quark composition determines the charge, because u quarks carry charge +2/3 whereas d quarks carry charge −1/3. For example, the three pions all have different charges (the π+ is an up quark with a down antiquark; the π0 is a quantum superposition of uū and dd̄ states; the π− is a down quark with an up antiquark), but they all have similar masses (roughly 140 MeV/c2), as each is composed of the same total number of up and down quarks and antiquarks. Under the isospin model, they were considered a single particle in different charged states. After the quark model was adopted, physicists noted that the isospin projections were related to the up and down quark content of particles by the relation I3 = (1/2)[(n_u − n_ū) − (n_d − n_d̄)], where the n-symbols are the counts of up and down quarks and antiquarks.
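As a worked illustration of the two relations above (standard textbook values, assumed here rather than quoted from this article): any meson with L = 0, such as a pion, has

    P = (−1)^(L+1) = (−1)^(0+1) = −1,

and the π+, with one up quark (n_u = 1) and one down antiquark (n_d̄ = 1), has

    I3 = (1/2)[(n_u − n_ū) − (n_d − n_d̄)] = (1/2)[(1 − 0) − (0 − 1)] = +1,

matching the projection assigned to the π+ in the isospin model.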
In the "isospin picture", the three pions and three rhos were thought to be the different states of two particles. However, in the quark model, the rhos are excited states of pions. Isospin, although conveying an inaccurate picture of things, is still used to classify hadrons, leading to unnatural and often confusing nomenclature. Because mesons are hadrons, the isospin classification is also used for them all, with the quantum number calculated by adding for each positively charged up-or-down quark-or-antiquark (up quarks and down antiquarks), and for each negatively charged up-or-down quark-or-antiquark (up antiquarks and down quarks). Flavour quantum numbers The strangeness quantum number S (not to be confused with spin) was noticed to go up and down along with particle mass. The higher the mass, the lower (more negative) the strangeness (the more s quarks). Particles could be described with isospin projections (related to charge) and strangeness (mass) (see the uds nonet figures). As other quarks were discovered, new quantum numbers were made to have similar description of udc and udb nonets. Because only the u and d mass are similar, this description of particle mass and charge in terms of isospin and flavour quantum numbers only works well for the nonets made of one u, one d and one other quark and breaks down for the other nonets (for example ucb nonet). If the quarks all had the same mass, their behaviour would be called symmetric, because they would all behave in exactly the same way with respect to the strong interaction. However, as quarks do not have the same mass, they do not interact in the same way (exactly like an electron placed in an electric field will accelerate more than a proton placed in the same field because of its lighter mass), and the symmetry is said to be broken. It was noted that charge (Q) was related to the isospin projection (I3), the baryon number (B) and flavour quantum numbers (S, C, , T) by the Gell-Mann–Nishijima formula: where S, C, , and T represent the strangeness, charm, bottomness and topness flavour quantum numbers respectively. They are related to the number of strange, charm, bottom, and top quarks and antiquark according to the relations: meaning that the Gell-Mann–Nishijima formula is equivalent to the expression of charge in terms of quark content: Classification Mesons are classified into groups according to their isospin (I), total angular momentum (J), parity (P), G-parity (G) or C-parity (C) when applicable, and quark (q) content. The rules for classification are defined by the Particle Data Group, and are rather convoluted. The rules are presented below, in table form for simplicity. Types of meson Mesons are classified into types according to their spin configurations. Some specific configurations are given special names based on the mathematical properties of their spin configuration. Nomenclature Flavourless mesons Flavourless mesons are mesons made of pair of quark and antiquarks of the same flavour (all their flavour quantum numbers are zero: = 0, = 0, = 0, = 0). The rules for flavourless mesons are: In addition When the spectroscopic state of the meson is known, it is added in parentheses. When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses. When the meson is in its ground state, nothing is added in parentheses. Flavoured mesons Flavoured mesons are mesons made of pair of quark and antiquarks of different flavours. 
The rules are simpler in this case: the main symbol depends on the heavier quark, the superscript depends on the charge, and the subscript (if any) depends on the lighter quark. In table form, they are: In addition: If JP is in the "normal series" (i.e., JP = 0+, 1−, 2+, 3−, ...), a superscript ∗ is added. If the meson is not pseudoscalar (JP = 0−) or vector (JP = 1−), J is added as a subscript. When the spectroscopic state of the meson is known, it is added in parentheses. When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses. When the meson is in its ground state, nothing is added in parentheses. Exotic mesons There is experimental evidence for particles that are hadrons (i.e., are composed of quarks) and are color-neutral with zero baryon number, and thus by conventional definition are mesons. Yet, these particles do not consist of a single quark/antiquark pair, as all the other conventional mesons discussed above do. A tentative category for these particles is exotic mesons. There are at least five exotic meson resonances that have been experimentally confirmed to exist by two or more independent experiments. The most statistically significant of these is the Z(4430), discovered by the Belle experiment in 2007 and confirmed by LHCb in 2014. It is a candidate for being a tetraquark: a particle composed of two quarks and two antiquarks. See the main article above for other particle resonances that are candidates for being exotic mesons. List Pseudoscalar mesons [a] Makeup inexact due to non-zero quark masses. [b] PDG reports the resonance width (Γ); here the conversion τ = ħ/Γ is given instead. [c] Strong eigenstate. No definite lifetime (see kaon notes below). [d] The masses of the K0L and K0S are given as that of the K0. However, a small difference between the masses of the K0L and K0S is known to exist. [e] Weak eigenstate. Makeup is missing a small CP-violating term (see notes on neutral kaons below). Vector mesons [f] PDG reports the resonance width (Γ); here the conversion τ = ħ/Γ is given instead. [g] The exact value depends on the method used. See the given reference for detail.
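A worked check of the Gell-Mann–Nishijima formula from the "Isospin and charge" section above (standard quantum numbers for the kaon, assumed here rather than quoted from this article): the K+ consists of one up quark and one strange antiquark, so I3 = +1/2, B = 0, S = −(n_s − n_s̄) = −(0 − 1) = +1, and C = B′ = T = 0, giving

    Q = I3 + (1/2)(B + S + C + B′ + T) = 1/2 + (1/2)(0 + 1 + 0 + 0 + 0) = +1,

in agreement with the kaon's observed charge.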
Physical sciences
Fermions
null
19873
https://en.wikipedia.org/wiki/Measure%20%28mathematics%29
Measure (mathematics)
In mathematics, the concept of a measure is a generalization and formalization of geometrical measures (length, area, volume) and other common notions, such as magnitude, mass, and probability of events. These seemingly distinct concepts have many similarities and can often be treated together in a single mathematical context. Measures are foundational in probability theory and integration theory, and can be generalized to assume negative values, as with electrical charge. Far-reaching generalizations of measure (such as spectral measures and projection-valued measures) are widely used in quantum physics and physics in general. The intuition behind this concept dates back to ancient Greece, when Archimedes tried to calculate the area of a circle. But it was not until the late 19th and early 20th centuries that measure theory became a branch of mathematics. The foundations of modern measure theory were laid in the works of Émile Borel, Henri Lebesgue, Nikolai Luzin, Johann Radon, Constantin Carathéodory, and Maurice Fréchet, among others. Definition Let X be a set and Σ a σ-algebra over X. A set function μ from Σ to the extended real number line is called a measure if the following conditions hold: Non-negativity: μ(E) ≥ 0 for all E in Σ. Countable additivity (or σ-additivity): for all countable collections {Ek} of pairwise disjoint sets in Σ, μ(⋃ Ek) = Σ μ(Ek). If at least one set E has finite measure, then the requirement μ(∅) = 0 is met automatically due to countable additivity: μ(E) = μ(E ∪ ∅) = μ(E) + μ(∅), and therefore μ(∅) = 0. If the condition of non-negativity is dropped, and μ takes on at most one of the values +∞ and −∞, then μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one – that is, μ(X) = 1. A probability space is a measure space with a probability measure. For measure spaces that are also topological spaces various compatibility conditions can be placed on the measure and the topology. Most measures met in practice in analysis (and in many cases also in probability theory) are Radon measures. Radon measures have an alternative definition in terms of linear functionals on the locally convex topological vector space of continuous functions with compact support. This approach is taken by Bourbaki (2004) and a number of other sources. For more details, see the article on Radon measures. Instances Some important measures are listed here. The counting measure is defined by μ(S) = number of elements in S. The Lebesgue measure on ℝ is a complete translation-invariant measure on a σ-algebra containing the intervals in ℝ such that μ([0, 1]) = 1; every other measure with these properties extends the Lebesgue measure. Circular angle measure is invariant under rotation, and hyperbolic angle measure is invariant under squeeze mapping. The Haar measure for a locally compact topological group is a generalization of the Lebesgue measure (and also of counting measure and circular angle measure) and has similar uniqueness properties. Every (pseudo-)Riemannian manifold has a canonical measure that in local coordinates looks like √|det g| dⁿx, where dⁿx is the usual Lebesgue measure. The Hausdorff measure is a generalization of the Lebesgue measure to sets with non-integer dimension, in particular, fractal sets. Every probability space gives rise to a measure which takes the value 1 on the whole space (and therefore takes all its values in the unit interval [0, 1]). Such a measure is called a probability measure or distribution. See the list of probability distributions for instances.
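A minimal sketch of two of the instances above, the counting measure and the Dirac measure, restricted to finite sets so that countable additivity reduces to finite additivity; all names here are illustrative:

```python
from itertools import chain

def counting_measure(s: frozenset) -> float:
    """mu(S) = number of elements of S."""
    return float(len(s))

def dirac_measure(a):
    """delta_a(S) = 1 if a is in S else 0 (the indicator of S evaluated at a)."""
    return lambda s: 1.0 if a in s else 0.0

# Finite additivity check on pairwise disjoint sets: mu(E1 u E2 u ...) = sum mu(Ek).
blocks = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5, 6})]
union = frozenset(chain.from_iterable(blocks))
for mu in (counting_measure, dirac_measure(3)):
    assert mu(union) == sum(mu(b) for b in blocks)
```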
The Dirac measure δa (cf. Dirac delta function) is given by δa(S) = χS(a), where χS is the indicator function of S. The measure of a set is 1 if it contains the point a and 0 otherwise. Other 'named' measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Gaussian measure, Baire measure, Radon measure, Young measure, and Loeb measure. In physics an example of a measure is the spatial distribution of mass (see for example, gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures; see "generalizations" below. Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics. Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble. Measure theory is used in machine learning. One example is the Flow Induced Probability Measure in GFlowNet. Basic properties Let μ be a measure. Monotonicity If E1 and E2 are measurable sets with E1 ⊆ E2, then μ(E1) ≤ μ(E2). Measure of countable unions and intersections Countable subadditivity For any countable sequence E1, E2, E3, ... of (not necessarily disjoint) measurable sets En in Σ: μ(E1 ∪ E2 ∪ E3 ∪ ...) ≤ μ(E1) + μ(E2) + μ(E3) + ... Continuity from below If E1, E2, E3, ... are measurable sets that are increasing (meaning that E1 ⊆ E2 ⊆ E3 ⊆ ...), then the union of the sets is measurable and μ(⋃ En) = lim μ(En) = sup μ(En). Continuity from above If E1, E2, E3, ... are measurable sets that are decreasing (meaning that E1 ⊇ E2 ⊇ E3 ⊇ ...), then the intersection of the sets is measurable; furthermore, if at least one of the En has finite measure, then μ(⋂ En) = lim μ(En). This property is false without the assumption that at least one of the En has finite measure. For instance, for each n ∈ ℕ, let En = [n, ∞) ⊆ ℝ, which all have infinite Lebesgue measure, but the intersection is empty. Other properties Completeness A measurable set N is called a null set if μ(N) = 0. A subset of a null set is called a negligible set. A negligible set need not be measurable, but every measurable negligible set is automatically a null set. A measure is called complete if every negligible set is measurable. A measure can be extended to a complete one by considering the σ-algebra of subsets B which differ by a negligible set from a measurable set A, that is, such that the symmetric difference of A and B is contained in a null set. One defines μ(B) to equal μ(A). "Dropping the Edge" If f : X → [0, +∞] is (Σ, B([0, +∞]))-measurable, then μ{x ∈ X : f(x) ≥ t} = μ{x ∈ X : f(x) > t} for almost all t. This property is used in connection with the Lebesgue integral. Additivity Measures are required to be countably additive. However, the condition can be strengthened as follows. For any set I and any set of nonnegative numbers ri, i ∈ I, define: Σ_{i∈I} ri = sup { Σ_{i∈K} ri : K ⊆ I, K finite }. That is, we define the sum of the ri to be the supremum of all the sums of finitely many of them. A measure μ on Σ is κ-additive if for any λ < κ and any family of disjoint sets Xα, α < λ, the following hold: ⋃_{α<λ} Xα ∈ Σ and μ(⋃_{α<λ} Xα) = Σ_{α<λ} μ(Xα). The second condition is equivalent to the statement that the ideal of null sets is κ-complete. Sigma-finite measures A measure space (X, Σ, μ) is called finite if μ(X) is a finite real number (rather than ∞). Nonzero finite measures are analogous to probability measures in the sense that any finite measure μ is proportional to the probability measure (1/μ(X))μ. A measure μ is called σ-finite if X can be decomposed into a countable union of measurable sets of finite measure. Analogously, a set in a measure space is said to have a σ-finite measure if it is a countable union of sets with finite measure. For example, the real numbers with the standard Lebesgue measure are σ-finite but not finite. Consider the closed intervals [k, k + 1] for all integers k; there are countably many such intervals, each has measure 1, and their union is the entire real line.
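The failure of continuity from above without a finite-measure set can also be mimicked with the counting measure on the natural numbers, using the decreasing tails E_n = {n, n+1, ...} in place of the intervals [n, ∞) mentioned above; a sketch, with an illustrative helper name:

```python
import math

def tail_measure(n: int) -> float:
    """Counting measure of the tail E_n = {n, n+1, n+2, ...} of the naturals."""
    return math.inf  # every tail is infinite

# Each E_n has infinite measure and E_1 contains E_2 contains ... (decreasing),
# but the intersection of all tails is empty, so its measure is 0:
limit_of_measures = tail_measure(10**6)   # still inf, for every n
measure_of_intersection = 0.0             # mu(empty set) = 0
assert limit_of_measures != measure_of_intersection
# Continuity from above fails here because no E_n has finite measure.
```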
Alternatively, consider the real numbers with the counting measure, which assigns to each finite set of reals the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to the Lindelöf property of topological spaces. They can also be thought of as a vague generalization of the idea that a measure space may have 'uncountable measure'. Strictly localizable measures Semifinite measures Let X be a set, let 𝒜 be a sigma-algebra on X, and let μ be a measure on 𝒜. We say μ is semifinite to mean that every set of infinite measure has a measurable subset of finite positive measure: for all A ∈ 𝒜 with μ(A) = +∞, there exists B ∈ 𝒜 with B ⊆ A and 0 < μ(B) < +∞. Semifinite measures generalize sigma-finite measures, in such a way that some big theorems of measure theory that hold for sigma-finite but not arbitrary measures can be extended with little modification to hold for semifinite measures. (To-do: add examples of such theorems; cf. the talk page.) Basic examples Every sigma-finite measure is semifinite. Assume 𝒜 = 𝒫(X), let f : X → [0, +∞], and assume μ(A) = Σ_{a∈A} f(a) for all A ⊆ X. We have that μ is sigma-finite if and only if f(x) < +∞ for all x ∈ X and {x : f(x) > 0} is countable. We have that μ is semifinite if and only if f(x) < +∞ for all x ∈ X. Taking f ≡ 1 above (so that μ is counting measure on 𝒫(X)), we see that counting measure on 𝒫(X) is sigma-finite if and only if X is countable; and semifinite regardless of whether X is countable. (Thus, counting measure on the power set 𝒫(X) of an arbitrary uncountable set X gives an example of a semifinite measure that is not sigma-finite.) Let d be a complete, separable metric on X, let ℬ be the Borel sigma-algebra induced by d, and let s ∈ (0, +∞). Then the Hausdorff measure ℋ^s restricted to ℬ is semifinite. Let d be a complete, separable metric on X, let ℬ be the Borel sigma-algebra induced by d, and let s ∈ (0, +∞). Then the packing measure restricted to ℬ is semifinite. Involved example The zero measure is sigma-finite and thus semifinite. In addition, the zero measure is clearly less than or equal to μ. It can be shown there is a greatest measure with these two properties: We say the semifinite part of μ to mean the semifinite measure μ_sf defined in the above theorem. We give a nice, explicit formula, which some authors may take as the definition, for the semifinite part: μ_sf(A) = sup{μ(B) : B ∈ 𝒜, B ⊆ A, μ(B) < +∞}. Since μ_sf is semifinite, it follows that if μ = μ_sf then μ is semifinite. It is also evident that if μ is semifinite then μ = μ_sf. Non-examples Every 0–∞ measure that is not the zero measure is not semifinite. (Here, we say 0–∞ measure to mean a measure whose range lies in {0, +∞}.) Below we give examples of 0–∞ measures that are not zero measures. Let X be nonempty, let 𝒜 be a σ-algebra on X, let f : X → {0, +∞} be not the zero function, and let μ(A) = Σ_{x∈A} f(x). It can be shown that μ is a measure. Let X be uncountable, let 𝒜 be a σ-algebra on X, let μ take the value 0 on the countable elements of 𝒜 and +∞ elsewhere. It can be shown that μ is a measure. Involved non-example We say the 0–∞ part of μ to mean the measure μ_{0–∞} defined in the above theorem. Here is an explicit formula for μ_{0–∞}: Results regarding semifinite measures Let 𝔽 be ℝ or ℂ, and let T : L^∞(μ) → (L^1(μ))* be the natural map. Then μ is semifinite if and only if T is injective. (This result has import in the study of the dual space of L^1(μ).) Let 𝔽 be ℝ or ℂ, and let 𝒯 be the topology of convergence in measure on L^0(μ). Then μ is semifinite if and only if 𝒯 is Hausdorff. (Johnson) Let X be a set, let 𝒜 be a sigma-algebra on X, let μ be a measure on 𝒜, let Y be a set, let ℬ be a sigma-algebra on Y, and let ν be a measure on ℬ. If μ and ν are both not 0–∞ measures, then both μ and ν are semifinite if and only if (μ × ν)(A × B) = μ(A)ν(B) for all A ∈ 𝒜 and B ∈ ℬ. (Here, μ × ν is the measure defined in Theorem 39.1 in Berberian '65.)
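The weight-function criterion above (μ(A) = Σ_{a∈A} f(a)) can be probed numerically on finite test sets. The sketch below contrasts an f that yields a semifinite counting-type measure with one that yields a non-semifinite 0–∞-type measure; the function names and test sets are illustrative:

```python
import math

def weighted_counting_measure(f, A):
    """mu(A) = sum of f(a) over a in A, for a finite test set A."""
    return sum(f(a) for a in A)

# f == 1 gives counting measure: finite on finite sets, so on an
# uncountable X it is semifinite (every infinite set has finite-measure
# pieces) yet not sigma-finite (no countable cover by finite-measure sets).
f_counting = lambda a: 1.0

# An f taking the value +inf produces a measure that is NOT semifinite:
# the singleton {0} has infinite measure but no subset of finite positive measure.
f_zero_inf = lambda a: math.inf if a == 0 else 0.0

print(weighted_counting_measure(f_counting, range(10)))   # 10.0
print(weighted_counting_measure(f_zero_inf, {0, 1, 2}))   # inf
```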
Localizable measures Localizable measures are a special case of semifinite measures and a generalization of sigma-finite measures. Let X be a set, let 𝒜 be a sigma-algebra on X, and let μ be a measure on 𝒜. Let 𝔽 be ℝ or ℂ, and let T : L^∞(μ) → (L^1(μ))* be the natural map. Then μ is localizable if and only if T is bijective (if and only if L^∞(μ) "is" the dual of L^1(μ)). s-finite measures A measure is said to be s-finite if it is a countable sum of finite measures. S-finite measures are more general than sigma-finite ones and have applications in the theory of stochastic processes. Non-measurable sets If the axiom of choice is assumed to be true, it can be proved that not all subsets of Euclidean space are Lebesgue measurable; examples of such sets include the Vitali set, and the non-measurable sets postulated by the Hausdorff paradox and the Banach–Tarski paradox. Generalizations For certain purposes, it is useful to have a "measure" whose values are not restricted to the non-negative reals or infinity. For instance, a countably additive set function with values in the (signed) real numbers is called a signed measure, while such a function with values in the complex numbers is called a complex measure. Observe, however, that a complex measure is necessarily of finite variation; hence complex measures include finite signed measures but not, for example, the Lebesgue measure. Measures that take values in Banach spaces have been studied extensively. A measure that takes values in the set of self-adjoint projections on a Hilbert space is called a projection-valued measure; these are used in functional analysis for the spectral theorem. When it is necessary to distinguish the usual measures which take non-negative values from generalizations, the term positive measure is used. Positive measures are closed under conical combination but not general linear combination, while signed measures are the linear closure of positive measures. Another generalization is the finitely additive measure, also known as a content. This is the same as a measure except that instead of requiring countable additivity we require only finite additivity. Historically, this definition was used first. It turns out that in general, finitely additive measures are connected with notions such as Banach limits, the dual of L^∞ and the Stone–Čech compactification. All these are linked in one way or another to the axiom of choice. Contents remain useful in certain technical problems in geometric measure theory; this is the theory of Banach measures. A charge is a generalization in both directions: it is a finitely additive, signed measure. (Cf. ba space for information about bounded charges, where we say a charge is bounded to mean its range is a bounded subset of ℝ.)
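As a concrete instance of s-finiteness, Lebesgue measure on ℝ is the countable sum of its restrictions to the unit intervals [n, n+1), each a finite measure. The sketch below evaluates such a sum on intervals; the names are illustrative, and only finitely many pieces contribute for a bounded interval:

```python
def interval_overlap(a: float, b: float, lo: float, hi: float) -> float:
    """Length of the intersection of [a, b] with [lo, hi]."""
    return max(0.0, min(b, hi) - max(a, lo))

def lebesgue_restricted(n: int):
    """mu_n = Lebesgue measure restricted to [n, n+1): a finite measure (total mass 1)."""
    return lambda a, b: interval_overlap(a, b, float(n), float(n + 1))

def lebesgue_of_interval(a: float, b: float) -> float:
    """Lebesgue measure of [a, b] as the (here finite) sum of the mu_n pieces."""
    lo, hi = int(a) - 1, int(b) + 1  # only these pieces can contribute
    return sum(lebesgue_restricted(n)(a, b) for n in range(lo, hi + 1))

assert abs(lebesgue_of_interval(0.5, 3.25) - 2.75) < 1e-12
```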
Mathematics
Mathematical analysis
null
19876
https://en.wikipedia.org/wiki/Motorcycle
Motorcycle
A motorcycle (motorbike, bike, or, if three-wheeled, a trike) is a two- or three-wheeled motor vehicle steered by a handlebar from a saddle-style seat. Motorcycle designs vary greatly to suit a range of different purposes: long-distance travel, commuting, cruising, sport (including racing), and off-road riding. Motorcycling is riding a motorcycle and being involved in other related social activities such as joining a motorcycle club and attending motorcycle rallies. The 1885 Daimler Reitwagen made by Gottlieb Daimler and Wilhelm Maybach in Germany was the first internal-combustion, petroleum-fueled motorcycle. In 1894, Hildebrand & Wolfmüller became the first series production motorcycle. Globally, motorcycles are comparable numerically to cars as a method of transport: in 2021, approximately 58.6 million new motorcycles were sold around the world, while 66.7 million cars were sold over the same period. In 2022, the top four motorcycle producers by volume and type were Honda, Yamaha, Kawasaki, and Suzuki. According to the US Department of Transportation, the number of fatalities per vehicle mile traveled was 37 times higher for motorcycles than for cars. Types The term motorcycle has different legal definitions depending on jurisdiction (see ). There are three major types of motorcycle: street, off-road, and dual purpose. Within these types, there are many sub-types of motorcycles for different purposes. There is often a racing counterpart to each type, such as road racing for street bikes, or motocross for dirt bikes. Street bikes include cruisers, sportbikes, scooters and mopeds, and many other types. Off-road motorcycles include many types designed for dirt-oriented racing classes such as motocross and are not street legal in most areas. Dual purpose machines like the dual-sport style are made to go off-road but include features to make them legal and comfortable on the street as well. Each configuration offers either specialised advantage or broad capability, and each design creates a different riding posture. In some countries the use of pillions (rear seats) is restricted. History Experimentation and invention The first internal-combustion, petroleum-fueled motorcycle was the Daimler Reitwagen. It was designed and built by the German inventors Gottlieb Daimler and Wilhelm Maybach in Bad Cannstatt, Germany, in 1885. This vehicle was unlike either the safety bicycles or the boneshaker bicycles of the era in that it had zero degrees of steering axis angle and no fork offset, and thus did not use the principles of bicycle and motorcycle dynamics developed nearly 70 years earlier. Instead, it relied on two outrigger wheels to remain upright while turning. The inventors called their invention the Reitwagen ("riding car"). It was designed as an expedient testbed for their new engine, rather than a true prototype vehicle. The first commercial design for a self-propelled cycle was a three-wheel design called the Butler Petrol Cycle, conceived by Edward Butler in England in 1884. He exhibited his plans for the vehicle at the Stanley Cycle Show in London in 1884. The vehicle was built by the Merryweather Fire Engine company in Greenwich, in 1888.
The Butler Petrol Cycle was a three-wheeled vehicle, with the rear wheel directly driven by a flat twin four-stroke engine (with magneto ignition replaced by coil and battery) equipped with rotary valves and a float-fed carburettor (five years before Maybach) and Ackermann steering, all of which were state of the art at the time. Starting was by compressed air. The engine was liquid-cooled, with a radiator over the rear driving wheel. Speed was controlled by means of a throttle valve lever. No braking system was fitted; the vehicle was stopped by raising and lowering the rear driving wheel using a foot-operated lever; the weight of the machine was then borne by two small castor wheels. The driver was seated between the front wheels. It was not, however, a success, as Butler failed to find sufficient financial backing. Many authorities have excluded steam-powered, electric, or diesel-powered two-wheelers from the definition of a 'motorcycle', and credit the Daimler Reitwagen as the world's first motorcycle. Given the rapid rise in the use of electric motorcycles worldwide, defining only internal-combustion powered two-wheelers as 'motorcycles' is increasingly problematic. The first (petroleum-fueled) internal-combustion motorcycles, like the German Reitwagen, were, however, also the first practical motorcycles. If a two-wheeled vehicle with steam propulsion is considered a motorcycle, then the first motorcycles built seem to be the French Michaux-Perreaux steam velocipede, whose patent application was filed in December 1868, constructed around the same time as the American Roper steam velocipede, built by Sylvester H. Roper of Roxbury, Massachusetts, who had been demonstrating his machine at fairs and circuses in the eastern U.S. since 1867. Roper built about 10 steam cars and cycles from the 1860s until his death in 1896. Summary of early inventions First motorcycle companies In 1894, Hildebrand & Wolfmüller became the first series production motorcycle, and the first to be called a motorcycle (German: Motorrad). Excelsior Motor Company, originally a bicycle manufacturing company based in Coventry, England, began production of their first motorcycle model in 1896. The first production motorcycle in the US was the Orient-Aster, built by Charles Metz in 1898 at his factory in Waltham, Massachusetts. In the early period of motorcycle history, many producers of bicycles adapted their designs to accommodate the new internal combustion engine. As the engines became more powerful and designs outgrew the bicycle origins, the number of motorcycle producers increased. Many of the nineteenth-century inventors who worked on early motorcycles often moved on to other inventions. Daimler and Roper, for example, both went on to develop automobiles. At the end of the 19th century the first major mass-production firms were set up. In 1898, Triumph Motorcycles in England began producing motorbikes, and by 1903 it was producing over 500 bikes. Other British firms were Royal Enfield, Norton, Douglas Motorcycles and Birmingham Small Arms Company, who began motorbike production in 1899, 1902, 1907 and 1910, respectively. Indian began production in 1901 and Harley-Davidson was established two years later. By the outbreak of World War I, the largest motorcycle manufacturer in the world was Indian, producing over 20,000 bikes per year.
First World War During the First World War, motorbike production was greatly ramped up for the war effort to supply effective communications with front line troops. Messengers on horses were replaced with despatch riders on motorcycles carrying messages, performing reconnaissance and acting as military police. The American company Harley-Davidson was devoting over 50% of its factory output toward military contracts by the end of the war. The British company Triumph Motorcycles sold more than 30,000 of its Triumph Type H model to allied forces during the war. With the rear wheel driven by a belt, the Model H was fitted with an air-cooled four-stroke single-cylinder engine. It was also the first Triumph without pedals. The Model H in particular is regarded by many as having been the first "modern motorcycle". Introduced in 1915, it had a 550 cc side-valve four-stroke engine with a three-speed gearbox and belt transmission. It was so popular with its users that it was nicknamed the "Trusty Triumph". Postwar By 1920, Harley-Davidson was the largest manufacturer, with their motorcycles being sold by dealers in 67 countries. Amongst the many British motorcycle manufacturers, Chater-Lea, with its twin-cylinder models followed by its large singles in the 1920s, stood out. Initially using a converted Woodmann-designed OHV Blackburne engine, it set a 350 cc speed record over the flying kilometre during April 1924.[7] Later, Chater-Lea set a world record for the flying kilometre for 350 cc and 500 cc motorcycles. Chater-Lea produced variants of these world-beating sports models and became popular among racers at the Isle of Man TT. Today, the firm is probably best remembered for its long-term contract to manufacture and supply AA Patrol motorcycles and sidecars. By the late 1920s or early 1930s, DKW in Germany took over as the largest manufacturer. In the 1950s, streamlining began to play an increasing part in the development of racing motorcycles, and the "dustbin fairing" held out the possibility of radical changes to motorcycle design. NSU and Moto Guzzi were in the vanguard of this development, both producing very radical designs well ahead of their time. NSU produced the most advanced design, but after the deaths of four NSU riders in the 1954–1956 seasons, they abandoned further development and quit Grand Prix motorcycle racing. Moto Guzzi produced competitive race machines, and until the end of 1957 had a succession of victories. The following year, 1958, full enclosure fairings were banned from racing by the FIM in light of safety concerns. From the 1960s through the 1990s, small two-stroke motorcycles were popular worldwide, partly as a result of the engine work of Walter Kaaden at East Germany's MZ in the 1950s.
Technical aspects Construction Motorcycle construction is the engineering, manufacturing, and assembly of components and systems for a motorcycle which results in the performance, cost, and aesthetics desired by the designer. With some exceptions, construction of modern mass-produced motorcycles has standardised on a steel or aluminium frame, telescopic forks holding the front wheel, and disc brakes. Some other body parts, designed for either aesthetic or performance reasons, may be added. A petrol-powered engine typically consisting of between one and four cylinders (and less commonly, up to eight cylinders) coupled to a manual five- or six-speed sequential transmission drives the swingarm-mounted rear wheel by a chain, driveshaft, or belt. Repairs can be carried out using a motorcycle lift. Fuel economy Motorcycle fuel economy varies greatly with engine displacement and riding style. A streamlined, fully faired Matzu Matsuzawa Honda XL125 achieved record figures in the Craig Vetter Fuel Economy Challenge "on real highways in real conditions". Due to low engine displacements and high power-to-mass ratios, motorcycles offer good fuel economy. Under conditions of fuel scarcity, like 1950s Britain and modern developing nations, motorcycles claim large shares of the vehicle market. In the United States, the average motorcycle fuel economy is 44 miles per US gallon (19 km per liter). Electric motorcycles Very high fuel economy equivalents are often achieved by electric motorcycles. Electric motorcycles are nearly silent, zero-emission electric motor-driven vehicles. Operating range and top speed are limited by battery technology. Fuel cells and petroleum-electric hybrids are also under development to extend the range and improve performance of the electric drive system. Reliability A 2013 survey of 4,424 readers of the US Consumer Reports magazine collected reliability data on 4,680 motorcycles purchased new from 2009 to 2012. The most common problem areas were accessories, brakes, electrical (including starters, charging, ignition), and fuel systems, and the types of motorcycles with the greatest problems were touring, off-road/dual sport, sport-touring, and cruisers. There were not enough sport bikes in the survey for a statistically significant conclusion, though the data hinted at reliability as good as that of cruisers. These results may be partially explained by accessories, including such equipment as fairings, luggage, and auxiliary lighting, which are frequently added to touring, adventure touring/dual sport and sport touring bikes. Trouble with fuel systems is often the result of improper winter storage, and brake problems may also be due to poor maintenance. Of the five brands with enough data to draw conclusions, Honda, Kawasaki and Yamaha were statistically tied, with 11 to 14% of those bikes in the survey experiencing major repairs. Harley-Davidsons had a rate of 24%, while BMWs did worse, with 30% of those needing major repairs. There were not enough Triumph and Suzuki motorcycles surveyed for a statistically sound conclusion, though it appeared Suzukis were as reliable as the other three Japanese brands while Triumphs were comparable to Harley-Davidson and BMW. Three-fourths of the repairs in the survey cost less than US$200 and two-thirds of the motorcycles were repaired in less than two days.
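The quoted US average of 44 miles per US gallon converts to roughly 19 km per litre; a one-line check using standard conversion factors:

```python
MILES_PER_KM = 0.621371
LITERS_PER_US_GALLON = 3.785411784

def mpg_us_to_km_per_liter(mpg: float) -> float:
    """Convert miles per US gallon to kilometres per litre."""
    return (mpg / MILES_PER_KM) / LITERS_PER_US_GALLON

print(round(mpg_us_to_km_per_liter(44.0), 1))  # ~18.7, matching the quoted ~19 km/L
```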
In spite of their relatively worse reliability in this survey, Harley-Davidson and BMW owners showed the greatest owner satisfaction, and three-fourths of them said they would buy the same bike again, followed by 72% of Honda owners and 60 to 63% of Kawasaki and Yamaha owners. Dynamics Two-wheeled motorcycles stay upright while rolling due in part to a physical property known as conservation of angular momentum in the wheels. Angular momentum points along the axle, and it "wants" to stay pointing in that direction. Different types of motorcycles have different dynamics, and these play a role in how a motorcycle performs in given conditions. For example, one with a longer wheelbase provides the feeling of more stability by responding less to disturbances. Motorcycle tyres have a large influence over handling. Motorcycles must be leaned in order to make turns. This lean is induced by the method known as countersteering, in which the rider momentarily steers the handlebars in the direction opposite to the desired turn. This practice is counterintuitive and therefore often confusing to novices and even many experienced motorcyclists. With such a short wheelbase, motorcycles can generate enough torque at the rear wheel, and enough stopping force at the front wheel, to lift the opposite wheel off the road. These actions, if performed on purpose, are known as wheelies and stoppies (or endos) respectively. Accessories Various features and accessories may be attached to a motorcycle either as OEM (factory-fitted) or aftermarket. Such accessories are selected by the owner to enhance the motorcycle's appearance, safety, performance, or comfort, and may include anything from mobile electronics to sidecars and trailers. Records The world record for the longest motorcycle jump was set in 2008 by Robbie Maddison. Since late 2010, the Ack Attack team has held the motorcycle land-speed record. Safety Motorcycles have a higher rate of fatal accidents than automobiles or trucks and buses. United States Department of Transportation data for 2005 from the Fatality Analysis Reporting System show that for passenger cars, 18.62 fatal crashes occur per 100,000 registered vehicles. For motorcycles this figure is higher, at 75.19 per 100,000 registered vehicles, about four times higher than for cars. The same data show that 1.56 fatalities occur per 100 million vehicle miles travelled for passenger cars, whereas for motorcycles the figure is 43.47, which is 28 times higher than for cars (37 times more deaths per mile travelled in 2007). Furthermore, for motorcycles the accident rates have increased significantly since the end of the 1990s, while the rates have dropped for passenger cars. The most common configuration of motorcycle accidents in the United States is when a motorist pulls out or turns in front of a motorcyclist, violating their right-of-way. This is sometimes called a SMIDSY, an acronym formed from the motorists' common response of "Sorry mate, I didn't see you". Motorcyclists can anticipate and avoid some of these crashes with proper training, increasing their visibility to other traffic, keeping to the speed limits, and not consuming alcohol or other drugs before riding. The United Kingdom has several organisations dedicated to improving motorcycle safety by providing advanced rider training beyond what is necessary to pass the basic motorcycle licence test. These include the Institute of Advanced Motorists (IAM) and the Royal Society for the Prevention of Accidents (RoSPA).
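The "28 times higher" figure follows directly from the two per-mile rates quoted above; a one-line sanity check:

```python
# Fatal crashes per 100 million vehicle miles travelled (2005 FARS data
# quoted in the text): 1.56 for passenger cars, 43.47 for motorcycles.
CAR_RATE = 1.56
MOTORCYCLE_RATE = 43.47

ratio = MOTORCYCLE_RATE / CAR_RATE
print(f"Motorcycle fatality rate per mile is {ratio:.1f}x the car rate")  # ~27.9x
```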
Along with increased personal safety, riders with these advanced qualifications may benefit from reduced insurance costs. In South Africa, the Think Bike campaign is dedicated to increasing both motorcycle safety and the awareness of motorcycles on the country's roads. The campaign, while strongest in the Gauteng province, has representation in the Western Cape, KwaZulu-Natal and the Free State. It has dozens of trained marshals available for various events such as cycle races and is deeply involved in numerous other projects such as the annual Motorcycle Toy Run. Motorcycle safety education is offered throughout the United States by organisations ranging from state agencies to non-profit organisations to corporations. Most states use the courses designed by the Motorcycle Safety Foundation (MSF), while Oregon and Idaho developed their own. All of the training programs include a Basic Rider Course, an Intermediate Rider Course and an Advanced Rider Course. In Ireland (since 2010), in the UK, and in some Australian jurisdictions, such as Victoria, New South Wales, the Australian Capital Territory, Tasmania and the Northern Territory, it is compulsory to complete a basic rider training course before a learner licence is issued, after which the holder may ride on public roads. In Canada, motorcycle rider training is compulsory in Quebec and Manitoba only, but all provinces and territories have graduated licence programs which place restrictions on new drivers until they have gained experience. Eligibility for a full motorcycle licence or endorsement for completing a motorcycle safety course varies by province. Without the motorcycle safety course, the chance of getting insurance for the motorcycle is very low. The Canada Safety Council, a non-profit safety organisation, offers the Gearing Up program across Canada and is endorsed by the Motorcycle and Moped Industry Council. Training course graduates may qualify for reduced insurance premiums. Motorcyclists and motor scooter riders are also exposed to an increased risk of suffering hearing damage such as hearing loss and tinnitus (ringing ears). The noise comes from the wind while riding, rolling noise from the tyres, and the engine itself. A helmet alone provides insufficient protection against high sound-pressure levels. Medicine (as of 2024) is not able to cure hearing damage. Wearing hearing protection, such as special earplugs for motorcyclists, can help prevent hearing damage. Motorcycle rider postures The motorcyclist's riding position depends on rider body-geometry (anthropometry) combined with the geometry of the motorcycle itself. These factors create a set of three basic postures. Sport: the rider leans forward into the wind and the weight of the upper torso is supported by the rider's core at low speed and by air pressure at high speed. The footpegs are below the rider or to the rear. The reduced frontal area cuts wind resistance and allows higher speeds. At low speed in this position the rider's arms may bear some of the weight of the rider's torso, which can be problematic. Standard: the rider sits upright or leans forward slightly. The feet are below the rider. These are motorcycles that are not specialised to one task, so they do not excel in any particular area. The standard posture is used with touring and commuting as well as dirt and dual-sport bikes, and may offer advantages for beginners. Cruiser: the rider sits at a lower seat height with the upper torso upright or leaning slightly rearward.
Legs are extended forwards, sometimes out of reach of the regular controls on cruiser pegs. The low seat height can be a consideration for new or short riders. Handlebars tend to be high and wide. The emphasis is on comfort, while compromising cornering ability because of low ground clearance and the greater likelihood of scraping foot pegs, floor boards, or other parts if turns are taken at the speeds other motorcycles can more readily accomplish. Factors of a motorcycle's ergonomic geometry that determine the seating posture include the height, angle and location of footpegs, seat and handlebars. Factors in a rider's physical geometry that contribute to seating posture include torso, arm, thigh and leg length, and overall rider height. Legal definitions and restrictions A motorcycle is broadly defined by law in most countries for the purposes of registration, taxation and rider licensing as a powered two-wheel motor vehicle. Most countries distinguish between mopeds of up to 49 cc and the more powerful, larger vehicles, including scooter-type motorcycles. Many jurisdictions include some forms of three-wheeled cars as motorcycles. In Nigeria, motorcycles, popularly referred to as Okada, have been the subject of many controversies with regard to safety and security, followed by restrictions on movement in many states. In 2020, they were banned in Lagos, Nigeria's most populous city. Environmental impact Motorcycles' and scooters' low fuel consumption has attracted interest in the United States from environmentalists and those affected by increased fuel prices. Piaggio Group Americas supported this interest with the launch of a "Vespanomics" website and platform, claiming lower per-mile carbon emissions of 0.4 lb/mile (113 g/km) less than the average car, a 65% reduction, and better fuel economy. However, a motorcycle's exhaust emissions may contain 10–20 times more oxides of nitrogen (NOx), carbon monoxide, and unburned hydrocarbons than exhaust from a similar-year passenger car or SUV. This is because many motorcycles lack a catalytic converter, and the emission standard is much more permissive for motorcycles than for other vehicles. While catalytic converters have been installed in most gasoline-powered cars and trucks since 1975 in the United States, they can present fitment and heat difficulties in motorcycle applications. In United States Environmental Protection Agency 2007 certification result reports for all vehicles versus on-highway motorcycles (which also include scooters), the average certified emissions level for the 12,327 vehicles tested was 0.734, while the average "NOx+CO end-of-useful-life emissions" for the 3,863 motorcycles tested was 0.8531. 54% of the tested 2007-model motorcycles were equipped with a catalytic converter. United States emissions limits The following table shows the maximum acceptable legal emissions of the combination of hydrocarbons, oxides of nitrogen, and carbon monoxide for new motorcycles sold in the United States with 280 cc or greater piston displacement. The maximum acceptable legal emissions of hydrocarbons and carbon monoxide for new Class I and II motorcycles (50 cc–169 cc and 170 cc–279 cc respectively) sold in the United States are as follows: Europe European emission standards for motorcycles are similar to those for cars. New motorcycles must meet Euro 5 standards, while cars must meet Euro 6D-temp standards. Motorcycle emission controls are being updated, and it has been proposed to move to Euro 5+ in 2024.
Vietnam According to the National Environmental Status Report 2016 and recent air quality reports, emissions from motor vehicles have been identified as the main cause of environmental pollution. With over 68 million vehicles in operation nationwide (statistics from the Ministry of Transport, 2021), motorcycles are the largest source of pollutant emissions among them. In Hanoi, there are over 6 million motorcycles, of which nearly 3 million were manufactured before 2000. In Ho Chi Minh City, there are about 7.8 million motorcycles, of which 67.89% are over 10 years old. The air quality index (AQI) in urban centers often spikes during peak traffic times, such as the morning and evening rush hours. A study by the Institute of Environment and Resources, Vietnam National University, Ho Chi Minh City, found that motorcycles account for about 29% of NO emissions, 90% of CO emissions, 65.4% of NMVOC emissions, 37.7% of particulate matter emissions, and 31% of fine particulate matter emissions. Traffic emissions account for 50% of total emissions in Ho Chi Minh City. While the world is moving towards Euro 6 emission standards, most cars in Vietnam meet Euro 4 or Euro 5 standards. However, motorcycles still meet only Euro 2 or Euro 3 standards, which were implemented over 25 years ago.
Technology
Road transport
null
19883
https://en.wikipedia.org/wiki/Mineralogy
Mineralogy
Mineralogy is a subject of geology specializing in the scientific study of the chemistry, crystal structure, and physical (including optical) properties of minerals and mineralized artifacts. Specific studies within mineralogy include the processes of mineral origin and formation, the classification of minerals, their geographical distribution, and their utilization. History Early writing on mineralogy, especially on gemstones, comes from ancient Babylonia, the ancient Greco-Roman world, ancient and medieval China, and Sanskrit texts from ancient India and the ancient Islamic world. Books on the subject included the Natural History of Pliny the Elder, which not only described many different minerals but also explained many of their properties, and Kitab al Jawahir (Book of Precious Stones) by the Persian scientist Al-Biruni. The German Renaissance specialist Georgius Agricola wrote works such as De re metallica (On Metals, 1556) and De Natura Fossilium (On the Nature of Rocks, 1546) which began the scientific approach to the subject. Systematic scientific studies of minerals and rocks developed in post-Renaissance Europe. The modern study of mineralogy was founded on the principles of crystallography (the origins of geometric crystallography itself can be traced back to the mineralogy practiced in the eighteenth and nineteenth centuries) and on the microscopic study of rock sections with the invention of the microscope in the 17th century. Nicolas Steno first observed the law of constancy of interfacial angles (also known as the first law of crystallography) in quartz crystals in 1669. This was later generalized and established experimentally by Jean-Baptiste L. Romé de l'Isle in 1783. René Just Haüy, the "father of modern crystallography", showed that crystals are periodic and established that the orientations of crystal faces can be expressed in terms of rational numbers (law of rational indices), as later encoded in the Miller indices. In 1814, Jöns Jacob Berzelius introduced a classification of minerals based on their chemistry rather than their crystal structure. William Nicol developed the Nicol prism, which polarizes light, in 1827–1828 while studying fossilized wood; Henry Clifton Sorby showed that thin sections of minerals could be identified by their optical properties using a polarizing microscope. James D. Dana published his first edition of A System of Mineralogy in 1837, and in a later edition introduced a chemical classification that is still the standard. X-ray diffraction was demonstrated by Max von Laue in 1912, and developed into a tool for analyzing the crystal structure of minerals by the father-and-son team of William Henry Bragg and William Lawrence Bragg. More recently, driven by advances in experimental technique (such as neutron diffraction) and available computational power, the latter of which has enabled extremely accurate atomic-scale simulations of the behaviour of crystals, the science has branched out to consider more general problems in the fields of inorganic chemistry and solid-state physics. It, however, retains a focus on the crystal structures commonly encountered in rock-forming minerals (such as the perovskites, clay minerals and framework silicates).
In particular, the field has made great advances in the understanding of the relationship between the atomic-scale structure of minerals and their function; in nature, prominent examples would be accurate measurement and prediction of the elastic properties of minerals, which has led to new insight into the seismological behaviour of rocks and depth-related discontinuities in seismograms of the Earth's mantle. To this end, in their focus on the connection between atomic-scale phenomena and macroscopic properties, the mineral sciences (as they are now commonly known) display perhaps more of an overlap with materials science than any other discipline. Physical properties An initial step in identifying a mineral is to examine its physical properties, many of which can be measured on a hand sample. These can be classified into density (often given as specific gravity); measures of mechanical cohesion (hardness, tenacity, cleavage, fracture, parting); macroscopic visual properties (luster, color, streak, luminescence, diaphaneity); magnetic and electric properties; radioactivity; and solubility in hydrogen chloride (HCl). Hardness is determined by comparison with other minerals. In the Mohs scale, a standard set of minerals is numbered in order of increasing hardness from 1 (talc) to 10 (diamond). A harder mineral will scratch a softer one, so an unknown mineral can be placed on this scale by observing which minerals it scratches and which scratch it. A few minerals such as calcite and kyanite have a hardness that depends significantly on direction. Hardness can also be measured on an absolute scale using a sclerometer; compared to the absolute scale, the Mohs scale is nonlinear. Tenacity refers to the way a mineral behaves when it is broken, crushed, bent or torn. A mineral can be brittle, malleable, sectile, ductile, flexible or elastic. An important influence on tenacity is the type of chemical bond (e.g., ionic or metallic). Of the other measures of mechanical cohesion, cleavage is the tendency to break along certain crystallographic planes. It is described by the quality (e.g., perfect or fair) and the orientation of the plane in crystallographic nomenclature. Parting is the tendency to break along planes of weakness due to pressure, twinning or exsolution. Where these two kinds of break do not occur, fracture is a less orderly form that may be conchoidal (having smooth curves resembling the interior of a shell), fibrous, splintery, hackly (jagged with sharp edges), or uneven. If the mineral is well crystallized, it will also have a distinctive crystal habit (for example, hexagonal, columnar, botryoidal) that reflects the crystal structure or internal arrangement of atoms. It is also affected by crystal defects and twinning. Many crystals are polymorphic, having more than one possible crystal structure depending on factors such as pressure and temperature. Crystal structure The crystal structure is the arrangement of atoms in a crystal. It is represented by a lattice of points which repeats a basic pattern, called a unit cell, in three dimensions. The lattice can be characterized by its symmetries and by the dimensions of the unit cell. Planes within the lattice are denoted by triples of integers, the Miller indices. The lattice remains unchanged by certain symmetry operations about any given point in the lattice: reflection, rotation, inversion, and rotary inversion, a combination of rotation and reflection. Together, they make up a mathematical object called a crystallographic point group or crystal class.
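The scratch-test procedure described above amounts to bracketing the unknown between the hardest reference it scratches and the first one it fails to scratch; a sketch of that logic, with hypothetical test data:

```python
# Mohs reference minerals, ordered by increasing hardness (1-10).
MOHS = ["talc", "gypsum", "calcite", "fluorite", "apatite",
        "orthoclase", "quartz", "topaz", "corundum", "diamond"]

def bracket_hardness(scratched_by_unknown: set[str]) -> tuple[int, int]:
    """Bracket an unknown mineral's Mohs hardness from scratch-test results:
    it is at least as hard as every reference it scratches, and softer than
    the first reference it fails to scratch."""
    hardest_scratched = 0
    for i, ref in enumerate(MOHS, start=1):
        if ref in scratched_by_unknown:
            hardest_scratched = i
    return hardest_scratched, min(hardest_scratched + 1, 10)

# An unknown that scratches apatite (5) but not orthoclase (6) lies between 5 and 6.
print(bracket_hardness({"talc", "gypsum", "calcite", "fluorite", "apatite"}))  # (5, 6)
```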
There are 32 possible crystal classes. In addition, there are operations that displace all the points: translation, screw axis, and glide plane. In combination with the point symmetries, they form 230 possible space groups. Most geology departments have X-ray powder diffraction equipment to analyze the crystal structures of minerals. X-rays have wavelengths that are of the same order of magnitude as the distances between atoms. Diffraction, the constructive and destructive interference between waves scattered at different atoms, leads to distinctive patterns of high and low intensity that depend on the geometry of the crystal. In a sample that is ground to a powder, the X-rays sample a random distribution of all crystal orientations. Powder diffraction can distinguish between minerals that may appear the same in a hand sample, for example quartz and its polymorphs tridymite and cristobalite. Isomorphous minerals of different compositions have similar powder diffraction patterns, the main difference being in the spacing and intensity of lines. For example, the NaCl (halite) crystal structure is space group Fm3m; this structure is shared by sylvite (KCl), periclase (MgO), bunsenite (NiO), galena (PbS), alabandite (MnS), chlorargyrite (AgCl), and osbornite (TiN). Chemical elements A few minerals are chemical elements, including sulfur, copper, silver, and gold, but the vast majority are compounds. The classical method for identifying composition is wet chemical analysis, which involves dissolving a mineral in an acid such as hydrochloric acid (HCl). The elements in solution are then identified using colorimetry, volumetric analysis or gravimetric analysis. Since 1960, most chemical analysis has been done using instruments. One of these, atomic absorption spectroscopy, is similar to wet chemistry in that the sample must still be dissolved, but it is much faster and cheaper. The solution is vaporized and its absorption spectrum is measured in the visible and ultraviolet range. Other techniques are X-ray fluorescence, electron microprobe analysis, atom probe tomography and optical emission spectrography. Optical In addition to macroscopic properties such as colour or lustre, minerals have properties that require a polarizing microscope to observe. Transmitted light When light passes from air or a vacuum into a transparent crystal, some of it is reflected at the surface and some refracted. The latter is a bending of the light path that occurs because the speed of light changes as it goes into the crystal; Snell's law relates the bending angle to the refractive index, the ratio of the speed in a vacuum to the speed in the crystal. Crystals whose point symmetry group falls in the cubic system are isotropic: the index does not depend on direction. All other crystals are anisotropic: light passing through them is broken up into two plane-polarized rays that travel at different speeds and refract at different angles. A polarizing microscope is similar to an ordinary microscope, but it has two plane-polarizing filters: a polarizer below the sample and an analyzer above it, polarized perpendicular to each other. Light passes successively through the polarizer, the sample and the analyzer. If there is no sample, the analyzer blocks all the light from the polarizer. However, an anisotropic sample will generally change the polarization so some of the light can pass through. Thin sections and powders can be used as samples. When an isotropic crystal is viewed, it appears dark because it does not change the polarization of the light.
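Powder diffraction geometry rests on Bragg's law, nλ = 2d·sin θ, which is not spelled out in the text above but underlies the "distinctive patterns" it describes. A sketch follows; the Cu Kα wavelength and the sample angle (which lands near the d-spacing of quartz's strongest reflection) are standard illustrative values, not taken from the text:

```python
import math

def bragg_d_spacing(wavelength_nm: float, two_theta_deg: float, n: int = 1) -> float:
    """Interplanar spacing d from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength_nm / (2.0 * math.sin(theta))

# Cu K-alpha radiation (~0.154 nm) diffracting at a 2-theta of 26.64 degrees
# gives d of about 0.334 nm, close to quartz's strongest reflection.
print(round(bragg_d_spacing(0.15406, 26.64), 4))  # ~0.3343
```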
However, when it is immersed in a calibrated liquid with a lower index of refraction and the microscope is thrown out of focus, a bright line called a Becke line appears around the perimeter of the crystal. By observing the presence or absence of such lines in liquids with different indices, the index of the crystal can be estimated. Systematic Systematic mineralogy is the identification and classification of minerals by their properties. Historically, mineralogy was heavily concerned with the taxonomy of the rock-forming minerals. In 1959, the International Mineralogical Association formed the Commission on New Minerals and Mineral Names to rationalize the nomenclature and regulate the introduction of new names. In July 2006, it was merged with the Commission on Classification of Minerals to form the Commission on New Minerals, Nomenclature, and Classification. There are over 6,000 named and unnamed minerals, and about 100 are discovered each year. The Manual of Mineralogy places minerals in the following classes: native elements, sulfides, sulfosalts, oxides and hydroxides, halides, carbonates, nitrates and borates, sulfates, chromates, molybdates and tungstates, phosphates, arsenates and vanadates, and silicates. Formation environments The environments of mineral formation and growth are highly varied, ranging from slow crystallization at the high temperatures and pressures of igneous melts deep within the Earth's crust to low-temperature precipitation from a saline brine at the Earth's surface. Various possible methods of formation include: sublimation from volcanic gases; deposition from aqueous solutions and hydrothermal brines; crystallization from an igneous magma or lava; recrystallization due to metamorphic processes and metasomatism; crystallization during diagenesis of sediments; and formation by oxidation and weathering of rocks exposed to the atmosphere or within the soil environment. Biomineralogy Biomineralogy is a cross-over field between mineralogy, paleontology and biology. It is the study of how plants and animals stabilize minerals under biological control, and the sequencing of mineral replacement of those minerals after deposition. It uses techniques from chemical mineralogy, especially isotopic studies, to determine such things as growth forms in living plants and animals as well as things like the original mineral content of fossils. A new approach to mineralogy called mineral evolution explores the co-evolution of the geosphere and biosphere, including the role of minerals in the origin of life and in processes such as mineral-catalyzed organic synthesis and the selective adsorption of organic molecules on mineral surfaces. Mineral ecology In 2011, several researchers began to develop a Mineral Evolution Database. This database integrates the crowd-sourced site Mindat.org, which has over 690,000 mineral-locality pairs, with the official IMA list of approved minerals and age data from geological publications. This database makes it possible to apply statistics to answer new questions, an approach that has been called mineral ecology. One such question is how much of mineral evolution is deterministic and how much is the result of chance. Some factors are deterministic, such as the chemical nature of a mineral and the conditions for its stability; but mineralogy can also be affected by the processes that determine a planet's composition. In a 2015 paper, Robert Hazen and others analyzed the number of minerals involving each element as a function of its abundance.
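Estimating an index by successive Becke-line comparisons against calibrated liquids is essentially a bisection; a sketch of that logic with a simulated quartz-like crystal (all names and values are illustrative):

```python
def estimate_index(becke_test, lo: float = 1.40, hi: float = 1.80, tol: float = 0.005) -> float:
    """Bracket a crystal's refractive index by repeated Becke-line comparisons.
    becke_test(n_liquid) returns True if the crystal's index exceeds the liquid's."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if becke_test(mid):
            lo = mid  # crystal index is above this liquid's index
        else:
            hi = mid
    return (lo + hi) / 2.0

# Simulated crystal with true index 1.544 (quartz-like):
print(round(estimate_index(lambda n: 1.544 > n), 3))  # ~1.54
```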
They found that Earth, with over 4,800 known minerals and 72 elements, has a power-law relationship between the two. The Moon, with only 63 minerals and 24 elements (based on a much smaller sample), has essentially the same relationship. This implies that, given the chemical composition of the planet, one could predict the more common minerals. However, the distribution has a long tail, with 34% of the minerals having been found at only one or two locations. The model predicts that thousands more mineral species may await discovery or have formed and then been lost to erosion, burial or other processes. This implies that chance plays a role in the formation of rare minerals. In another use of big data sets, network theory was applied to a dataset of carbon minerals, revealing new patterns in their diversity and distribution. The analysis can show which minerals tend to coexist and what conditions (geological, physical, chemical and biological) are associated with them. This information can be used to predict where to look for new deposits and even new mineral species. Uses Minerals are essential to various needs within human society: they are used as ores for the essential components of metal products used in various commodities and machinery, and as essential components of building materials such as limestone, marble, granite, gravel, glass, plaster and cement. Minerals are also used in fertilizers to enrich the growth of agricultural crops. Collecting Mineral collecting is also a recreational study and collection hobby, with clubs and societies representing the field. Museums, such as the Smithsonian National Museum of Natural History Hall of Geology, Gems, and Minerals, the Natural History Museum of Los Angeles County, the Carnegie Museum of Natural History, the Natural History Museum, London, and the private Mim Mineral Museum in Beirut, Lebanon, have popular collections of mineral specimens on permanent display.
Physical sciences
Mineralogy
null
19895
https://en.wikipedia.org/wiki/Molecular%20cloud
Molecular cloud
A molecular cloud, sometimes called a stellar nursery if star formation is occurring within, is a type of interstellar cloud whose density and size permit absorption nebulae, the formation of molecules (most commonly molecular hydrogen, H2), and the formation of H II regions. This is in contrast to other areas of the interstellar medium that contain predominantly ionized gas. Molecular hydrogen is difficult to detect by infrared and radio observations, so the molecule most often used to determine the presence of H2 is carbon monoxide (CO). The ratio between CO luminosity and H2 mass is thought to be constant, although there are reasons to doubt this assumption in observations of some other galaxies. Within molecular clouds are regions with higher density, where much dust and many gas cores reside, called clumps. These clumps are the beginning of star formation if gravitational forces are sufficient to cause the dust and gas to collapse. Research and discovery The history of the discovery of molecular clouds is closely related to the development of radio astronomy and astrochemistry. During World War II, at a small gathering of scientists, Henk van de Hulst first reported that he had calculated that the neutral hydrogen atom should emit a detectable radio signal. This discovery was an important step towards the research that would eventually lead to the detection of molecular clouds. Once the war ended, and aware of the pioneering radio astronomical observations performed by Jansky and Reber in the US, the Dutch astronomers repurposed the dish-shaped antennas running along the Dutch coastline, once used by the Germans as a warning radar system, modifying them into radio telescopes and initiating the search for the hydrogen signature in the depths of space. The neutral hydrogen atom consists of a proton with an electron in its orbit. Both the proton and the electron have a spin property. When the spin state flips from the parallel condition to the antiparallel one, which contains less energy, the atom gets rid of the excess energy by radiating a spectral line at a frequency of 1420.405 MHz. This frequency is generally known as the 21 cm line, referring to its wavelength in the radio band. The 21 cm line is the signature of HI and makes the gas detectable to astronomers back on Earth. The discovery of the 21 cm line was the first step towards the technology that would allow astronomers to detect compounds and molecules in interstellar space. In 1951, two research groups nearly simultaneously discovered radio emission from interstellar neutral hydrogen. Ewen and Purcell reported the detection of the 21 cm line in March 1951. Using the radio telescope at the Kootwijk Observatory, Muller and Oort reported the detection of the hydrogen emission line in May of that same year. Once the 21 cm emission line was detected, radio astronomers began mapping the neutral hydrogen distribution of the Milky Way Galaxy. Van de Hulst, Muller, and Oort, aided by a team of astronomers from Australia, published the Leiden-Sydney map of neutral hydrogen in the galactic disc in 1958 in the Monthly Notices of the Royal Astronomical Society. This was the first neutral hydrogen map of the galactic disc, and also the first map showing the spiral arm structure within it. Following the work on atomic hydrogen detection by van de Hulst, Oort and others, astronomers began to regularly use radio telescopes, this time looking for interstellar molecules.
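The correspondence between the quoted 1420.405 MHz frequency and the "21 cm" name is just λ = c/ν; a one-line check:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(frequency_mhz: float) -> float:
    """Wavelength (cm) corresponding to a given frequency (MHz)."""
    return C / (frequency_mhz * 1e6) * 100.0

print(round(wavelength_cm(1420.405), 2))  # ~21.11 cm, the hydrogen line
```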
In 1963, Alan Barrett and Sander Weinreb at MIT found the spectral line of OH in the supernova remnant Cassiopeia A. This was the first detection of an interstellar molecule at radio wavelengths. More interstellar OH detections quickly followed, and in 1965 Harold Weaver and his team of radio astronomers at Berkeley identified OH emission lines coming from the direction of the Orion Nebula and in the constellation of Cassiopeia. In 1968, Cheung, Rank, Townes, Thornton and Welch detected NH₃ inversion line radiation in interstellar space. A year later, Lewis Snyder and his colleagues found interstellar formaldehyde. Also in the same year, George Carruthers managed to identify molecular hydrogen. The numerous detections of molecules in interstellar space would help pave the way to the discovery of molecular clouds in 1970. Hydrogen is the most abundant species of atom in molecular clouds, and under the right conditions it will form the H2 molecule. Despite its abundance, the detection of H2 proved difficult. Because H2 is a symmetric molecule, it has only weak rotational and vibrational modes, making it virtually invisible to direct observation. The solution to this problem came when Arno Penzias, Keith Jefferts, and Robert Wilson identified CO in the star-forming region in the Omega Nebula. Carbon monoxide is much easier to detect than H2 because of its asymmetrical structure and readily excited rotational transitions. CO soon became the primary tracer of the clouds where star formation occurs. In 1970, Penzias and his team quickly detected CO in other locations close to the galactic center, including the giant molecular cloud identified as Sagittarius B2, 390 light years from the galactic center, making it the first detection of a molecular cloud in history. Penzias and Wilson would later receive the Nobel Prize in Physics for their discovery of the cosmic microwave background radiation. Due to their pivotal role, research on these structures has only increased over time. A paper published in 2022 reports over 10,000 molecular clouds detected since the discovery of Sagittarius B2. Occurrence Within the Milky Way, molecular gas clouds account for less than one percent of the volume of the interstellar medium (ISM), yet they also constitute its densest part. The bulk of the molecular gas is contained in a ring between 3.5 and 7.5 kiloparsecs from the center of the Milky Way (the Sun is about 8.5 kiloparsecs from the center). Large scale CO maps of the galaxy show that the position of this gas correlates with the spiral arms of the galaxy. That molecular gas occurs predominantly in the spiral arms suggests that molecular clouds must form and dissociate on a timescale shorter than 10 million years—the time it takes for material to pass through the arm region. Perpendicular to the plane of the galaxy, the molecular gas inhabits the narrow midplane of the galactic disc with a characteristic scale height, Z, of approximately 50 to 75 parsecs, much thinner than the warm atomic (Z from 130 to 400 parsecs) and warm ionized (Z around 1000 parsecs) gaseous components of the ISM. The exceptions to the ionized-gas distribution are H II regions, which are bubbles of hot ionized gas created in molecular clouds by the intense radiation given off by young massive stars, and as such they have approximately the same vertical distribution as the molecular gas. 
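The "constant ratio" between CO luminosity and H2 mass noted at the start of this article is conventionally packaged as a CO-to-H2 conversion factor. The sketch below uses the commonly quoted Milky Way disc value α_CO ≈ 4.3 M☉ (K km s⁻¹ pc²)⁻¹; both this value and the example luminosity are assumptions for illustration, not figures from this article:

# Estimate molecular gas mass from a CO(1-0) line luminosity using the
# standard linear conversion M_gas = alpha_CO * L_CO (includes helium).
ALPHA_CO = 4.3        # M_sun per (K km/s pc^2), typical Milky Way disc value
L_CO = 1.0e5          # hypothetical CO luminosity, K km/s pc^2
mass = ALPHA_CO * L_CO
print(f"M_gas ~ {mass:.1e} M_sun")  # -> ~4.3e5 M_sun, a modest giant molecular cloud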
This distribution of molecular gas is averaged out over large distances; however, the small scale distribution of the gas is highly irregular, with most of it concentrated in discrete clouds and cloud complexes. General structure and chemistry of molecular clouds Molecular clouds typically have interstellar medium densities of 10 to 30 particles per cubic centimetre, and constitute approximately 50% of the total interstellar gas in a galaxy. Most of the gas is found in a molecular state. The visual boundaries of a molecular cloud are not where the cloud effectively ends, but where molecular gas changes to atomic gas in a fast transition, forming "envelopes" of mass, giving the impression of an edge to the cloud structure. The structure itself is generally irregular and filamentary. Cosmic dust and ultraviolet radiation emitted by stars are key factors that determine not only the gas density and column density, but also the molecular composition of a cloud. The dust provides shielding to the molecular gas inside, preventing dissociation by the ultraviolet radiation. The dissociation caused by UV photons is the main mechanism for transforming molecular material back to the atomic state inside the cloud. Molecular content in a region of a molecular cloud can change rapidly due to variation in the radiation field and to dust movement and disturbance. Most of the gas constituting a molecular cloud is molecular hydrogen, with carbon monoxide being the second most common compound. Molecular clouds also usually contain other elements and compounds. Astronomers have observed the presence of long-chain compounds such as methanol and ethanol, as well as benzene rings and several of their hydrides. Large molecules known as polycyclic aromatic hydrocarbons have also been detected. The density across a molecular cloud is fragmented, and its regions can generally be categorized as clumps and cores. Clumps form the larger substructure of the cloud, with an average size of about 1 pc. Clumps are the precursors of star clusters, though not every clump will eventually form stars. Cores are much smaller (by a factor of 10) and have higher densities. Cores are gravitationally bound and go through a collapse during star formation. In astronomical terms, molecular clouds are short-lived structures that are either destroyed or go through major structural and chemical changes approximately 10 million years into their existence. Their short life span can be inferred from the range in age of young stars associated with them, of 10 to 20 million years, matching molecular clouds' internal timescales. Direct observation of T Tauri stars inside dark clouds and OB stars in star-forming regions matches this predicted age span. The fact that OB stars older than 10 million years do not have a significant amount of cloud material around them suggests that most of the cloud is dispersed after this time. The lack of large amounts of frozen molecules inside the clouds also suggests a short-lived structure. Some astronomers propose that the molecules never froze in very large quantities, due to turbulence and the fast transition between atomic and molecular gas. Cloud formation and destruction Due to their short lifespan, it follows that molecular clouds are constantly being assembled and destroyed. By calculating the rate at which stars are forming in our galaxy, astronomers can estimate the amount of interstellar gas being collected into star-forming molecular clouds in our galaxy. The rate of mass being assembled into stars is approximately 3 M☉ per year. 
Only about 2% of the mass of a molecular cloud is assembled into stars, which implies that roughly 150 M☉ of gas (3 M☉ per year ÷ 0.02) must be assembled into molecular clouds in the Milky Way per year. Two possible mechanisms for molecular cloud formation have been suggested by astronomers: cloud growth by collision, and gravitational instability in the gas layer spread throughout the galaxy. Models for the collision theory have shown it cannot be the main mechanism for cloud formation due to the very long timescale it would take to form a molecular cloud, beyond the average lifespan of such structures. Gravitational instability is likely to be the main mechanism. Those regions with more gas will exert a greater gravitational force on their neighboring regions and draw in surrounding material. This extra material increases the density, further increasing the gravitational attraction. Mathematical models of gravitational instability in the gas layer predict formation times within the estimated cloud formation timescale. Once a molecular cloud assembles enough mass, the densest regions of the structure will start to collapse under gravity, creating star-forming clusters. This process is highly destructive to the cloud itself. Once stars are formed, they begin to ionize portions of the cloud around them due to their heat. The ionized gas then evaporates and is dispersed in formations called "champagne flows". This process begins when approximately 2% of the mass of the cloud has been converted into stars. Stellar winds are also known to contribute to cloud dispersal. The cycle of cloud formation and destruction is closed when the gas dispersed by stars cools again and is pulled into new clouds by gravitational instability. Star Formation Star formation involves the collapse of the densest part of the molecular cloud, fragmenting the collapsed region into smaller clumps. These clumps aggregate more interstellar material, increasing in density by gravitational contraction. This process continues until the temperature reaches a point where the fusion of hydrogen can occur. The burning of hydrogen then generates enough heat to push against gravity, creating hydrostatic equilibrium. At this stage, a protostar is formed, and it will continue to aggregate gas and dust from the cloud around it. One of the most studied star formation regions is the Taurus molecular cloud, due to its close proximity to Earth (140 pc or 430 ly away), making it an excellent object for collecting data about the relationship between molecular clouds and star formation. Embedded in the Taurus molecular cloud there are T Tauri stars. These are a class of variable stars in an early stage of stellar development, still gathering gas and dust from the cloud around them. Observations of star-forming regions have helped astronomers develop theories about stellar evolution. Many O- and B-type stars have been observed in or very near molecular clouds. Since these star types belong to population I (some are less than 1 million years old), they cannot have moved far from their birthplace. Many of these young stars are found embedded in cloud clusters, suggesting that stars are formed inside them. Types of molecular cloud Giant molecular clouds A vast assemblage of molecular gas that has more than 10 thousand times the mass of the Sun is called a giant molecular cloud (GMC). GMCs are around 15 to 600 light-years (5 to 200 parsecs) in diameter, with typical masses of 10 thousand to 10 million solar masses. 
Whereas the average density in the solar vicinity is one particle per cubic centimetre, the average volume density of a GMC is about ten to a thousand times higher. Although the Sun is much denser than a GMC, the volume of a GMC is so great that it contains much more mass than the Sun. The substructure of a GMC is a complex pattern of filaments, sheets, bubbles, and irregular clumps. Filaments are ubiquitous in molecular clouds. Dense molecular filaments will fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed manner in which the filaments fragment. In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with a spacing of about 0.15 parsec, comparable to the filament's inner width. A substantial fraction of filaments contain prestellar and protostellar cores, supporting the important role of filaments in gravitationally bound core formation. Recent studies have suggested that filamentary structures in molecular clouds play a crucial role in the initial conditions of star formation and the origin of the stellar initial mass function (IMF). The densest parts of the filaments and clumps are called molecular cores, while the densest molecular cores are called dense molecular cores and have densities in excess of 10⁴ to 10⁶ particles per cubic centimetre. Typical molecular cores are traced with CO, and dense molecular cores are traced with ammonia. The concentration of dust within molecular cores is normally sufficient to block light from background stars so that they appear in silhouette as dark nebulae. GMCs are so large that local ones can cover a significant fraction of a constellation; thus they are often referred to by the name of that constellation, e.g. the Orion molecular cloud (OMC) or the Taurus molecular cloud (TMC). These local GMCs are arrayed in a ring in the neighborhood of the Sun coinciding with the Gould Belt. The most massive collection of molecular clouds in the galaxy forms an asymmetrical ring about the galactic center at a radius of 120 parsecs; the largest component of this ring is the Sagittarius B2 complex. The Sagittarius region is chemically rich and is often used as an exemplar by astronomers searching for new molecules in interstellar space. Small molecular clouds Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules. The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies. High-latitude diffuse molecular clouds In 1984, IRAS identified a new type of diffuse molecular cloud. These were diffuse filamentary clouds that are visible at high galactic latitudes. These clouds have a typical density of 30 particles per cubic centimetre. List of molecular cloud complexes
Sagittarius B2
Serpens-Aquila Rift
Rho Ophiuchi cloud complex
Corona Australis molecular cloud
Musca–Chamaeleonis molecular cloud
Vela Molecular Ridge
Radcliffe wave
Orion molecular cloud complex
Taurus molecular cloud
Perseus molecular cloud
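To make the notion of a "gravitationally bound" core discussed above quantitative, the classical Jeans mass balances thermal pressure against self-gravity: a clump heavier than M_J = (5kT / (G μ m_H))^(3/2) (3 / (4πρ))^(1/2) collapses. The order-of-magnitude sketch below evaluates this at dense-core conditions; the temperature and density are assumed typical values, not figures taken from this article:

import math

# Jeans mass M_J = (5 k T / (G mu m_H))**1.5 * (3 / (4 pi rho))**0.5
k, G, m_H = 1.381e-23, 6.674e-11, 1.673e-27  # SI: Boltzmann, gravity, H mass
mu = 2.33            # mean molecular weight of H2 gas with helium
T = 10.0             # assumed core temperature, K
n = 1.0e10           # assumed particle density, m^-3 (10^4 per cubic centimetre)
rho = n * mu * m_H   # mass density, kg/m^3

m_jeans = (5 * k * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5
print(f"Jeans mass ~ {m_jeans / 1.989e30:.1f} M_sun")  # -> ~5 solar masses

At these conditions the Jeans mass comes out at a few solar masses, which is consistent with cores of roughly stellar mass being the units that collapse into stars.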
Physical sciences
Basics_3
null
19901
https://en.wikipedia.org/wiki/M16%20rifle
M16 rifle
The M16 rifle (officially designated Rifle, Caliber 5.56 mm, M16) is a family of assault rifles adapted from the ArmaLite AR-15 rifle for the United States military. The original M16 rifle was a 5.56×45mm automatic rifle with a 20-round magazine. In 1964, the M16 entered US military service, and in the following year it was deployed for jungle warfare operations during the Vietnam War. In 1969, the M16A1 replaced the M14 rifle to become the US military's standard service rifle. The M16A1 incorporated numerous modifications including a bolt assist ("forward assist"), chrome-plated bore, protective reinforcement around the magazine release, and revised flash hider. In 1983, the US Marine Corps adopted the M16A2 rifle, and the US Army adopted it in 1986. The M16A2 fires the improved 5.56×45mm (M855/SS109) cartridge and has a newer adjustable rear sight, case deflector, heavy barrel, improved handguard, pistol grip, and buttstock, as well as a semi-auto and three-round burst fire selector. Adopted in July 1997, the M16A4 is the fourth generation of the M16 series. It is equipped with a removable carrying handle and quad Picatinny rail for mounting optics and other ancillary devices. The M16 has also been widely adopted by other armed forces around the world. Total worldwide production of M16s is approximately 8 million, making it the most-produced firearm of its 5.56 mm caliber. The US military has largely replaced the M16 in frontline combat units with a shorter and lighter version, the M4 carbine. In April 2022, the U.S. Army selected the SIG MCX SPEAR as the winner of the Next Generation Squad Weapon Program to replace the M16/M4. The new rifle is designated XM7. History Background In 1928, a U.S. Army 'Caliber Board' conducted firing tests at Aberdeen Proving Ground and recommended transitioning to smaller caliber rounds, mentioning in particular the .276 caliber. Largely in deference to tradition, this recommendation was ignored, and the Army referred to the .30 caliber as "full-sized" for the next 35 years. After World War II, the United States military started looking for a single automatic rifle to replace the M1 Garand, M1/M2 carbines, M1918 Browning Automatic Rifle, M3 "Grease Gun" and Thompson submachine gun. However, early experiments with select-fire versions of the M1 Garand proved disappointing. During the Korean War, the select-fire M2 carbine largely replaced the submachine gun in US service and became the most widely used carbine variant. However, combat experience suggested that the .30 carbine round was underpowered. American weapons designers concluded that an intermediate round was necessary, and recommended a small-caliber, high-velocity cartridge. However, senior American commanders, having faced fanatical enemies and experienced major logistical problems during World War II and the Korean War, insisted that a single, powerful .30 caliber cartridge be developed that could be used not only by the new automatic rifle but also by the new general-purpose machine gun (GPMG) in concurrent development. This culminated in the development of the 7.62×51mm NATO cartridge. The U.S. Army then began testing several rifles to replace the obsolete M1. Springfield Armory's T44E4 and heavier T44E5 were essentially updated versions of the M1 chambered for the new 7.62 mm round, while Fabrique Nationale submitted their FN FAL as the T48. ArmaLite entered the competition late, hurriedly submitting several AR-10 prototype rifles in the fall of 1956 to the U.S. 
Army's Springfield Armory for testing. The AR-10 featured an innovative straight-line barrel/stock design, forged aluminum alloy receivers, and phenolic composite stocks. It had rugged elevated sights, an oversized aluminum flash suppressor and recoil compensator, and an adjustable gas system. The final prototype featured an upper and lower receiver with the now-familiar hinge and takedown pins, and the charging handle was on top of the receiver, placed inside the carry handle. For a 7.62 mm NATO rifle, the AR-10 was incredibly lightweight when empty. Initial comments by Springfield Armory test staff were favorable, and some testers commented that the AR-10 was the best lightweight automatic rifle ever tested by the Armory. In the end, the U.S. Army chose the T44, now named the M14 rifle, which was an improved M1 Garand with a 20-round magazine and automatic fire capability. The U.S. also adopted the M60 general-purpose machine gun (GPMG). Its NATO partners adopted the FN FAL and HK G3 rifles, as well as the FN MAG and Rheinmetall MG3 GPMGs. The first confrontations between the AK-47 and the M14 came in the early part of the Vietnam War. Battlefield reports indicated that the M14 was uncontrollable in full-auto and that soldiers could not carry enough ammunition to maintain fire superiority over the AK-47. And, while the M2 carbine offered a high rate of fire, it was under-powered and ultimately outclassed by the AK-47. A replacement was needed: a middle ground between the traditional preference for high-powered rifles such as the M14 and the lightweight firepower of the M2 carbine. As a result, the Army was forced to reconsider a 1957 request by General Willard G. Wyman, commander of the U.S. Continental Army Command (CONARC), to develop a .223-inch caliber (5.56 mm) select-fire rifle weighing 6 lb (2.7 kg) when loaded with a 20-round magazine. The 5.56 mm round had to penetrate a standard U.S. helmet at 500 yards (460 m) and retain a velocity over the speed of sound, while matching or exceeding the wounding ability of the .30 carbine cartridge. This request ultimately resulted in the development of a scaled-down version of the ArmaLite AR-10, named the ArmaLite AR-15. The AR-15 was first revealed by Eugene Stoner at Fort Benning in May 1957. The AR-15 used .22-caliber bullets, which destabilized when they hit a human body, as opposed to the .30 round, which typically passed through in a straight line. The smaller caliber meant that it could be controlled in autofire due to the reduced bolt thrust and free recoil impulse. Because the ammunition weighed almost one-third as much as the .30 caliber round, the soldier could sustain fire for longer with the same load. Due to design innovations, the AR-15 could fire 600 to 700 rounds a minute with an extremely low jamming rate. Parts were stamped out, not hand-machined, so they could be mass-produced, and the stock was plastic to reduce weight. In 1958, the Army's Combat Developments Experimentation Command ran experiments with small squads in combat situations using the M14, AR-15, and Winchester's Light Weight Military Rifle (WLWMR). The resulting study recommended adopting a lightweight rifle like the AR-15. In response, the Army declared that all rifles and machine guns should use the same ammunition and ordered full production of the M14. However, advocates for the AR-15 gained the attention of Air Force Chief of Staff General Curtis LeMay. 
After testing the AR-15 with the ammunition manufactured by Remington that ArmaLite and Colt recommended, the Air Force declared that the AR-15 was its 'standard model' and ordered 8,500 rifles and 8.5 million rounds. Advocates for the AR-15 in the Defense Advanced Research Projects Agency acquired 1,000 Air Force AR-15s and shipped them to be tested by the Army of the Republic of Vietnam (ARVN). The South Vietnamese soldiers issued glowing reports of the weapon's reliability, recording zero broken parts while firing 80,000 rounds in one stage of testing, and requiring only two replacement parts for the 1,000 weapons over the entire course of testing. The report of the experiment recommended that the U.S. provide the AR-15 as the standard rifle of the ARVN, but Admiral Harry Felt, then Commander in Chief of Pacific Forces, rejected the recommendation on the advice of the U.S. Army. Throughout 1962 and 1963, the U.S. military extensively tested the AR-15. Positive evaluations emphasized its lightness, "lethality", and reliability. However, the Army Materiel Command criticized its inaccuracy and lack of penetrating power at longer ranges. In early 1963, the U.S. Special Forces asked for, and was given, permission to make the AR-15 its standard weapon. Other users included Army Airborne units in Vietnam and some units affiliated with the Central Intelligence Agency. As more units adopted the AR-15, Secretary of the Army Cyrus Vance ordered an investigation into why the weapon had been rejected by the Army. The resulting report found that Army Materiel Command had rigged the previous tests, selecting tests that would favor the M14 and choosing match-grade M14s to compete against AR-15s straight out of the box. At this point, the bureaucratic battle lines were well-defined, with the Army ordnance agencies opposed to the AR-15 and the Air Force and civilian leadership of the Defense Department in favor. In January 1963, Secretary of Defense Robert McNamara concluded that the AR-15 was the superior weapon system and ordered a halt to M14 production. In late 1963, the Defense Department began mass procurement of rifles for the Air Force and special Army units. Secretary McNamara designated the Army as the procurer for the weapon within the Department, which allowed the Army ordnance establishment to modify the weapon as it wished. The first modification was the addition of a "manual bolt closure", allowing a soldier to ram in a round if it failed to seat properly. The Air Force, which was buying the rifle, and the Marine Corps, which had tested it, both objected to this addition, with the Air Force noting, "During three years of testing and operation of the AR-15 rifle under all types of conditions the Air Force has no record of malfunctions that could have been corrected by a manual bolt closing device." They also noted that the closure added weight and complexity, reducing the reliability of the weapon. Colonel Harold Yount, who managed the Army procurement, would later state that the bolt closure was added after direction from senior leadership, rather than as a result of any complaint or test result, and testified about the reasons: "the M-1, the M-14, and the carbine had always had something for the soldier to push on; that maybe this would be a comforting feeling to him or something." 
After modifications, the redesigned rifle was subsequently adopted as the M16 rifle. Despite its early failures, the M16 proved to be a revolutionary design and stands as the longest continuously serving rifle in US military history. It has been adopted by many US allies, and the 5.56×45mm NATO cartridge has become not only the NATO standard but "the standard assault-rifle cartridge in much of the world." It also led to the development of small-caliber high-velocity service rifles by every major army in the world. It is a benchmark against which other assault rifles are judged. Adoption In July 1960, General Curtis LeMay was impressed by a demonstration of the ArmaLite AR-15. In the summer of 1961, General LeMay was promoted to U.S. Air Force chief of staff and requested 80,000 AR-15s. However, General Maxwell D. Taylor, chairman of the Joint Chiefs of Staff, advised President John F. Kennedy that having two different calibers within the military system at the same time would be problematic, and the request was rejected. In October 1961, William Godel, a senior man at the Advanced Research Projects Agency, sent 10 AR-15s to South Vietnam. The reception was enthusiastic, and in 1962 another 1,000 AR-15s were sent. United States Army Special Forces personnel filed battlefield reports lavishly praising the AR-15 and the stopping power of the 5.56 mm cartridge, and pressed for its adoption. The damage caused by the 5.56 mm bullet was originally believed to be caused by "tumbling" due to the slow 1-turn-in-14-inches rifling twist rate. However, any pointed lead-core bullet will "tumble" after penetration into flesh, because the center of gravity is towards the rear of the bullet. The large wounds observed by soldiers in Vietnam were caused by bullet fragmentation, created by a combination of the bullet's velocity and construction. These wounds were so devastating that the photographs remained classified into the 1980s. However, despite overwhelming evidence that the AR-15 could bring more firepower to bear than the M14, the Army opposed the adoption of the new rifle. U.S. Secretary of Defense Robert McNamara now had two conflicting views: the ARPA report favoring the AR-15 and the Army's position favoring the M14. Even President Kennedy expressed concern, so McNamara ordered the Secretary of the Army, Cyrus Vance, to test the M14, the AR-15, and the AK-47. The Army reported that only the M14 was suitable for service, but Vance wondered about the impartiality of those conducting the tests. He ordered the Army Inspector General to investigate the testing methods used; the inspector general confirmed that the testers were biased toward the M14. In January 1963, Secretary McNamara received reports that M14 production was insufficient to meet the needs of the armed forces and ordered a halt to M14 production. At the time, the AR-15 was the only rifle that could fulfill the requirement of a "universal" infantry weapon for issue to all services. McNamara ordered its adoption, despite receiving reports of several deficiencies, most notably the lack of a chrome-plated chamber. After modifications (most notably, the charging handle was relocated from under the carrying handle, as on the AR-10, to the rear of the receiver), the newly redesigned rifle was renamed the Rifle, Caliber 5.56 mm, M16. Inexplicably, the modification to the new M16 did not include a chrome-plated barrel. Meanwhile, the Army relented and recommended the adoption of the M16 for jungle warfare operations. 
However, the Army insisted on the inclusion of a forward assist to help push the bolt into battery if a cartridge failed to seat into the chamber. The Air Force, Colt, and Eugene Stoner believed that the addition of a forward assist was an unjustified expense. As a result, the design was split into two variants: the Air Force's M16 without the forward assist, and the XM16E1 with the forward assist for the other service branches. In November 1963, McNamara approved the U.S. Army's order of 85,000 XM16E1s, and to appease General LeMay, the Air Force was granted an order for another 19,000 M16s. In March 1964, the M16 rifle went into production, and the Army accepted delivery of the first batch of 2,129 rifles later that year, and an additional 57,240 rifles the following year. In 1964, the Army was informed that DuPont could not mass-produce the IMR 4475 stick powder to the specifications demanded by the M16. Therefore, the Olin Mathieson Company provided a high-performance ball propellant. While the Olin WC 846 powder achieved the desired muzzle velocity, it produced much more fouling, which quickly jammed the M16's action (unless the rifle was cleaned well and often). In March 1965, the Army began to issue the XM16E1 to infantry units. However, the rifle was initially delivered without adequate cleaning kits or instructions, because advertising from Colt had asserted that the M16's materials made the weapon require little maintenance, leading to a misconception that the rifle was capable of self-cleaning. Furthermore, cleaning was often conducted with improper equipment, such as insect repellent, water, and aircraft fuel, which induced further wear on the weapon. As a result, reports of stoppages in combat began to surface. The most severe problem was known as "failure to extract"—the spent cartridge case remained lodged in the chamber after the rifle was fired. Documented accounts of dead U.S. troops found next to disassembled rifles eventually led to a Congressional investigation. In February 1967, the improved XM16E1 was standardized as the M16A1. The new rifle had a chrome-plated chamber and bore to eliminate corrosion and stuck cartridges, and other minor modifications. New cleaning kits, powder solvents, and lubricants were also issued. Intensive training programs in weapons cleaning were instituted, including a comic book-style operations manual. As a result, reliability problems were largely resolved, and the M16A1 rifle achieved widespread acceptance by U.S. troops in Vietnam. In 1969, the M16A1 officially replaced the M14 rifle to become the U.S. military's standard service rifle. In 1970, the new WC 844 powder was introduced to reduce fouling. Colt, H&R, and the GM Hydramatic Division manufactured M16A1 rifles during the Vietnam War. M16s were produced by Colt until the late 1980s, when FN Herstal (FN USA) began to manufacture them. Reliability During the early part of its service, the M16 had a reputation for poor reliability, with a malfunction rate of two per 1000 rounds fired. The M16's action works by passing high-pressure propellant gases, tapped from the barrel, down a tube and into the carrier group within the upper receiver. The gas goes from the gas tube, through the bolt carrier key, and into the inside of the carrier, where it expands in a donut-shaped gas-piston cylinder. Because the bolt is prevented from moving forward by the barrel, the carrier is driven to the rear by the expanding gases and thus converts the energy of the gas to the movement of the rifle's parts. 
The back part of the bolt forms a piston head, and the cavity in the bolt carrier is the piston sleeve. Although the M16 is commonly said to use a direct impingement system, it in fact uses an internal piston system. The system is, however, ammunition-specific, since it has no adjustable gas port or valve to adapt the weapon to the pressure behavior of different propellants, projectiles, or barrel lengths. The M16 operating system designed by Stoner is lighter and more compact than a gas-piston design. However, this design requires that combustion byproducts from the discharged cartridge be blown into the receiver as well. The accumulating carbon and vaporized metal build-up within the receiver and bolt carrier negatively affects reliability and necessitates more intensive maintenance on the part of the individual soldier. The channeling of gases into the bolt carrier during operation increases the amount of heat deposited in the receiver while firing the M16 and causes the essential lubricant to be "burned off". This requires frequent and generous applications of appropriate lubricant. Lack of proper lubrication is the most common source of weapon stoppages or jams. The original M16 fared poorly in the jungles of Vietnam and was infamous for reliability problems in harsh environments. Max Hastings was very critical of the M16's general field issue in Vietnam just as grievous design flaws were becoming apparent. He further states that the Shooting Times experienced repeated malfunctions with a test M16 and assumed these would be corrected before military use, but they were not. Many marines and soldiers were so angry about the reliability problems that they began writing home, and on 26 March 1967 the Washington Daily News broke the story. Eventually, the M16 became the target of a congressional investigation. The investigation found that:
The M16 was issued to troops without cleaning kits or instructions on how to clean the rifle.
The M16 and its 5.56×45mm cartridge were tested and approved with the use of DuPont IMR8208M extruded powder, which was switched to Olin Mathieson WC846 ball powder; the ball powder produced much more fouling, which quickly jammed the action of the M16 (unless the gun was cleaned well and often).
The M16 lacked a forward assist (rendering the rifle inoperable when the bolt failed to go fully forward).
The M16 lacked a chrome-plated chamber, which allowed corrosion problems and contributed to case-extraction failures (considered the most severe problem, and one that required extreme measures to clear, such as inserting the cleaning rod down the barrel and knocking the spent cartridge out).
When these issues were addressed and corrected by the M16A1, the reliability problems decreased greatly. According to a 1968 Department of the Army report, the M16A1 rifle achieved widespread acceptance by U.S. troops in Vietnam. "Most men armed with the M16 in Vietnam rated this rifle's performance high, however, many men entertained some misgivings about the M16's reliability. When asked what weapon they preferred to carry in combat, 85 percent indicated that they wanted either the M16 or its [smaller] carbine-length version, the XM177E2." Also, "the M14 was preferred by 15 percent, while less than one percent wished to carry either the Stoner rifle, the AK-47, the [M1] carbine or a pistol." In March 1970, the "President's Blue Ribbon Defense Panel" concluded that the issuance of the M16 saved the lives of 20,000 U.S. 
servicemen during the Vietnam War, who would have otherwise died had the M14 remained in service. However, the M16 rifle's reputation continued to suffer as of 2011. Another underlying cause of the M16's jamming problem was identified by ordnance staff, who discovered that Stoner and the ammunition manufacturers had initially tested the AR-15 using DuPont IMR8208M extruded (stick) powder. Later, ammunition manufacturers adopted the more readily available Olin Mathieson WC846 ball powder. The ball powder produced a longer peak chamber pressure with undesired timing effects. Upon firing, the cartridge case expands and seals the chamber (obturation). When the peak pressure starts to drop, the cartridge case contracts and can then be extracted. With ball powder, the cartridge case was not contracted enough during extraction, due to the longer peak pressure period. The extractor would then fail to withdraw the cartridge case, tearing through the case rim and leaving the obturated case behind. After the introduction of the M4 carbine, it was found that its shorter barrel length of 14.5 inches also harms reliability, as the gas port is located closer to the chamber than the gas port of the standard-length M16 rifle: 7.5 inches instead of 13 inches. This affects the M4's timing and increases the amount of stress and heat on the critical components, thereby reducing reliability. In a 2002 assessment, the USMC found that the M4 malfunctioned three times more often than the M16A4 (the M4 failed 186 times for 69,000 rounds fired, while the M16A4 failed 61 times). Thereafter, the Army and Colt worked to make modifications to the M4s and M16A4s to address the problems found. In tests conducted in 2005 and 2006, the Army found that, on average, the new M4s and M16s fired approximately 5,000 rounds between stoppages. In December 2006, the Center for Naval Analyses (CNA) released a report on U.S. small arms in combat. The CNA conducted surveys of 2,608 troops returning from combat in Iraq and Afghanistan over the previous 12 months. Only troops who had fired their weapons at enemy targets were allowed to participate. 1,188 troops were armed with M16A2 or A4 rifles, making up 46 percent of the survey. 75 percent of M16 users (891 troops) reported they were satisfied with the weapon. 60 percent (713 troops) were satisfied with handling qualities such as handguards, size, and weight. Of the 40 percent who were dissatisfied, most complaints concerned its size. Only 19 percent of M16 users (226 troops) reported a stoppage, while 80 percent of those who experienced a stoppage said it had little impact on their ability to clear the stoppage and re-engage their target. Half of the M16 users experienced failures of their magazines to feed. 83 percent (986 troops) did not need their rifles repaired while in theater. 71 percent (843 troops) were confident in the M16's reliability, defined as a level of soldier confidence that their weapon will fire without malfunction, and 72 percent (855 troops) were confident in its durability, defined as a level of soldier confidence that their weapon will not break or need repair. Both factors were attributed to the high level of maintenance performed by soldiers. 60 percent of M16 users offered recommendations for improvements. Requests included greater bullet lethality, newly built instead of rebuilt rifles, better-quality magazines, decreased weight, and a collapsible stock. Some users recommended shorter and lighter weapons such as the M4 carbine. 
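To put the 2002 USMC figures above on a common footing, the failure counts can be converted into mean rounds between stoppages. A minimal sketch (only the 186 and 61 failure counts and the 69,000-round total come from the text; the assumption that both rifles fired the same total is mine):

# Convert the 2002 USMC test counts into mean rounds between stoppages.
rounds_fired = 69_000
failures = {"M4": 186, "M16A4": 61}

for rifle, count in failures.items():
    print(f"{rifle}: 1 stoppage per {rounds_fired / count:,.0f} rounds")
# -> M4: ~371 rounds; M16A4: ~1,131 rounds, i.e. the M4 stopped
#    roughly three times as often (186 / 61 ~ 3.0)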
Some issues have been addressed with the issuing of the Improved STANAG magazine in March 2009 and the M855A1 Enhanced Performance Round in June 2010. In early 2010, two journalists from The New York Times spent three months with soldiers and Marines in Afghanistan. While there, they questioned around 100 infantry troops about the reliability of their M16 rifles, as well as the M4 carbine. The troops did not report reliability problems with their rifles. While only 100 troops were asked, they engaged in daily fighting in Marja, including at least a dozen intense engagements in Helmand Province, where the ground is covered in fine powdered sand (called "moon dust" by troops) that can stick to firearms. Weapons were often dusty, wet, and covered in mud. Intense firefights lasted hours, with several magazines being expended. Only one soldier reported a jam, when his M16 was covered in mud after climbing out of a canal. The weapon was cleared and resumed firing with the next chambered round. Furthermore, the Marine Chief Warrant Officer responsible for weapons training and performance of the Third Battalion, Sixth Marines, reported "We've had nil in the way of problems; we've had no issues" with his battalion's 350 M16s and 700 M4s. Design The M16 is a lightweight, 5.56 mm, air-cooled, gas-operated, magazine-fed assault rifle with a rotating bolt. The M16's receivers are made of 7075 aluminum alloy, its barrel, bolt, and bolt carrier of steel, and its handguards, pistol grip, and buttstock of plastics. The M16's internal piston action was derived from the original ArmaLite AR-10 and ArmaLite AR-15 actions. This action system, designed by Eugene Stoner, is commonly called a direct impingement system, but it does not use a conventional direct impingement system. In his patent, the designer states: "This invention is a true expanding gas system instead of the conventional impinging gas system." The gas system, bolt carrier, and bolt-locking design are ammunition-specific, since there is no adjustable gas port or valve to adapt the weapon to the pressure behavior of different propellants, projectiles, or barrel lengths. The M16A1 was especially lightweight with a loaded 30-round magazine, weighing significantly less than the M14 it replaced with a loaded 20-round magazine, and less than the AKM with a loaded 30-round magazine. The M16A2 weighs more when loaded with a 30-round magazine, because of the adoption of a thicker barrel profile. The thicker barrel is more resistant to damage when handled roughly and is also slower to overheat during sustained fire. Unlike a traditional "bull" barrel that is thick along its entire length, the M16A2's barrel is thickened only forward of the handguards. The barrel profile under the handguards remained the same as the M16A1's, for compatibility with the M203 grenade launcher. Barrel Early model M16 barrels had a rifling twist of four grooves, right-hand twist, one turn in 14 inches (1:355.6 mm or 64 calibers)—the same rifling as used by the .222 Remington sporting cartridge. After it was found that, under unfavorable conditions, military bullets could yaw in flight at long ranges, the rifling was soon altered. Later M16 models and the M16A1 had improved rifling with six grooves, right-hand twist, one turn in 12 inches (1:304.8 mm or 54.8 calibers), which increased accuracy and was optimized to adequately stabilize the M193 ball and M196 tracer bullets. 
M16A2 and current models are optimized for firing the heavier NATO SS109 ball and long L110 tracer bullets and have six grooves, right-hand twist, one turn in 7 in (1:177.8 mm or 32 calibers). M193 ball and M196 tracer bullets may be fired in a rifle with a one turn in 7 in (1:177.8 mm or 32 calibers) twist barrel. NATO SS109 ball and L110 tracer bullets should only be used in emergency situations, and then only at short ranges, in barrels with a one turn in 12 inches (1:304.8 mm or 54.8 calibers) twist, as this twist is insufficient to stabilize these projectiles. Weapons designed to adequately stabilize both the M193 and SS109 projectiles (like civilian market clones) usually have a six-groove, right-hand twist, one turn in 9 inches (1:228.6 mm or 41.1 calibers) or one turn in 8 inches (1:203.2 mm or 36.5 calibers) bore, although 1:7 inch and other twist rates are available as well. Recoil The M16 uses a "straight-line" recoil design, where the recoil spring is located in the stock directly behind the action and serves the dual function of operating spring and recoil buffer. The stock being in line with the bore also reduces muzzle rise, especially during automatic fire. Because recoil does not significantly shift the point of aim, faster follow-up shots are possible and user fatigue is reduced. In addition, current model M16 flash suppressors also act as compensators to reduce recoil further.
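The twist-rate progression described in the Barrel section (1:14, then 1:12, then 1:7) tracks the move to longer, heavier bullets. A classical rule of thumb for the twist a bullet needs is Greenhill's formula; the sketch below applies it with assumed approximate bullet lengths, purely as an illustration, since the military twist rates quoted above were chosen through testing, not derived from this formula:

import math

# Greenhill's rule of thumb: T = C * d^2 / l * sqrt(SG / 10.9), with
# T = twist (inches per turn), d = bullet diameter and l = bullet length
# (inches), C ~ 180 above ~2,800 ft/s, SG ~ 10.9 for lead-core bullets.
C, D, SG = 180.0, 0.224, 10.9      # 0.224 in = diameter of a 5.56 mm bullet
bullet_lengths_in = {              # assumed approximate lengths, inches
    "M193 ball": 0.75,
    "SS109 ball": 0.91,
    "L110 tracer": 1.10,
}
for name, length in bullet_lengths_in.items():
    twist = C * D**2 / length * math.sqrt(SG / 10.9)
    print(f"{name}: ~1 turn in {twist:.0f} inches")
# -> roughly 1:12, 1:10 and 1:8, mirroring why the long L110 tracer
#    pushed the service twist to a fast 1:7 with margin to spare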
Technology
Specific firearms
null
19904
https://en.wikipedia.org/wiki/Meteorology
Meteorology
Meteorology is a branch of the atmospheric sciences (which include atmospheric chemistry and physics) with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress in meteorology did not begin until the 18th century. The 19th century saw modest progress in the field after weather observation networks were formed across broad regions. Prior attempts at prediction of weather depended on historical data. It was not until the elucidation of the laws of physics and, more particularly in the latter half of the 20th century, the development of the computer (allowing for the automated solution of a great many modelling equations) that significant breakthroughs in weather forecasting were achieved. An important branch of weather forecasting is marine weather forecasting, as it relates to maritime and coastal safety, in which weather effects also include atmospheric interactions with large bodies of water. Meteorological phenomena are observable weather events that are explained by the science of meteorology. Meteorological phenomena are described and quantified by the variables of Earth's atmosphere: temperature, air pressure, water vapour, mass flow, and the variations and interactions of these variables, and how they change over time. Different spatial scales are used to describe and predict weather on local, regional, and global levels. Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in many diverse fields such as the military, energy production, transport, agriculture, and construction. The word meteorology is from the Ancient Greek μετέωρος metéōros ("lofty; raised high in the air") and -λογία -logia (-(o)logy), meaning "the study of things high in the air". History Ancient meteorology up to the time of Aristotle Early attempts at predicting weather were often related to prophecy and divining, and were sometimes based on astrological ideas. Ancient religions believed meteorological phenomena to be under the control of the gods. The ability to predict rains and floods based on annual cycles was evidently used by humans at least from the time of agricultural settlement, if not earlier. Early approaches to predicting weather were based on astrology and were practiced by priests. The Egyptians had rain-making rituals as early as 3500 BC. Ancient Indian Upanishads contain mentions of clouds and seasons. The Samaveda mentions sacrifices to be performed when certain phenomena were noticed. Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides evidence of weather observation. Cuneiform inscriptions on Babylonian tablets included associations between thunder and rain. The Chaldeans differentiated the 22° and 46° halos. The ancient Greeks were the first to make theories about the weather. Many natural philosophers studied the weather. However, as meteorological instruments did not exist, the inquiry was largely qualitative and could only be judged by more general theoretical speculations. Herodotus states that Thales predicted the solar eclipse of 585 BC. He studied Babylonian equinox tables. According to Seneca, he gave the explanation that the Nile's annual floods were caused by northerly winds hindering the river's descent into the sea. 
Anaximander and Anaximenes thought that thunder and lightning were caused by air smashing against the cloud, thus kindling the flame. Early meteorological theories generally held that there was a fire-like substance in the atmosphere. Anaximander defined wind as a flowing of air, but this was not generally accepted for centuries. A theory to explain summer hail was first proposed by Anaxagoras. He observed that air temperature decreased with increasing height and that clouds contain moisture. He also noted that heat caused objects to rise, and therefore the heat on a summer day would drive clouds to an altitude where the moisture would freeze. Empedocles theorized on the change of the seasons. He believed that fire and water opposed each other in the atmosphere; when fire gained the upper hand, the result was summer, and when water did, it was winter. Democritus also wrote about the flooding of the Nile. He said that during the summer solstice, snow in northern parts of the world melted. This would cause vapors to form clouds, which would cause storms when driven to the Nile by northerly winds, thus filling the lakes and the Nile. Hippocrates inquired into the effect of weather on health. Eudoxus claimed that bad weather followed four-year periods, according to Pliny. Aristotelian meteorology These early observations would form the basis for Aristotle's Meteorology, written around 350 BC. Aristotle is considered the founder of meteorology. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle. His work would remain an authority on meteorology for nearly 2,000 years. The book De Mundo (composed before 250 BC or between 350 and 200 BC) noted: If the flashing body is set on fire and rushes violently to the Earth it is called a thunderbolt; if it is only half of fire, but violent also and massive, it is called a meteor; if it is entirely free from fire, it is called a smoking bolt. They are all called 'swooping bolts' because they swoop down upon the Earth. Lightning is sometimes smoky, and is then called 'smoldering lightning'; sometimes it darts quickly along, and is then said to be vivid. At other times, it travels in crooked lines, and is called forked lightning. When it swoops down upon some object it is called 'swooping lightning'. After Aristotle, progress in meteorology stalled for a long time. Theophrastus compiled a book on weather forecasting, called the Book of Signs, as well as On Winds. He gave hundreds of signs for weather phenomena for a period up to a year. His system was based on dividing the year by the setting and the rising of the Pleiades, dividing the halves by the solstices and equinoxes, and relying on the continuity of the weather within those periods. He also divided months by the new moon, fourth day, eighth day, and full moon, with a change in the weather being likely at these points. The day was divided into sunrise, mid-morning, noon, mid-afternoon, and sunset, with corresponding divisions of the night, with change being likely at one of these divisions. Applying the divisions and a principle of balance in the yearly weather, he came up with forecasts such as: if a lot of rain falls in the winter, the spring is usually dry. Rules based on the actions of animals are also present in his work, such as the belief that a dog rolling on the ground is a sign of a storm. Shooting stars and the Moon were also considered significant. However, he made no attempt to explain these phenomena, referring only to the Aristotelian method. 
The work of Theophrastus remained a dominant influence in weather forecasting for nearly 2,000 years. Meteorology after Aristotle Meteorology continued to be studied and developed over the centuries, but it was not until the Renaissance, in the 14th to 17th centuries, that significant advancements were made in the field. Scientists such as Galileo and Descartes introduced new methods and ideas, leading to the scientific revolution in meteorology. Speculation on the cause of the flooding of the Nile ended when Eratosthenes, according to Proclus, stated that it was known that man had gone to the sources of the Nile and observed the rains, although interest in its implications continued. During the Roman era, scientific interest in meteorology in Greece and Europe waned. In the 1st century BC, most natural philosophers claimed that the clouds and winds extended up to 111 miles, but Posidonius thought that they reached up to five miles, after which the air is clear, liquid, and luminous. He closely followed Aristotle's theories. By the end of the second century BC, the center of science had shifted from Athens to Alexandria, home to the ancient Library of Alexandria. In the 2nd century AD, Ptolemy's Almagest dealt with meteorology, because it was considered a subset of astronomy. He gave several astrological weather predictions. He constructed a map of the world divided into climatic zones by their illumination, in which the length of the day at the summer solstice increased by half an hour per zone between the equator and the Arctic. Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations. In 25 AD, Pomponius Mela, a Roman geographer, formalized the climatic zone system. In 63–64 AD, Seneca wrote Naturales quaestiones. It was a compilation and synthesis of ancient Greek theories. However, theology was of foremost importance to Seneca, and he believed that phenomena such as lightning were tied to fate. The second book (chapter) of Pliny's Natural History covers meteorology. He states that more than twenty ancient Greek authors studied meteorology. He did not make any personal contributions, and the value of his work is in preserving earlier speculation, much like Seneca's work. From 400 to 1100, scientific learning in Europe was preserved by the clergy. Isidore of Seville devoted considerable attention to meteorology in Etymologiae, De ordine creaturum and De natura rerum. Bede the Venerable was the first Englishman to write about the weather, in De Natura Rerum in 703. The work was a summary of then-extant classical sources. However, Aristotle's works, including the Meteorologica, were largely lost until the twelfth century. Isidore and Bede were scientifically minded, but they adhered to the letter of Scripture. Islamic civilization translated many ancient works into Arabic, which were then transmitted to western Europe and translated into Latin. In the 9th century, Al-Dinawari wrote the Kitab al-Nabat (Book of Plants), in which he deals with the application of meteorology to agriculture during the Arab Agricultural Revolution. He describes the meteorological character of the sky, the planets and constellations, the sun and moon, the lunar phases indicating seasons and rain, the anwa (heavenly bodies of rain), and atmospheric phenomena such as winds, thunder, lightning, snow, floods, valleys, rivers, and lakes. 
In 1021, Alhazen showed in his Opticae thesaurus that atmospheric refraction is also responsible for twilight; he estimated that twilight begins when the sun is 19 degrees below the horizon, and also used a geometric determination based on this to estimate the maximum possible height of the Earth's atmosphere as 52,000 passuum (about 49 miles, or 79 km). Adelard of Bath was one of the early translators of the classics. He also discussed meteorological topics in his Quaestiones naturales. He thought dense air produced propulsion in the form of wind. He explained thunder by saying that it was due to ice colliding in clouds, which melted in summer. In the thirteenth century, Aristotelian theories reestablished their dominance in meteorology. For the next four centuries, meteorological work was by and large mostly commentary; it has been estimated that over 156 commentaries on the Meteorologica were written before 1650. Experimental evidence was less important than appeal to the classics and authority in medieval thought. In the thirteenth century, Roger Bacon advocated experimentation and the mathematical approach. In his Opus majus, he followed Aristotle's theory of the atmosphere being composed of water, air, and fire, supplemented by optics and geometric proofs. He noted that Ptolemy's climatic zones had to be adjusted for topography. St. Albert the Great was the first to propose that each drop of falling rain had the form of a small sphere, and that this form meant that the rainbow was produced by light interacting with each raindrop. Roger Bacon was the first to calculate the angular size of the rainbow. He stated that a rainbow's summit cannot appear higher than 42 degrees above the horizon. In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give correct explanations for the primary rainbow phenomenon. Theodoric went further and also explained the secondary rainbow. By the middle of the sixteenth century, meteorology had developed along two lines: theoretical science based on the Meteorologica, and astrological weather forecasting. Pseudoscientific prediction by natural signs became popular and enjoyed the protection of the church and princes. It was supported by scientists such as Johannes Müller, Leonard Digges, and Johannes Kepler. However, there were skeptics. In the 14th century, Nicole Oresme had believed that weather forecasting was possible, but that the rules for it were unknown at the time. Astrological influence in meteorology persisted until the eighteenth century. Gerolamo Cardano's De Subtilitate (1550) was the first work to challenge fundamental aspects of Aristotelian theory. Cardano maintained that there were only three basic elements: earth, air, and water. He discounted fire because it needed material to spread and produced nothing. Cardano thought there were two kinds of air: free air and enclosed air. The former destroyed inanimate things and preserved animate things, while the latter had the opposite effect. René Descartes's Discourse on the Method (1637) typifies the beginning of the scientific revolution in meteorology. His scientific method had four principles: to never accept anything unless one clearly knew it to be true; to divide every difficult problem into small problems to tackle; to proceed from the simple to the complex, always seeking relationships; and to be as complete and thorough as possible, with no prejudice. In the appendix Les Meteores, he applied these principles to meteorology. 
He discussed terrestrial bodies and the vapors that arise from them, proceeding to explain the formation of clouds from drops of water, and of winds, with clouds then dissolving into rain, hail and snow. He also discussed the effects of light on the rainbow. Descartes hypothesized that all bodies were composed of small particles of different shapes that interweave. All of his theories were based on this hypothesis. He explained rain as caused by clouds becoming too large for the air to hold, and said that clouds became snow if the air was not warm enough to melt them, or hail if they met colder wind. Like that of his predecessors, Descartes's method was deductive, as meteorological instruments were not yet developed and extensively used. He introduced the Cartesian coordinate system to meteorology and stressed the importance of mathematics in natural science. His work established meteorology as a legitimate branch of physics. In the 17th and 18th centuries, the invention and refinement of the thermometer and barometer allowed for more accurate measurements of temperature and pressure, leading to a better understanding of atmospheric processes. The 18th century also saw the birth of the first meteorological society, the Societas Meteorologica Palatina, in 1780. In the 19th century, advances in technology such as the telegraph and photography led to the creation of weather observing networks and the ability to track storms. Additionally, scientists began to use mathematical models to make predictions about the weather. The 20th century saw the development of radar and satellite technology, which greatly improved the ability to observe and track weather systems. In addition, meteorologists and atmospheric scientists began to produce routine, increasingly accurate forecasts of weather and temperature. In the 20th and 21st centuries, with the advent of computer models and big data, meteorology has become increasingly dependent on numerical methods and computer simulations. This has greatly improved weather forecasting and climate prediction. Additionally, meteorology has expanded to include other areas such as air quality, atmospheric chemistry, and climatology. Advances in observational, theoretical and computational technologies have enabled ever more accurate weather predictions and a better understanding of weather patterns and air pollution. Today, with advances in weather forecasting and satellite technology, meteorology has become an integral part of everyday life and is used for many purposes, such as aviation, agriculture, and disaster management.

Instruments and classification scales

In 1441, King Sejong's son, Prince Munjong of Korea, invented the first standardized rain gauge. Such gauges were sent throughout the Joseon dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, the first known anemometer. In 1607, Galileo Galilei constructed a thermoscope. In 1611, Johannes Kepler wrote the first scientific treatise on snow crystals: "Strena Seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow)." In 1643, Evangelista Torricelli invented the mercury barometer. In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping-bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer. In 1742, Anders Celsius, a Swedish astronomer, proposed the "centigrade" temperature scale, the predecessor of the current Celsius scale. In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure. In 1802–1803, Luke Howard wrote On the Modifications of Clouds, in which he assigned cloud types Latin names. In 1806, Francis Beaufort introduced his system for classifying wind speeds. Near the end of the 19th century the first cloud atlases were published, including the International Cloud Atlas, which has remained in print ever since. The April 1960 launch of the first successful weather satellite, TIROS-1, marked the beginning of the age in which weather information became available globally.
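The Fahrenheit and Celsius scales introduced in this period are related by a fixed linear mapping, so readings convert mechanically between them; a minimal sketch (function names are illustrative):

```python
def celsius_to_fahrenheit(temp_c: float) -> float:
    """Convert a Celsius temperature to Fahrenheit: F = C * 9/5 + 32."""
    return temp_c * 9.0 / 5.0 + 32.0

def fahrenheit_to_celsius(temp_f: float) -> float:
    """Invert the mapping: C = (F - 32) * 5/9."""
    return (temp_f - 32.0) * 5.0 / 9.0

# Fixed points of the scales: water freezes at 0 C = 32 F and boils
# (at standard pressure) at 100 C = 212 F.
assert celsius_to_fahrenheit(0.0) == 32.0
assert celsius_to_fahrenheit(100.0) == 212.0
```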
Atmospheric composition research

In 1648, Blaise Pascal rediscovered that atmospheric pressure decreases with height, and deduced that there is a vacuum above the atmosphere. In 1738, Daniel Bernoulli published Hydrodynamics, initiating the kinetic theory of gases and establishing the basic laws for the theory of gases. In 1761, Joseph Black discovered that ice absorbs heat without changing its temperature when melting. In 1772, Black's student Daniel Rutherford discovered nitrogen, which he called phlogisticated air, explaining it in terms of the phlogiston theory. In 1777, Antoine Lavoisier identified and named oxygen and developed an explanation for combustion. In 1783, in his essay "Réflexions sur le phlogistique," Lavoisier deprecated the phlogiston theory and proposed a caloric theory. In 1804, John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. In 1808, John Dalton defended caloric theory in A New System of Chemistry and described how it combines with matter, especially gases; he proposed that the heat capacity of gases varies inversely with atomic weight. In 1824, Sadi Carnot analyzed the efficiency of steam engines using caloric theory; he developed the notion of a reversible process and, in postulating that no such thing exists in nature, laid the foundation for the second law of thermodynamics. Earlier, in 1716, Edmund Halley had suggested that aurorae are caused by "magnetic effluvia" moving along the Earth's magnetic field lines.

Research into cyclones and air flow

In 1494, Christopher Columbus experienced a tropical cyclone, which led to the first written European account of a hurricane. In 1686, Edmund Halley presented a systematic study of the trade winds and monsoons and identified solar heating as the cause of atmospheric motions. In 1735, George Hadley published an idealized explanation of the global circulation based on a study of the trade winds. In 1743, when Benjamin Franklin was prevented from seeing a lunar eclipse by a hurricane, he decided that cyclones move in a contrary manner to the winds at their periphery. Understanding of the kinematics of how exactly the rotation of the Earth affects airflow was at first only partial. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes, with the air within it deflected by the Coriolis force to produce the prevailing westerly winds. Late in the 19th century, the motion of air masses along isobars was understood to be the result of the large-scale interaction of the pressure gradient force and the deflecting force. By 1912, this deflecting force was named the Coriolis effect.
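The deflecting force later named after Coriolis is usually quantified through the Coriolis parameter f = 2Ω sin(φ), where Ω is the Earth's rotation rate and φ the latitude; a minimal sketch (names are illustrative):

```python
import math

EARTH_ROTATION_RATE = 7.2921e-5  # rad/s, one rotation per sidereal day

def coriolis_parameter(latitude_deg: float) -> float:
    """Coriolis parameter f = 2 * Omega * sin(latitude), in 1/s.

    f vanishes at the equator and is largest at the poles, which is why
    the deflecting force dominates large-scale motion outside the tropics.
    """
    return 2.0 * EARTH_ROTATION_RATE * math.sin(math.radians(latitude_deg))

print(f"{coriolis_parameter(45.0):.2e} per second at 45 degrees N")  # ~1.03e-4
```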
Just after World War I, a group of meteorologists in Norway led by Vilhelm Bjerknes developed the Norwegian cyclone model that explains the generation, intensification and ultimate decay (the life cycle) of mid-latitude cyclones, and introduced the idea of fronts, that is, sharply defined boundaries between air masses. The group included Carl-Gustaf Rossby (who was the first to explain the large-scale atmospheric flow in terms of fluid dynamics), Tor Bergeron (who first determined how rain forms) and Jacob Bjerknes.

Observation networks and weather forecasting

In the late 16th century and first half of the 17th century a range of meteorological instruments were invented: the thermometer, barometer and hygrometer, as well as wind and rain gauges. In the 1650s natural philosophers started using these instruments to systematically record weather observations. Scientific academies established weather diaries and organised observational networks. In 1654, Ferdinando II de Medici established the first weather observing network, which consisted of meteorological stations in Florence, Cutigliano, Vallombrosa, Bologna, Parma, Milan, Innsbruck, Osnabrück, Paris and Warsaw. The collected data were sent to Florence at regular time intervals. In the 1660s Robert Hooke of the Royal Society of London sponsored networks of weather observers. Hippocrates' treatise Airs, Waters, and Places had linked weather to disease. Thus early meteorologists attempted to correlate weather patterns with epidemic outbreaks, and the climate with public health. During the Age of Enlightenment meteorology tried to rationalise traditional weather lore, including astrological meteorology. But there were also attempts to establish a theoretical understanding of weather phenomena. Edmond Halley and George Hadley tried to explain trade winds. They reasoned that the rising mass of heated equatorial air is replaced by an inflow of cooler air from high latitudes, while a flow of warm air at high altitude from equator to poles in turn established an early picture of circulation. Frustration with the lack of discipline among weather observers, and the poor quality of the instruments, led the early modern nation states to organise large observation networks. Thus, by the end of the 18th century, meteorologists had access to large quantities of reliable weather data. In 1832, an electromagnetic telegraph was created by Baron Schilling. The arrival of the electrical telegraph in 1837 afforded, for the first time, a practical method for quickly gathering surface weather observations from a wide area. This data could be used to produce maps of the state of the atmosphere for a region near the Earth's surface and to study how these states evolved through time. To make frequent weather forecasts based on these data required a reliable network of observations, but it was not until 1849 that the Smithsonian Institution began to establish an observation network across the United States under the leadership of Joseph Henry. Similar observation networks were established in Europe at this time. The Reverend William Clement Ley was key to the early understanding of cirrus clouds and of jet streams. Charles Kenneth Mackinnon Douglas, known as 'CKM' Douglas, read Ley's papers after his death and carried on the early study of weather systems. Nineteenth-century researchers in meteorology were drawn from military or medical backgrounds rather than trained as dedicated scientists.
In 1854, the United Kingdom government appointed Robert FitzRoy to the new office of Meteorological Statist to the Board of Trade with the task of gathering weather observations at sea. FitzRoy's office became the United Kingdom Meteorological Office in 1854, the second oldest national meteorological service in the world (the Central Institution for Meteorology and Geodynamics (ZAMG) in Austria was founded in 1851 and is the oldest weather service in the world). The first daily weather forecasts made by FitzRoy's Office were published in The Times newspaper in 1860. The following year a system was introduced of hoisting storm warning cones at principal ports when a gale was expected. FitzRoy coined the term "weather forecast" and tried to separate scientific approaches from prophetic ones. Over the next 50 years, many countries established national meteorological services. The India Meteorological Department (1875) was established to follow tropical cyclones and monsoons. The Finnish Meteorological Central Office (1881) was formed from part of the Magnetic Observatory of Helsinki University. Japan's Tokyo Meteorological Observatory, the forerunner of the Japan Meteorological Agency, began constructing surface weather maps in 1883. The United States Weather Bureau (1890) was established under the United States Department of Agriculture. The Australian Bureau of Meteorology (1906) was established by a Meteorology Act to unify existing state meteorological services.

Numerical weather prediction

In 1904, Norwegian scientist Vilhelm Bjerknes first argued in his paper Weather Forecasting as a Problem in Mechanics and Physics that it should be possible to forecast weather from calculations based upon natural laws. It was not until later in the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction. In 1922, Lewis Fry Richardson published "Weather Prediction By Numerical Process," after finding notes and derivations he had worked on as an ambulance driver in World War I. He described how small terms in the prognostic fluid dynamics equations that govern atmospheric flow could be neglected, and how a numerical calculation scheme could be devised to allow predictions. Richardson envisioned a large auditorium of thousands of people performing the calculations. However, the sheer number of calculations required was too large to complete without electronic computers, and the size of the grid and time steps used in the calculations led to unrealistic results, though numerical analysis later showed that this was due to numerical instability. Starting in the 1950s, numerical forecasts with computers became feasible. The first weather forecasts derived this way used barotropic (single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves, that is, the pattern of atmospheric lows and highs. In 1959, the UK Meteorological Office received its first computer, a Ferranti Mercury. In the 1960s, the chaotic nature of the atmosphere was first observed and mathematically described by Edward Lorenz, founding the field of chaos theory. These advances have led to the current use of ensemble forecasting in most major forecasting centers, to take into account uncertainty arising from the chaotic nature of the atmosphere.
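Lorenz's sensitivity to initial conditions, and the case for ensembles, can be illustrated with a few lines of code. The sketch below integrates his 1963 three-variable convection model twice from nearly identical starting points; the forward-Euler step and parameter values are standard textbook choices, not anything from a particular forecasting system:

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Two trajectories differing by one part in a million in x.
a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)
for step in range(2500):                       # 25 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 500 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.6f}")
# The separation grows from ~1e-6 to order 10: tiny analysis errors
# eventually dominate the forecast, motivating ensemble methods.
```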
Mathematical models used to predict the long-term climate of the Earth (climate models) have been developed; their resolution today is as coarse as that of the older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases.

Meteorologists

Meteorologists are scientists who study and work in the field of meteorology. The American Meteorological Society publishes and continually updates an authoritative electronic Meteorology Glossary. Meteorologists work in government agencies, private consulting and research services, industrial enterprises, utilities, radio and television stations, and in education. In the United States, meteorologists held about 10,000 jobs in 2018. Although weather forecasts and warnings are the best-known products of meteorologists for the public, weather presenters on radio and television are not necessarily professional meteorologists. They are most often reporters with little formal meteorological training, using unregulated titles such as weather specialist or weatherman. The American Meteorological Society and National Weather Association issue "Seals of Approval" to weather broadcasters who meet certain requirements, but this is not mandatory for being hired by the media.

Equipment

Each science has its own unique sets of laboratory equipment. In the atmosphere, there are many things or qualities of the atmosphere that can be measured. Rain, which can be observed anywhere and at any time, was one of the first atmospheric qualities measured historically. Two other accurately measured qualities are wind and humidity; neither can be seen, but both can be felt. The devices to measure these three sprang up in the mid-15th century: respectively the rain gauge, the anemometer, and the hygrometer. Many attempts had been made prior to the 15th century to construct adequate equipment to measure the many atmospheric variables, but many were faulty in some way or simply unreliable. Even Aristotle noted the difficulty of measuring the air in some of his work. Sets of surface measurements are important data to meteorologists. They give a snapshot of a variety of weather conditions at a single location, usually a weather station, a ship or a weather buoy. The measurements taken at a weather station can include any number of atmospheric observables. Usually, temperature, pressure, wind, and humidity are the variables measured by a thermometer, barometer, anemometer, and hygrometer, respectively. Professional stations may also include air quality sensors (carbon monoxide, carbon dioxide, methane, ozone, dust, and smoke), a ceilometer (cloud ceiling), a falling-precipitation sensor, a flood sensor, a lightning sensor, a microphone (explosions, sonic booms, thunder), a pyranometer/pyrheliometer/spectroradiometer (IR/Vis/UV photodiodes), a rain gauge/snow gauge, a scintillation counter (background radiation, fallout, radon), a seismometer (earthquakes and tremors), a transmissometer (visibility), and a GPS clock for data logging. Upper-air data are of crucial importance for weather forecasting. The most widely used technique is launches of radiosondes. Supplementing the radiosondes, a network of aircraft-based data collection is organized by the World Meteorological Organization.
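As an example of how raw station readings are turned into derived quantities, the sketch below estimates the dew point from a thermometer and hygrometer reading using the Magnus approximation; the constants are one common parameterization (valid roughly from −45 °C to 60 °C), and the function name is illustrative:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point via the Magnus formula.

    gamma = ln(RH/100) + a*T/(b+T); dew point = b*gamma / (a - gamma).
    """
    a, b = 17.62, 243.12  # one common Magnus parameter set (degrees C)
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# At 20 degrees C and 50% relative humidity the dew point is ~9.3 degrees C.
print(f"{dew_point_c(20.0, 50.0):.1f}")
```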
Remote sensing, as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. The common types of remote sensing are radar, lidar, and satellites (or photogrammetry). Each collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. Radar and lidar are not passive, because both use EM radiation to illuminate a specific portion of the atmosphere. Weather satellites, along with more general-purpose Earth-observing satellites circling the Earth at various altitudes, have become an indispensable tool for studying a wide range of phenomena from forest fires to El Niño.

Spatial scales

The study of the atmosphere can be divided into distinct areas that depend on both time and spatial scales. At one extreme of this scale is climatology. On timescales of hours to days, meteorology separates into micro-, meso-, and synoptic-scale meteorology. Respectively, the geospatial size of each of these three scales relates directly to the appropriate timescale. Other subclassifications are used to describe the unique, local, or broad effects within those subclasses.

Microscale

Microscale meteorology is the study of atmospheric phenomena on a scale of about 1 km or less. Individual thunderstorms, clouds, and local turbulence caused by buildings and other obstacles (such as individual hills) are modeled on this scale. Misoscale meteorology is an informal subdivision.

Mesoscale

Mesoscale meteorology is the study of atmospheric phenomena with horizontal scales ranging from 1 km to 1000 km and a vertical scale that starts at the Earth's surface and includes the atmospheric boundary layer, troposphere, tropopause, and the lower section of the stratosphere. Mesoscale timescales last from less than a day to multiple weeks. The events typically of interest are thunderstorms, squall lines, fronts, precipitation bands in tropical and extratropical cyclones, and topographically generated weather systems such as mountain waves and sea and land breezes.

Synoptic scale

Synoptic-scale meteorology predicts atmospheric changes at scales up to 1000 km in space and about 10⁵ seconds (roughly 28 hours) in time. At the synoptic scale, the Coriolis acceleration acting on moving air masses (outside of the tropics) plays a dominant role in predictions. The phenomena typically described by synoptic meteorology include events such as extratropical cyclones, baroclinic troughs and ridges, frontal zones, and to some extent jet streams. All of these are typically given on weather maps for a specific time. The minimum horizontal scale of synoptic phenomena is limited to the spacing between surface observation stations.

Global scale

Global-scale meteorology is the study of weather patterns related to the transport of heat from the tropics to the poles. Very-large-scale oscillations are of importance at this scale. These oscillations have time periods typically on the order of months, such as the Madden–Julian oscillation, or years, such as the El Niño–Southern Oscillation and the Pacific decadal oscillation. Global-scale meteorology pushes into the range of climatology. The traditional definition of climate is pushed into larger timescales, and with an understanding of the longer-timescale global oscillations, their effect on climate and weather disturbances can be included in synoptic- and mesoscale-timescale predictions. Numerical weather prediction is a main focus in understanding air–sea interaction, tropical meteorology, atmospheric predictability, and tropospheric/stratospheric processes.
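The horizontal-scale thresholds quoted in this section lend themselves to a toy classifier; the boundaries below follow the figures given above and are conventional rather than sharp:

```python
def classify_horizontal_scale(length_km: float) -> str:
    """Rough scale class for a phenomenon from its horizontal extent."""
    if length_km < 1.0:
        return "microscale"                 # e.g. building-induced turbulence
    if length_km <= 1000.0:
        return "mesoscale"                  # e.g. squall lines, sea breezes
    return "synoptic or global scale"       # e.g. extratropical cyclones

for name, size_km in [("dust devil", 0.02), ("squall line", 300.0),
                      ("extratropical cyclone", 2000.0)]:
    print(f"{name}: {classify_horizontal_scale(size_km)}")
```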
The Naval Research Laboratory in Monterey, California, developed a global atmospheric model called the Navy Operational Global Atmospheric Prediction System (NOGAPS). NOGAPS is run operationally at the Fleet Numerical Meteorology and Oceanography Center for the United States military. Many other global atmospheric models are run by national meteorological agencies.

Some meteorological principles

Boundary layer meteorology

Boundary layer meteorology is the study of processes in the air layer directly above Earth's surface, known as the atmospheric boundary layer (ABL). The effects of the surface – heating, cooling, and friction – cause turbulent mixing within the air layer. Turbulent motions cause significant movement of heat, matter, and momentum on time scales of less than a day. Boundary layer meteorology includes the study of all types of surface–atmosphere boundary, including ocean, lake, urban land and non-urban land.

Dynamic meteorology

Dynamic meteorology generally focuses on the fluid dynamics of the atmosphere. The idea of an air parcel is used to define the smallest element of the atmosphere, while ignoring the discrete molecular and chemical nature of the atmosphere. An air parcel is defined as an infinitesimal region in the fluid continuum of the atmosphere. The fundamental laws of fluid dynamics, thermodynamics, and motion are used to study the atmosphere. The physical quantities that characterize the state of the atmosphere are temperature, density, pressure, etc. These variables have unique values in the continuum.

Applications

Weather forecasting

Weather forecasting is the application of science and technology to predict the state of the atmosphere at a future time and given location. Humans have attempted to predict the weather informally for millennia and formally since at least the 19th century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve. Forecasting, once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition, now relies on forecast models to determine future conditions. Human input is still required to pick the best possible forecast model to base the forecast upon, which involves pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps narrow the error and pick the most likely outcome. There are a variety of end uses for weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to commodity traders within stock markets. Temperature forecasts are used by utility companies to estimate demand over coming days. On an everyday basis, people use weather forecasts to determine what to wear.
Since outdoor activities are severely curtailed by heavy rain, snow and wind chill, forecasts can be used to plan activities around these events, and to plan ahead and survive them.

Aviation meteorology

Aviation meteorology deals with the impact of weather on air traffic management. It is important for air crews to understand the implications of weather on their flight plan as well as their aircraft, as noted by the Aeronautical Information Manual: The effects of ice on aircraft are cumulative—thrust is reduced, drag increases, lift lessens, and weight increases. The results are an increase in stall speed and a deterioration of aircraft performance. In extreme cases, 2 to 3 inches of ice can form on the leading edge of the airfoil in less than 5 minutes. It takes but 1/2 inch of ice to reduce the lifting power of some aircraft by 50 percent and increases the frictional drag by an equal percentage.

Agricultural meteorology

Meteorologists, soil scientists, agricultural hydrologists, and agronomists are people concerned with studying the effects of weather and climate on plant distribution, crop yield, water-use efficiency, phenology of plant and animal development, and the energy balance of managed and natural ecosystems. Conversely, they are interested in the role of vegetation in climate and weather.

Hydrometeorology

Hydrometeorology is the branch of meteorology that deals with the hydrologic cycle, the water budget, and the rainfall statistics of storms. A hydrometeorologist prepares and issues forecasts of accumulating (quantitative) precipitation, heavy rain, and heavy snow, and highlights areas with the potential for flash flooding. Typically the range of knowledge required overlaps with climatology, mesoscale and synoptic meteorology, and other geosciences. The multidisciplinary nature of the branch can result in technical challenges, since tools and solutions from each of the individual disciplines involved may behave slightly differently, be optimized for different hardware and software platforms, and use different data formats. There are some initiatives – such as the DRIHM project – that are trying to address this issue.

Nuclear meteorology

Nuclear meteorology investigates the distribution of radioactive aerosols and gases in the atmosphere.

Maritime meteorology

Maritime meteorology deals with air and wave forecasts for ships operating at sea. Organizations such as the Ocean Prediction Center, the Honolulu National Weather Service forecast office, the United Kingdom Met Office, KNMI and the JMA prepare high-seas forecasts for the world's oceans.

Military meteorology

Military meteorology is the research and application of meteorology for military purposes. In the United States, the United States Navy's Commander, Naval Meteorology and Oceanography Command oversees meteorological efforts for the Navy and Marine Corps, while the United States Air Force's Air Force Weather Agency is responsible for the Air Force and Army.

Environmental meteorology

Environmental meteorology mainly analyzes industrial pollution dispersion physically and chemically, based on meteorological parameters such as temperature, humidity, wind, and various weather conditions.

Renewable energy

Meteorology applications in renewable energy include basic research, "exploration," and potential mapping of wind power and solar radiation for wind and solar energy.
Meitnerium
Meitnerium is a synthetic chemical element; it has symbol Mt and atomic number 109. It is an extremely radioactive synthetic element (an element not found in nature that can be created in a laboratory). The most stable known isotope, meitnerium-278, has a half-life of 4.5 seconds, although the unconfirmed meitnerium-282 may have a longer half-life of 67 seconds. The element was first synthesized in August 1982 by the GSI Helmholtz Centre for Heavy Ion Research near Darmstadt, Germany, and it was named after Lise Meitner in 1997. In the periodic table, meitnerium is a d-block transactinide element. It is a member of the 7th period and is placed in the group 9 elements, although no chemical experiments have yet been carried out to confirm that it behaves as the heavier homologue to iridium in group 9 as the seventh member of the 6d series of transition metals. Meitnerium is calculated to have properties similar to its lighter homologues, cobalt, rhodium, and iridium.

History

Discovery

Meitnerium was first synthesized on August 29, 1982, by a German research team led by Peter Armbruster and Gottfried Münzenberg at the Institute for Heavy Ion Research (Gesellschaft für Schwerionenforschung) in Darmstadt. The team bombarded a target of bismuth-209 with accelerated nuclei of iron-58 and detected a single atom of the isotope meitnerium-266:

209Bi + 58Fe → 266Mt + n

This work was confirmed three years later at the Joint Institute for Nuclear Research at Dubna (then in the Soviet Union).

Naming

Using Mendeleev's nomenclature for unnamed and undiscovered elements, meitnerium should be known as eka-iridium. In 1979, during the Transfermium Wars (but before the synthesis of meitnerium), IUPAC published recommendations according to which the element was to be called unnilennium (with the corresponding symbol of Une), a systematic element name as a placeholder, until the element was discovered (and the discovery then confirmed) and a permanent name was decided on. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who either called it "element 109", with the symbol E109, (109) or even simply 109, or used the proposed name "meitnerium". The naming of meitnerium was discussed in the element naming controversy regarding the names of elements 104 to 109, but meitnerium was the only proposal for element 109 and thus was never disputed. The name meitnerium (Mt) was suggested by the GSI team in September 1992 in honor of the Austrian physicist Lise Meitner, a co-discoverer of protactinium (with Otto Hahn) and one of the discoverers of nuclear fission. In 1994 the name was recommended by IUPAC, and it was officially adopted in 1997. Meitnerium is thus the only element named specifically after a non-mythological woman (curium being named for both Pierre and Marie Curie).

Isotopes

Meitnerium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Eight different isotopes of meitnerium have been reported, with mass numbers 266, 268, 270, and 274–278; two of these, meitnerium-268 and meitnerium-270, have unconfirmed metastable states. A ninth isotope with mass number 282 is unconfirmed. Most of these decay predominantly through alpha decay, although some undergo spontaneous fission.
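Half-life figures such as the 4.5 seconds quoted for meitnerium-278 convert directly into surviving fractions through the exponential decay law N(t)/N0 = 2^(-t/T_half); a quick sketch (function name is illustrative):

```python
def surviving_fraction(t_seconds: float, half_life_seconds: float) -> float:
    """Fraction of nuclei remaining after time t: N/N0 = 2**(-t / T_half)."""
    return 2.0 ** (-t_seconds / half_life_seconds)

# For 278Mt (half-life 4.5 s), roughly 21% of a sample survives 10 seconds,
# which is why detection hinges on single-atom techniques.
print(f"{surviving_fraction(10.0, 4.5):.2%}")
```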
Stability and half-lives

All meitnerium isotopes are extremely unstable and radioactive; in general, the heavier isotopes are more stable than the lighter ones. The most stable known meitnerium isotope, 278Mt, is also the heaviest known; it has a half-life of 4.5 seconds. The unconfirmed 282Mt is even heavier and appears to have a longer half-life of 67 seconds. With a half-life of 0.8 seconds, the next most stable known isotope is 270Mt. The isotopes 276Mt and 274Mt have half-lives of 0.62 and 0.64 seconds respectively. The isotope 277Mt, created as the final decay product of 293Ts for the first time in 2012, was observed to undergo spontaneous fission with a half-life of 5 milliseconds. Preliminary data analysis considered the possibility of this fission event instead originating from 277Hs, for it also has a half-life of a few milliseconds and could be populated following undetected electron capture somewhere along the decay chain. This possibility was later deemed very unlikely based on observed decay energies of 281Ds and 281Rg and the short half-life of 277Mt, although there is still some uncertainty in the assignment. Regardless, the rapid fission of 277Mt and 277Hs is strongly suggestive of a region of instability for superheavy nuclei with N = 168–170. The existence of this region, characterized by a decrease in fission barrier height between the deformed shell closure at N = 162 and the spherical shell closure at N = 184, is consistent with theoretical models.

Predicted properties

Other than nuclear properties, no properties of meitnerium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that meitnerium and its parents decay very quickly. Properties of meitnerium metal remain unknown, and only predictions are available.

Chemical

Meitnerium is the seventh member of the 6d series of transition metals, and should be much like the platinum group metals. Its calculated ionization potentials and atomic and ionic radii are similar to those of its lighter homologue iridium, thus implying that meitnerium's basic properties will resemble those of the other group 9 elements, cobalt, rhodium, and iridium. Prediction of the probable chemical properties of meitnerium has not received much attention recently. Meitnerium is expected to be a noble metal. The standard electrode potential for the Mt3+/Mt couple is expected to be 0.8 V. Based on the most stable oxidation states of the lighter group 9 elements, the most stable oxidation states of meitnerium are predicted to be the +6, +3, and +1 states, with the +3 state being the most stable in aqueous solutions. In comparison, rhodium and iridium show a maximum oxidation state of +6, while the most stable states are +4 and +3 for iridium and +3 for rhodium. The oxidation state +9, represented only by iridium in [IrO4]+, might be possible for its congener meitnerium in the nonafluoride (MtF9) and the [MtO4]+ cation, although [IrO4]+ is expected to be more stable than these meitnerium compounds. The tetrahalides of meitnerium have also been predicted to have similar stabilities to those of iridium, thus also allowing a stable +4 state. It is further expected that the maximum oxidation states of elements from bohrium (element 107) to darmstadtium (element 110) may be stable in the gas phase but not in aqueous solution.

Physical and atomic

Meitnerium is expected to be a solid under normal conditions and to assume a face-centered cubic crystal structure, similarly to its lighter congener iridium.
It should be a very heavy metal with a density of around 27–28 g/cm³, which would be among the highest of any of the 118 known elements. Meitnerium is also predicted to be paramagnetic. Theoreticians have predicted the covalent radius of meitnerium to be 6 to 10 pm larger than that of iridium. The atomic radius of meitnerium is expected to be around 128 pm.

Experimental chemistry

Meitnerium is the first element on the periodic table whose chemistry has not yet been investigated. Unambiguous determination of the chemical characteristics of meitnerium has yet to be established, owing to the short half-lives of meitnerium isotopes and a limited number of likely volatile compounds that could be studied on a very small scale. One of the few meitnerium compounds that are likely to be sufficiently volatile is meitnerium hexafluoride (MtF6): its lighter homologue iridium hexafluoride (IrF6) is volatile above 60 °C, and therefore the analogous compound of meitnerium might also be sufficiently volatile; a volatile octafluoride (MtF8) might also be possible. For chemical studies to be carried out on a transactinide, at least four atoms must be produced, the half-life of the isotope used must be at least 1 second, and the rate of production must be at least one atom per week. Even though the half-life of 278Mt, the most stable confirmed meitnerium isotope, is 4.5 seconds, long enough to perform chemical studies, another obstacle is the need to increase the rate of production of meitnerium isotopes and allow experiments to carry on for weeks or months so that statistically significant results can be obtained. Separation and detection must be carried out continuously to separate out the meitnerium isotopes, with automated systems experimenting on the gas-phase and solution chemistry of meitnerium, as the yields for heavier elements are predicted to be smaller than those for lighter elements; some of the separation techniques used for bohrium and hassium could be reused. However, the experimental chemistry of meitnerium has not received as much attention as that of the heavier elements from copernicium to livermorium. The Lawrence Berkeley National Laboratory attempted to synthesize the isotope 271Mt in 2002–2003 for a possible chemical investigation of meitnerium, because it was expected that it might be more stable than nearby isotopes due to having 162 neutrons, a magic number for deformed nuclei; its half-life was predicted to be a few seconds, long enough for a chemical investigation. However, no atoms of 271Mt were detected; this isotope of meitnerium is currently unknown. An experiment determining the chemical properties of a transactinide would need to compare a compound of that transactinide with analogous compounds of some of its lighter homologues: for example, in the chemical characterization of hassium, hassium tetroxide (HsO4) was compared with the analogous osmium compound, osmium tetroxide (OsO4). In a preliminary step towards determining the chemical properties of meitnerium, the GSI attempted sublimation of the rhodium compounds rhodium(III) oxide (Rh2O3) and rhodium(III) chloride (RhCl3). However, macroscopic amounts of the oxide would not sublimate until 1000 °C and the chloride would not until 780 °C, and then only in the presence of carbon aerosol particles: these temperatures are far too high for such procedures to be used on meitnerium, as most of the current methods used for the investigation of the chemistry of superheavy elements do not work above 500 °C.
Following the successful synthesis of seaborgium hexacarbonyl, Sg(CO)6, in 2014, studies were conducted with the stable transition metals of groups 7 through 9, suggesting that carbonyl formation could be extended to further probe the chemistries of the early 6d transition metals from rutherfordium to meitnerium inclusive. Nevertheless, the challenges of short half-lives and difficult production reactions make meitnerium difficult to access for radiochemists, though the isotopes 278Mt and 276Mt are long-lived enough for chemical research and may be produced in the decay chains of 294Ts and 288Mc respectively. 276Mt is likely the more suitable, since producing tennessine requires a rare and rather short-lived berkelium target. The isotope 270Mt, observed in the decay chain of 278Nh with a half-life of 0.69 seconds, may also be sufficiently long-lived for chemical investigations, though a direct synthesis route leading to this isotope and more precise measurements of its decay properties would be required.
Megabyte
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB. The unit prefix mega is a multiplier of 1,000,000 (10⁶) in the International System of Units (SI). Therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities. In the computer and information technology fields, other definitions have been used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1,048,576 bytes (2²⁰ B), a quantity that conveniently expresses the binary architecture of digital computer memory. Standards bodies have deprecated this binary usage of the mega- prefix in favor of a new set of binary prefixes, by means of which the quantity 2²⁰ B is named mebibyte (symbol MiB).

Definitions

The unit megabyte is commonly used for 1000² (one million) bytes or 1024² bytes. The interpretation of using base 1024 originated as technical jargon for the byte multiples that needed to be expressed by powers of 2 but lacked a convenient name. As 1024 (2¹⁰) approximates 1000 (10³), roughly corresponding to the SI prefix kilo-, it was a convenient term to denote the binary multiple. In 1999, the International Electrotechnical Commission (IEC) published standards for binary prefixes requiring the use of megabyte to denote 1000² bytes, and mebibyte to denote 1024² bytes. By the end of 2009, the IEC standard had been adopted by the IEEE, EU, ISO and NIST. Nevertheless, the term megabyte continues to be widely used with different meanings.

Base 10

1 MB = 1,000,000 bytes (= 1000² B = 10⁶ B) is the definition following the rules of the International System of Units (SI) and the International Electrotechnical Commission (IEC). This definition is used in computer networking contexts and most storage media, particularly hard drives, flash-based storage, and DVDs, and is also consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The Mac OS X 10.6 file manager is a notable example of this usage in software: since Snow Leopard, file sizes are reported in decimal units. In this convention, one thousand megabytes (1000 MB) is equal to one gigabyte (1 GB), where 1 GB is one billion bytes.

Base 2

1 MB = 1,048,576 bytes (= 1024² B = 2²⁰ B) is the definition used by Microsoft Windows in reference to computer memory, such as random-access memory (RAM). This definition is synonymous with the unambiguous binary unit mebibyte. In this convention, one thousand and twenty-four megabytes (1024 MB) is equal to one gigabyte (1 GB), where 1 GB is 1024³ bytes (i.e., 1 GiB).

Mixed

1 MB = 1,024,000 bytes (= 1000×1024 B) is the definition used to describe the formatted capacity of the 1.44 MB HD floppy disk, which actually has a capacity of 1,474,560 bytes. Randomly addressable semiconductor memory doubles in size for each address lane added to an integrated circuit package, which favors counts that are powers of two. The capacity of a disk drive is the product of the sector size, number of sectors per track, number of tracks per side, and the number of disk platters in the drive. Changes in any of these factors would not usually double the size. (A short sketch contrasting these three conventions follows the usage examples below.)

Examples of use

Depending on compression methods and file format, a megabyte of data can roughly be: a 1-megapixel bitmap image (e.g. ~1152 × 864) with 256 colors (8 bits/pixel color depth) stored without any compression; 6 seconds of 44.1 kHz/16-bit uncompressed CD audio; 1 minute of 128 kbit/s MP3 lossy compressed audio;
or a typical English book volume in plain text format (500 pages × 2000 characters per page). The novel The Picture of Dorian Gray, by Oscar Wilde, hosted on Project Gutenberg as an uncompressed plain text file, is 0.429 MB. Great Expectations is 0.994 MB, and Moby Dick is 1.192 MB. The human genome consists of DNA representing 800 MB of data. The parts that differentiate one person from another can be compressed to 4 MB.
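A minimal sketch contrasting the three byte-counting conventions described above (the function name is illustrative):

```python
def as_units(n_bytes: int) -> str:
    """Report a byte count under the three conventions described above."""
    mb_decimal = n_bytes / 1000**2         # SI/IEC megabyte (10^6 B)
    mib_binary = n_bytes / 1024**2         # mebibyte (2^20 B)
    mb_mixed = n_bytes / (1000 * 1024)     # mixed 'floppy disk' megabyte
    return (f"{n_bytes} B = {mb_decimal:.3f} MB "
            f"= {mib_binary:.3f} MiB = {mb_mixed:.3f} mixed MB")

# The high-density floppy's 1,474,560 bytes comes out as exactly 1.44
# only under the mixed convention.
print(as_units(1_474_560))
```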
Monosaccharide
Monosaccharides (from Greek monos: single, sacchar: sugar), also called simple sugars, are the simplest forms of sugar and the most basic units (monomers) from which all carbohydrates are built. Chemically, monosaccharides are polyhydroxy aldehydes with the formula H(C=O)(CHOH)nH or polyhydroxy ketones with the formula H(CHOH)n(C=O)(CHOH)mH, with three or more carbon atoms. They are usually colorless, water-soluble, and crystalline organic solids. Contrary to their name (sugars), only some monosaccharides have a sweet taste. Most monosaccharides have the formula (CH2O)x (though not all molecules with this formula are monosaccharides). Examples of monosaccharides include glucose (dextrose), fructose (levulose), and galactose. Monosaccharides are the building blocks of disaccharides (such as sucrose, lactose and maltose) and polysaccharides (such as cellulose and starch). The table sugar used in everyday vernacular is itself a disaccharide, sucrose, comprising one molecule of each of the two monosaccharides D-glucose and D-fructose. Each carbon atom that supports a hydroxyl group is chiral, except those at the end of the chain. This gives rise to a number of isomeric forms, all with the same chemical formula. For instance, galactose and glucose are both aldohexoses, but have different physical structures and chemical properties. The monosaccharide glucose plays a pivotal role in metabolism, where the chemical energy is extracted through glycolysis and the citric acid cycle to provide energy to living organisms. Maltose is the dehydration condensate of two glucose molecules.

Structure and nomenclature

With few exceptions (e.g., deoxyribose), monosaccharides have the chemical formula (CH2O)x, where conventionally x ≥ 3. Monosaccharides can be classified by the number x of carbon atoms they contain: triose (3), tetrose (4), pentose (5), hexose (6), heptose (7), and so on. Glucose, used as an energy source and for the synthesis of starch, glycogen and cellulose, is a hexose. Ribose and deoxyribose (in RNA and DNA, respectively) are pentose sugars. Examples of heptoses include the ketoses mannoheptulose and sedoheptulose. Monosaccharides with eight or more carbons are rarely observed, as they are quite unstable. In aqueous solutions monosaccharides exist as rings if they have more than four carbons.

Linear-chain monosaccharides

Simple monosaccharides have a linear and unbranched carbon skeleton with one carbonyl (C=O) functional group, and one hydroxyl (OH) group on each of the remaining carbon atoms. Therefore, the molecular structure of a simple monosaccharide can be written as H(CHOH)n(C=O)(CHOH)mH, where n + 1 + m = x, so that its elemental formula is CxH2xOx. By convention, the carbon atoms are numbered from 1 to x along the backbone, starting from the end that is closest to the C=O group. Monosaccharides are the simplest units of carbohydrates and the simplest form of sugar. If the carbonyl is at position 1 (that is, n or m is zero), the molecule begins with a formyl group H(C=O)− and is technically an aldehyde. In that case, the compound is termed an aldose. Otherwise, the molecule has a keto group, a carbonyl −(C=O)− between two carbons; then it is formally a ketone, and is termed a ketose. Ketoses of biological interest usually have the carbonyl at position 2. The various classifications above can be combined, resulting in names such as "aldohexose" and "ketotriose". A more general nomenclature for open-chain monosaccharides combines a Greek prefix to indicate the number of carbons (tri-, tetr-, pent-, hex-, etc.)
with the suffixes "-ose" for aldoses and "-ulose" for ketoses. In the latter case, if the carbonyl is not at position 2, its position is then indicated by a numeric infix. So, for example, H(C=O)(CHOH)4H is pentose, H(CHOH)(C=O)(CHOH)3H is pentulose, and H(CHOH)2(C=O)(CHOH)2H is pent-3-ulose.

Open-chain stereoisomers

Two monosaccharides with equivalent molecular graphs (same chain length and same carbonyl position) may still be distinct stereoisomers, whose molecules differ in spatial orientation. This happens only if the molecule contains a stereogenic center, specifically a carbon atom that is chiral (connected to four distinct molecular sub-structures). Those four bonds can have either of two configurations in space, distinguished by their handedness. In a simple open-chain monosaccharide, every carbon is chiral except the first and the last atoms of the chain, and (in ketoses) the carbon with the keto group. For example, the triketose H(CHOH)(C=O)(CHOH)H (glycerone, dihydroxyacetone) has no stereogenic center, and therefore exists as a single stereoisomer. The other triose, the aldose H(C=O)(CHOH)2H (glyceraldehyde), has one chiral carbon—the central one, number 2—which is bonded to the groups −H, −OH, −CH2OH, and −(C=O)H. Therefore, it exists as two stereoisomers whose molecules are mirror images of each other (like a left and a right glove). Monosaccharides with four or more carbons may contain multiple chiral carbons, so they typically have more than two stereoisomers. The number of distinct stereoisomers with the same diagram is bounded by 2^c, where c is the total number of chiral carbons. The Fischer projection is a systematic way of drawing the skeletal formula of an acyclic monosaccharide so that the handedness of each chiral carbon is well specified. Each stereoisomer of a simple open-chain monosaccharide can be identified by the positions (right or left) in the Fischer diagram of the chiral hydroxyls (the hydroxyls attached to the chiral carbons). Most stereoisomers are themselves chiral (distinct from their mirror images). In the Fischer projection, two mirror-image isomers differ by having the positions of all chiral hydroxyls reversed right-to-left. Mirror-image isomers are chemically identical in non-chiral environments, but usually have very different biochemical properties and occurrences in nature. While most stereoisomers can be arranged in pairs of mirror-image forms, there are some non-chiral stereoisomers that are identical to their mirror images, in spite of having chiral centers. This happens whenever the molecular graph is symmetrical, as in the 3-ketopentoses H(CHOH)2(C=O)(CHOH)2H, where the two halves are mirror images of each other. In that case, mirroring is equivalent to a half-turn rotation. For this reason, there are only three distinct 3-ketopentose stereoisomers, even though the molecule has two chiral carbons. Distinct stereoisomers that are not mirror-images of each other usually have different chemical properties, even in non-chiral environments. Therefore, each mirror pair and each non-chiral stereoisomer may be given a specific monosaccharide name. For example, there are 16 distinct aldohexose stereoisomers, but the name "glucose" means a specific pair of mirror-image aldohexoses. In the Fischer projection, one of the two glucose isomers has the hydroxyl at left on C3, and at right on C4 and C5; while the other isomer has the reversed pattern.
These specific monosaccharide names have conventional three-letter abbreviations, like "Glc" for glucose and "Thr" for threose. Generally, a monosaccharide with n asymmetric carbons has 2^n stereoisomers. The number of open-chain stereoisomers for an aldose is twice that of a ketose of the same length: every ketose has 2^(n−3) stereoisomers and every aldose has 2^(n−2) stereoisomers, where n > 2 is the number of carbons. Stereoisomers that differ in the arrangement of the −OH and −H groups at only one of the asymmetric or chiral carbon atoms are referred to as epimers (this does not apply to carbons bearing the carbonyl functional group).

Configuration of monosaccharides

Like many chiral molecules, the two stereoisomers of glyceraldehyde will gradually rotate the polarization direction of linearly polarized light as it passes through them, even in solution. The two stereoisomers are identified with the prefixes D- and L-, according to the sense of rotation: D-glyceraldehyde is dextrorotatory (rotates the polarization axis clockwise), while L-glyceraldehyde is levorotatory (rotates it counterclockwise). The D- and L- prefixes are also used with other monosaccharides, to distinguish two particular stereoisomers that are mirror-images of each other. For this purpose, one considers the chiral carbon that is furthest removed from the C=O group. Its four bonds must connect to −H, −OH, −CH2OH, and the rest of the molecule. If the molecule can be rotated in space so that the directions of those four groups match those of the analog groups in D-glyceraldehyde's C2, then the isomer receives the D- prefix. Otherwise, it receives the L- prefix. In the Fischer projection, the D- and L- prefixes specify the configuration at the carbon atom that is second from the bottom: D- if the hydroxyl is on the right side, and L- if it is on the left side. Note that the D- and L- prefixes do not indicate the direction of rotation of polarized light, which is a combined effect of the arrangement at all chiral centers. However, the two enantiomers will always rotate the light in opposite directions, by the same amount.
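The stereoisomer counts above follow from the rule of two configurations per chiral carbon; a minimal sketch (which ignores the symmetric special cases such as the 3-ketopentoses noted earlier):

```python
def open_chain_stereoisomers(n_carbons: int, kind: str) -> int:
    """Count open-chain stereoisomers via the 2^(chiral carbons) rule.

    Aldoses of n carbons have n-2 chiral centers (2^(n-2) isomers);
    ketoses with the carbonyl at C2 have n-3 (2^(n-3) isomers).
    """
    if kind == "aldose":
        return 2 ** (n_carbons - 2)
    if kind == "ketose":
        return 2 ** (n_carbons - 3)
    raise ValueError("kind must be 'aldose' or 'ketose'")

print(open_chain_stereoisomers(6, "aldose"))  # 16 aldohexoses, as stated
print(open_chain_stereoisomers(6, "ketose"))  # 8 ketohexoses
```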
Microscopium
Microscopium ("the Microscope") is a minor constellation in the southern celestial hemisphere, one of twelve created in the 18th century by French astronomer Nicolas-Louis de Lacaille and one of several depicting scientific instruments. The name is a Latinised form of the Greek word for microscope. Its stars are faint and hardly visible from most of the non-tropical Northern Hemisphere. The constellation's brightest star is Gamma Microscopii of apparent magnitude 4.68, a yellow giant 2.5 times the Sun's mass located 223 ± 8 light-years distant. It passed within 1.14 and 3.45 light-years of the Sun some 3.9 million years ago, possibly disturbing the outer Solar System. Three star systems—WASP-7, AU Microscopii and HD 205739—have been determined to have planets, while other star —the Sun-like star HD 202628— has a debris disk. AU Microscopii and the binary red dwarf system AT Microscopii are probably a wide triple system and members of the Beta Pictoris moving group. Nicknamed "Speedy Mic", BO Microscopii is a star with an extremely fast rotation period of 9 hours, 7 minutes. Characteristics Microscopium is a small constellation bordered by Capricornus to the north, Piscis Austrinus and Grus to the east, Sagittarius to the west, and Indus to the south, touching on Telescopium to the southwest. The recommended three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Mic". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −27.45° and −45.09°. The whole constellation is visible to observers south of latitude 45°N. Given that its brightest stars are of fifth magnitude, the constellation is invisible to the naked eye in areas with light polluted skies. Features Stars French astronomer Nicolas-Louis de Lacaille charted and designated ten stars with the Bayer designations Alpha through to Iota in 1756. A star in neighbouring Indus that Lacaille had labelled Nu Indi turned out to be in Microscopium, so Gould renamed it Nu Microscopii. Francis Baily considered Gamma and Epsilon Microscopii to belong to the neighbouring constellation Piscis Austrinus, but subsequent cartographers did not follow this. In his 1725 Catalogus Britannicus, John Flamsteed labelled the stars 1, 2, 3 and 4 Piscis Austrini, which became Gamma Microscopii, HR 8076, HR 8110 and Epsilon Microscopii respectively. Within the constellation's borders, there are 43 stars brighter than or equal to apparent magnitude 6.5. Depicting the eyepiece of the microscope is Gamma Microscopii, which—at magnitude of 4.68—is the brightest star in the constellation. Having spent much of its 620-million-year lifespan as a blue-white main sequence star, it has swollen and cooled to become a yellow giant of spectral type G6III, with a diameter ten times that of the Sun. Measurement of its parallax yields a distance of 223 ± 8 light years from Earth. It likely passed within 1.14 and 3.45 light-years of the Sun some 3.9 million years ago, at around 2.5 times the mass of the Sun, it is possibly massive enough and close enough to disturb the Oort cloud. Alpha Microscopii is also an ageing yellow giant star of spectral type G7III with an apparent magnitude of 4.90. 
Located 400 ± 30 light-years away from Earth, it has swollen to 17.5 times the diameter of the Sun. Alpha has a 10th-magnitude companion, visible in 7.5 cm telescopes, though this is a chance alignment rather than a true binary system. Epsilon Microscopii lies 166 ± 5 light-years away, and is a white star of apparent magnitude 4.7 and spectral type A1V. Theta1 and Theta2 Microscopii make up a wide double whose components are splittable to the naked eye. Both are white A-class magnetic spectrum variable stars with strong metallic lines, similar to Cor Caroli. They mark the constellation's specimen slide. Many notable objects are too faint to be seen with the naked eye. AX Microscopii, better known as Lacaille 8760, is a red dwarf which lies only 12.9 light-years from the Solar System. At magnitude 6.68, it is the brightest red dwarf in the sky. BO Microscopii is a rapidly rotating star that has 80% the diameter of the Sun. Nicknamed "Speedy Mic", it has a rotation period of 9 hours 7 minutes. An active star, it has prominent stellar flares that average 100 times stronger than those of the Sun and emit energy mainly in the X-ray and ultraviolet bands of the spectrum. It lies 218 ± 4 light-years away from the Sun. AT Microscopii is a binary star system, both members of which are flare-star red dwarfs. The system lies close to, and may form a very wide triple system with, AU Microscopii, a young star which has a planetary system in the making with a debris disk. The three stars are candidate members of the Beta Pictoris moving group, one of the nearest associations of stars that share a common motion through space. The Astronomical Society of Southern Africa in 2003 reported that observations of four of the Mira variables in Microscopium were very urgently needed, as data on their light curves was incomplete. Two of them—R and S Microscopii—are challenging stars for novice amateur astronomers, and the other two, U and RY Microscopii, are more difficult still. Another red giant, T Microscopii, is a semiregular variable that ranges between magnitudes 7.7 and 9.6 over 344 days. Of apparent magnitude 11, DD Microscopii is a symbiotic star system composed of an orange giant of spectral type K2III and a white dwarf in close orbit, with the smaller star ionizing the stellar wind of the larger star. The system has a low metallicity; combined with its high galactic latitude, this indicates that the star system has its origin in the galactic halo of the Milky Way. HD 205739 is a yellow-white main sequence star of spectral type F7V that is around 1.22 times as massive and 2.3 times as luminous as the Sun. It has a Jupiter-sized planet with an orbital period of 280 days that was discovered by the radial velocity method. WASP-7 is a star of spectral type F5V with an apparent magnitude of 9.54, about 1.28 times as massive as the Sun. Its hot Jupiter planet—WASP-7b—was discovered by the transit method and found to orbit the star every 4.95 days. HD 202628 is a sunlike star of spectral type G2V with a debris disk that extends from 158 to 220 AU from the star. The disk's inner edge is sharply defined, indicating a probable planet orbiting between 86 and 158 AU from the star.

Deep sky objects

Describing Microscopium as "totally unremarkable", astronomer Patrick Moore concluded there was nothing of interest for amateur observers. NGC 6925 is a barred spiral galaxy of apparent magnitude 11.3 which is lens-shaped, as it lies almost edge-on to observers on Earth, 3.7 degrees west-northwest of Alpha Microscopii.
SN 2011ei, a Type II supernova in NGC 6925, was discovered by Stu Parker in New Zealand in July 2011. NGC 6923 lies nearby and is a magnitude fainter still. The Microscopium Void is a roughly rectangular region of relatively empty space, bounded by incomplete sheets of galaxies from other voids. The Microscopium Supercluster is an overdensity of galaxy clusters that was first noticed in the early 1990s. The component Abell clusters 3695 and 3696 are likely to be gravitationally bound, while the relations of Abell clusters 3693 and 3705 in the same field are unclear. Meteor showers The Microscopids are a minor meteor shower that appears from June to mid-July. History Microscopium lies in a region where Ptolemy had listed six 'unformed' stars behind the tail of Piscis Austrinus. Al-Sufi did not include these stars in his revision of the Almagest, presumably because he could not identify them. Microscopium was introduced in 1751–52 by Lacaille with the French name le Microscope, after he had observed and catalogued 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. Commemorating the compound microscope, Lacaille had Latinised the constellation's name to Microscopium by 1763.
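The star distances quoted above, such as the 223 ± 8 light-years for Gamma Microscopii, come from parallax measurements, and the conversion is a one-line formula: distance in parsecs is the reciprocal of the parallax in arcseconds. A minimal sketch of that conversion follows; the 14.6-milliarcsecond input is an illustrative value back-computed from the quoted distance, not a catalogue measurement.

```python
# Convert a stellar parallax to a distance in light-years.
# d [pc] = 1 / p [arcsec]; 1 pc = 3.26156 ly.
# The example parallax is illustrative, back-computed from the
# 223-light-year figure quoted above, not a catalogue value.

LY_PER_PC = 3.26156

def parallax_to_light_years(parallax_mas: float) -> float:
    """Distance in light-years for a parallax given in milliarcseconds."""
    distance_pc = 1.0 / (parallax_mas / 1000.0)
    return distance_pc * LY_PER_PC

print(round(parallax_to_light_years(14.6)))  # ~223
```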
https://en.wikipedia.org/wiki/IC%20342/Maffei%20Group
IC 342/Maffei Group
The IC 342/Maffei Group (also known as the IC 342 Group or the Maffei 1 Group) corresponds to one or two galaxy groups close to the Local Group. The member galaxies are mostly concentrated around either IC 342 or Maffei 1, which are the two brightest galaxies in the group. The group is part of the Virgo Supercluster. However, recent studies have found that the two subgroups are unrelated; while the IC 342 group is the nearest galaxy group to the Milky Way, the Maffei 1 group is several times farther away, and is not gravitationally bound to the IC 342 group. Members The table below lists galaxies that have been identified as associated with the IC342/Maffei 1 Group by I. D. Karachentsev. Note that Karachentsev divides this group into two subgroups centered around IC 342 and Maffei 1. Additionally, KKH 37 is listed as possibly being a member of the IC 342 Subgroup, and KKH 6 is listed as possibly being a member of the Maffei 1 Subgroup. Foreground dust obscuration As seen from Earth, the group lies near the plane of the Milky Way (a region sometimes called the Zone of Avoidance). Consequently, the light from many of the galaxies is severely affected by dust obscuration within the Milky Way. This complicates observational studies of the group, as uncertainties in the dust obscuration also affect measurements of the galaxies' luminosities and distances as well as other related quantities. Moreover, the galaxies within the group have historically been difficult to identify. Many galaxies have only been discovered using late 20th-century astronomical instrumentation. For example, Maffei 1 and Maffei 2 were only discovered in 1968 using infrared photographic images of the region. Furthermore, it is difficult to determine whether some objects near IC 342 or Maffei 1 are galaxies associated with the IC 342/Maffei Group or diffuse foreground objects within the Milky Way that merely look like galaxies. For example, the objects MB 2 and Camelopardalis C were once thought to be dwarf galaxies in the IC 342/Maffei Group but are now known to be objects within the Milky Way. Group formation and possible interactions with the Local Group Since the IC 342/Maffei Group and the Local Group are located physically close to each other, the two groups may have influenced each other's evolution during the early stages of galaxy formation. An analysis of the velocities and distances to the IC 342/Maffei Group as measured by M. J. Valtonen and collaborators suggested that IC 342 and Maffei 1 were moving faster than could be accounted for by the expansion of the universe. They therefore suggested that IC 342 and Maffei 1 were ejected from the Local Group after a violent gravitational interaction with the Andromeda Galaxy during the early stages of the formation of the two groups. However, this interpretation is dependent on the distances measured to the galaxies in the group, which in turn are dependent on accurately measuring the degree to which interstellar dust in the Milky Way obscures the group. More recent observations have demonstrated that the dust obscuration may have been previously overestimated, so the distances may have been underestimated. If these new distance measurements are correct, then the galaxies in the IC 342/Maffei Group appear to be moving at the rate expected from the expansion of the universe, and the scenario of a collision between the IC 342/Maffei Group and the Local Group would be implausible.
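The line of argument above amounts to comparing each galaxy's measured radial velocity with the recession velocity that pure Hubble expansion would predict at its dust-corrected distance; the residual is the peculiar velocity. A minimal sketch of that check follows; the Hubble constant, distance, and observed velocity used here are illustrative assumptions, not measured values for IC 342 or Maffei 1.

```python
# Peculiar velocity = observed radial velocity - Hubble-flow velocity
# (v = H0 * d). All inputs are illustrative assumptions, not
# measurements of IC 342 or Maffei 1.

H0 = 70.0  # assumed Hubble constant, km/s per Mpc

def peculiar_velocity(observed_kms: float, distance_mpc: float) -> float:
    """Velocity left over after subtracting the expansion of the universe."""
    return observed_kms - H0 * distance_mpc

# Underestimating the distance (e.g. from overestimated dust obscuration)
# makes the peculiar velocity look anomalously large:
for distance in (2.0, 3.3):
    print(f"d = {distance} Mpc: {peculiar_velocity(240.0, distance):+.0f} km/s")
```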
https://en.wikipedia.org/wiki/M81%20Group
M81 Group
The M81 Group is a galaxy group in the constellations Ursa Major and Camelopardalis that includes the galaxies Messier 81 and Messier 82, as well as several other galaxies with high apparent brightnesses. The approximate center of the group is located at a distance of 3.6 Mpc, making it one of the nearest groups to the Local Group. The group is estimated to have a total mass of (1.03 ± 0.17) × 10¹² solar masses. The M81 Group, the Local Group, and other nearby groups all lie within the Virgo Supercluster (i.e. the Local Supercluster). Members The table below lists galaxies that have been identified as associated with the M81 Group by I. D. Karachentsev. Note that the object names used in the table differ from the names used by Karachentsev. NGC, IC, UGC, and PGC numbers have been used in many cases to allow for easier referencing. Interactions within the group Messier 81, Messier 82, and NGC 3077 are all strongly interacting with each other. Observations of the 21-centimeter hydrogen line indicate how the galaxies are connected. The gravitational interactions have stripped some hydrogen gas away from all three galaxies, leading to the formation of filamentary gas structures within the group. Bridges of neutral hydrogen have been shown to connect M81 with M82 and NGC 3077. Moreover, the interactions have also caused some interstellar gas to fall into the centers of Messier 82 and NGC 3077, which has led to strong starburst activity (or the formation of many stars) within the centers of these two galaxies. Computer simulations of tidal interactions have been used to show how the current structure of the group could have been created.
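The 21-centimeter observations mentioned above work by measuring the Doppler shift of the neutral-hydrogen line away from its rest frequency of about 1420.406 MHz; the shift maps directly to a radial velocity for each parcel of gas. A minimal sketch of that conversion follows, with an illustrative observed frequency.

```python
# Radial velocity from the Doppler shift of the 21 cm hydrogen line,
# the measurement used to trace the gas bridges between M81, M82, and
# NGC 3077. The observed frequency below is illustrative.

C_KMS = 299_792.458        # speed of light, km/s
F_REST_MHZ = 1420.40575    # rest frequency of the HI line

def hi_radial_velocity(observed_mhz: float) -> float:
    """Non-relativistic radial velocity in km/s; positive means receding."""
    return C_KMS * (F_REST_MHZ - observed_mhz) / F_REST_MHZ

print(f"{hi_radial_velocity(1420.2):+.0f} km/s")  # about +43 km/s, receding
```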
https://en.wikipedia.org/wiki/Meteorite
Meteorite
A meteorite is a rock that originated in outer space and has fallen to the surface of a planet or moon. When the original object enters the atmosphere, various factors such as friction, pressure, and chemical interactions with the atmospheric gases cause it to heat up and radiate energy. It then becomes a meteor and forms a fireball, also known as a shooting star; astronomers call the brightest examples "bolides". Once it settles on the larger body's surface, the meteor becomes a meteorite. Meteorites vary greatly in size. For geologists, a bolide is a meteorite large enough to create an impact crater. Meteorites that are recovered after being observed as they transit the atmosphere and impact Earth are called meteorite falls. All others are known as meteorite finds. Meteorites have traditionally been divided into three broad categories: stony meteorites that are rocks, mainly composed of silicate minerals; iron meteorites that are largely composed of ferronickel; and stony-iron meteorites that contain large amounts of both metallic and rocky material. Modern classification schemes divide meteorites into groups according to their structure, chemical and isotopic composition and mineralogy. "Meteorites" less than ~1 mm in diameter are classified as micrometeorites; however, micrometeorites differ from meteorites in that they typically melt completely in the atmosphere and fall to Earth as quenched droplets. Extraterrestrial meteorites have been found on the Moon and on Mars. Most space rocks crashing into Earth come from a small number of sources. The origin of most meteorites can be traced to just a handful of asteroid breakup events – and possibly even individual asteroids. Fall phenomena Most meteoroids disintegrate when entering the Earth's atmosphere. Usually, five to ten a year are observed to fall and are subsequently recovered and made known to scientists. Few meteorites are large enough to create large impact craters. Instead, they typically arrive at the surface at their terminal velocity and, at most, create a small pit. Large meteoroids may strike the Earth with a significant fraction of their escape velocity (second cosmic velocity), leaving behind a hypervelocity impact crater. The kind of crater will depend on the size, composition, degree of fragmentation, and incoming angle of the impactor. The force of such collisions has the potential to cause widespread destruction. The most frequent hypervelocity cratering events on the Earth are caused by iron meteoroids, which are most easily able to transit the atmosphere intact. Examples of craters caused by iron meteoroids include Barringer Meteor Crater, Odessa Meteor Crater, Wabar craters, and Wolfe Creek crater; iron meteorites are found in association with all of these craters. In contrast, even relatively large stony or icy bodies such as small comets or asteroids, up to millions of tons, are disrupted in the atmosphere, and do not make impact craters. Although such disruption events are uncommon, they can cause a considerable concussion to occur; the famed Tunguska event probably resulted from such an incident. Very large stony objects, hundreds of meters in diameter or more, weighing tens of millions of tons or more, can reach the surface and cause large craters but are very rare. Such events are generally so energetic that the impactor is completely destroyed, leaving no meteorites.
(The first example of a stony meteorite found in association with a large impact crater, the Morokweng impact structure in South Africa, was reported in May 2006.) Several phenomena are well documented during witnessed meteorite falls too small to produce hypervelocity craters. The fireball that occurs as the meteoroid passes through the atmosphere can appear to be very bright, rivaling the Sun in intensity, although most are far dimmer and may not even be noticed during the daytime. Various colors have been reported, including yellow, green, and red. Flashes and bursts of light can occur as the object breaks up. Explosions, detonations, and rumblings are often heard during meteorite falls, which can be caused by sonic booms as well as shock waves resulting from major fragmentation events. These sounds can be heard over wide areas, with a radius of a hundred or more kilometers. Whistling and hissing sounds are also sometimes heard but are poorly understood. Following the passage of the fireball, it is not unusual for a dust trail to linger in the atmosphere for several minutes. As meteoroids are heated during atmospheric entry, their surfaces melt and experience ablation. They can be sculpted into various shapes during this process, sometimes resulting in shallow thumbprint-like indentations on their surfaces called regmaglypts. If the meteoroid maintains a fixed orientation for some time, without tumbling, it may develop a conical "nose cone" or "heat shield" shape. As it decelerates, eventually the molten surface layer solidifies into a thin fusion crust, which on most meteorites is black (on some achondrites, the fusion crust may be very light-colored). On stony meteorites, the heat-affected zone is at most a few mm deep; in iron meteorites, which are more thermally conductive, the structure of the metal may be affected by heat to a greater depth below the surface. Reports vary; some meteorites are reported to be "burning hot to the touch" upon landing, while others are alleged to have been cold enough to condense water and form a frost. Meteoroids that disintegrate in the atmosphere may fall as meteorite showers, which can range from only a few up to thousands of separate individuals. The area over which a meteorite shower falls is known as its strewn field. Strewn fields are commonly elliptical in shape, with the major axis parallel to the direction of flight. In most cases, the largest meteorites in a shower are found farthest down-range in the strewn field. Classification Most meteorites are stony meteorites, classed as chondrites and achondrites. Only about 6% of meteorites are iron meteorites or a blend of rock and metal, the stony-iron meteorites. Modern classification of meteorites is complex. The review paper of Krot et al. (2007) summarizes modern meteorite taxonomy. About 86% of the meteorites are chondrites, which are named for the small, round particles they contain. These particles, or chondrules, are composed mostly of silicate minerals that appear to have been melted while they were free-floating objects in space. Certain types of chondrites also contain small amounts of organic matter, including amino acids, and presolar grains. Chondrites are typically about 4.55 billion years old and are thought to represent material from the asteroid belt that never coalesced into large bodies. Like comets, chondritic asteroids are some of the oldest and most primitive materials in the Solar System. Chondrites are often considered to be "the building blocks of the planets".
About 8% of the meteorites are achondrites (meaning they do not contain chondrules), some of which are similar to terrestrial igneous rocks. Most achondrites are also ancient rocks, and are thought to represent crustal material of differentiated planetesimals. One large family of achondrites (the HED meteorites) may have originated on the parent body of the Vesta Family, although this claim is disputed. Others derive from unidentified asteroids. Two small groups of achondrites are special, as they are younger and do not appear to come from the asteroid belt. One of these groups comes from the Moon, and includes rocks similar to those brought back to Earth by the Apollo and Luna programs. The other group is almost certainly from Mars and constitutes the only materials from other planets ever recovered by humans. About 5% of meteorites that have been seen to fall are iron meteorites composed of iron-nickel alloys, such as kamacite and/or taenite. Most iron meteorites are thought to come from the cores of planetesimals that were once molten. As with the Earth, the denser metal separated from silicate material and sank toward the center of the planetesimal, forming its core. After the planetesimal solidified, it broke up in a collision with another planetesimal. Due to the low abundance of iron meteorites in collection areas such as Antarctica, where most of the meteoric material that has fallen can be recovered, it is possible that the percentage of iron-meteorite falls is lower than 5%. This would be explained by a recovery bias; laypeople are more likely to notice and recover solid masses of metal than most other meteorite types. The abundance of iron meteorites relative to total Antarctic finds is 0.4%. Stony-iron meteorites constitute the remaining 1%. They are a mixture of iron-nickel metal and silicate minerals. One type, called pallasites, is thought to have originated in the boundary zone above the core regions where iron meteorites originated. The other major type of stony-iron meteorites is the mesosiderites. Tektites (from Greek tektos, molten) are not themselves meteorites, but are rather natural glass objects up to a few centimeters in size that were formed—according to most scientists—by the impacts of large meteorites on Earth's surface. A few researchers have favored tektites originating from the Moon as volcanic ejecta, but this theory has lost much of its support over the last few decades. Frequency The diameter of the largest impactor to hit Earth on any given day is likely to be about 40 centimeters, in a given year about 4 meters, and in a given century about 20 meters. These statistics are obtained by the following: over at least the range from about 5 centimeters to roughly 300 meters, the rate at which Earth receives meteors obeys a power-law distribution as follows: N(>D) = 37 D^(−2.7), where N(>D) is the expected number of objects larger than a diameter of D meters to hit Earth in a year. This is based on observations of bright meteors seen from the ground and space, combined with surveys of near-Earth asteroids. At larger diameters, the predicted rate is somewhat higher, with a 2 km (1.2 mi) asteroid (one teraton TNT equivalent) every couple of million years, about 10 times as often as the power-law extrapolation would predict. Chemistry In 2015, NASA scientists reported that complex organic compounds found in DNA and RNA, including uracil, cytosine, and thymine, have been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites.
Pyrimidine and polycyclic aromatic hydrocarbons (PAHs) may have been formed in red giants or in interstellar dust and gas clouds, according to the scientists. In 2018, researchers found that 4.5-billion-year-old meteorites found on Earth contained liquid water along with prebiotic complex organic substances that may be ingredients for life. In 2019, scientists reported detecting sugar molecules in meteorites for the first time, including ribose, suggesting that chemical processes on asteroids can produce some organic compounds fundamental to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth. In 2022, a Japanese group reported that they had found adenine (A), thymine (T), guanine (G), cytosine (C) and uracil (U) inside carbon-rich meteorites. These compounds are building blocks of DNA and RNA, the genetic code of all life on Earth. These compounds have also occurred spontaneously in laboratory settings emulating conditions in outer space. Sources of meteorites found on Earth Until recently, only about 6% of meteorites had been traced to their sources: the Moon, Mars, and the asteroid Vesta. Approximately 70% of meteorites found on Earth now appear to originate from break-ups of three asteroids. Weathering Most meteorites date from the early Solar System and are by far the oldest extant material on Earth. Analysis of terrestrial weathering due to water, salt, oxygen, etc. is used to quantify the degree of alteration that a meteorite has experienced. Several qualitative weathering indices have been applied to Antarctic and desert samples. The most commonly employed weathering scale, used for ordinary chondrites, ranges from W0 (pristine state) to W6 (heavy alteration). Fossil meteorites "Fossil" meteorites are sometimes discovered by geologists. They represent the highly weathered remains of meteorites that fell to Earth in the remote past and were preserved in sedimentary deposits sufficiently well that they can be recognized through mineralogical and geochemical studies. The Thorsberg limestone quarry in Sweden has produced an anomalously large number – exceeding one hundred – of fossil meteorites from the Ordovician, nearly all of which are highly weathered L-chondrites that still resemble the original meteorite under a petrographic microscope, but which have had their original material almost entirely replaced by terrestrial secondary mineralization. The extraterrestrial provenance was demonstrated in part through isotopic analysis of relict spinel grains, a mineral that is common in meteorites, is insoluble in water, and is able to persist chemically unchanged in the terrestrial weathering environment. Scientists believe that these meteorites, similar examples of which have also been found in Russia and China, all originated from the same source, a collision that occurred somewhere between Jupiter and Mars. One of these fossil meteorites, dubbed Österplana 065, appears to represent a distinct type of meteorite that is "extinct" in the sense that it is no longer falling to Earth, the parent body having already been completely depleted from the reservoir of near-Earth objects. Collection A "meteorite fall", also called an "observed fall", is a meteorite collected after its arrival was observed by people or automated devices. Any other meteorite is called a "meteorite find". There are more than 1,100 documented falls listed in widely used databases, most of which have specimens in modern collections.
The Meteoritical Bulletin Database lists 1,180 confirmed falls. Falls Most meteorite falls are collected on the basis of eyewitness accounts of the fireball or the impact of the object on the ground, or both. Therefore, despite the fact that meteorites fall with virtually equal probability everywhere on Earth, verified meteorite falls tend to be concentrated in areas with higher human population densities, such as Europe, Japan, and northern India. A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first of these was the Příbram meteorite, which fell in Czechoslovakia (now the Czech Republic) in 1959. In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite. Following the Příbram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Network, operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern US. This program also observed a meteorite fall, the Lost City chondrite, allowing its recovery and a calculation of its orbit. Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, Innisfree, in 1977. Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Příbram, led to the discovery and orbit calculations for the Neuschwanstein meteorite in 2002. NASA has an automated system that detects meteors and calculates the orbit, magnitude, ground track, and other parameters over the southeast USA, which often detects a number of events each night. Finds Until the twentieth century, only a few hundred meteorite finds had ever been discovered. More than 80% of these were iron and stony-iron meteorites, which are easily distinguished from local rocks. To this day, few stony meteorites are reported each year that can be considered to be "accidental" finds. The reason there are now more than 30,000 meteorite finds in the world's collections started with the discovery by Harvey H. Nininger that meteorites are much more common on the surface of the Earth than was previously thought. Canada Meteorites that land in Canada are protected under the Cultural Property Export and Import Act. In July 2024, security footage recorded a meteorite crashing into a residential property in Marshfield, Prince Edward Island. It is believed to be the first time such an event has been captured on camera and the sound of the crash recorded. It was subsequently registered as the Charlottetown meteorite, named after the city near where it landed. United States Nininger's strategy was to search for meteorites in the Great Plains of the United States, where the land was largely cultivated and the soil contained few rocks. Between the late 1920s and the 1950s, he traveled across the region, educating local people about what meteorites looked like and what to do if they thought they had found one, for example, in the course of clearing a field. The result was the discovery of more than 200 new meteorites, mostly stony types. In the late 1960s, Roosevelt County, New Mexico, was found to be a particularly good place to find meteorites.
After the discovery of a few meteorites in 1967, a public awareness campaign resulted in the finding of nearly 100 new specimens in the next few years, many of them found by a single person, Ivan Wilson. In total, nearly 140 meteorites have been found in the region since 1967. In the area of the finds, the ground was originally covered by a shallow, loose soil sitting atop a hardpan layer. During the Dust Bowl era, the loose soil was blown off, leaving any rocks and meteorites that were present stranded on the exposed surface. Beginning in the mid-1960s, amateur meteorite hunters began scouring the arid areas of the southwestern United States. To date, thousands of meteorites have been recovered from the Mojave, Sonoran, Great Basin, and Chihuahuan Deserts, with many being recovered on dry lake beds. Significant finds include the three-tonne Old Woman meteorite, currently on display at the Desert Discovery Center in Barstow, California, and the Franconia and Gold Basin meteorite strewn fields; hundreds of kilograms of meteorites have been recovered from each. A number of finds from the American Southwest have been submitted with false find locations, as many finders think it is unwise to publicly share that information for fear of confiscation by the federal government and competition with other hunters at published find sites. Several of the meteorites found recently are currently on display in the Griffith Observatory in Los Angeles, and at UCLA's Meteorite Gallery. Antarctica A few meteorites were found in Antarctica between 1912 and 1964. In 1969, the 10th Japanese Antarctic Research Expedition found nine meteorites on a blue ice field near the Yamato Mountains. With this discovery came the realization that movement of ice sheets might act to concentrate meteorites in certain areas. After a dozen other specimens were found in the same place in 1973, a Japanese expedition was launched in 1974 dedicated to the search for meteorites. This team recovered nearly 700 meteorites. Shortly thereafter, the United States began its own program to search for Antarctic meteorites, operating along the Transantarctic Mountains on the other side of the continent: the Antarctic Search for Meteorites (ANSMET) program. European teams, starting with a consortium called "EUROMET" in the 1990/91 season, and continuing with a program by the Italian Programma Nazionale di Ricerche in Antartide, have also conducted systematic searches for Antarctic meteorites. The Antarctic Scientific Exploration of China has conducted successful meteorite searches since 2000. A Korean program (KOREAMET) was launched in 2007 and has collected a few meteorites. The combined efforts of all of these expeditions have produced more than 23,000 classified meteorite specimens since 1974, with thousands more that have not yet been classified. For more information see the article by Harvey (2003). Australia At about the same time as meteorite concentrations were being discovered in the cold desert of Antarctica, collectors discovered that many meteorites could also be found in the hot deserts of Australia. Several dozen meteorites had already been found in the Nullarbor region of Western and South Australia. Systematic searches between about 1971 and the present recovered more than 500 others, ~300 of which are currently well characterized. The meteorites can be found in this region because the land presents a flat, featureless plain covered by limestone.
In the extremely arid climate, there has been relatively little weathering or sedimentation on the surface for tens of thousands of years, allowing meteorites to accumulate without being buried or destroyed. The dark-colored meteorites can then be recognized among the very different-looking limestone pebbles and rocks. The Sahara In 1986–87, a German team installing a network of seismic stations while prospecting for oil discovered about 65 meteorites on a flat desert plain southeast of Dirj (Daraj), Libya. A few years later, a desert enthusiast saw photographs of meteorites being recovered by scientists in Antarctica, and thought that he had seen similar occurrences in northern Africa. In 1989, he recovered about 100 meteorites from several distinct locations in Libya and Algeria. Over the next several years, he and others who followed found at least 400 more meteorites. The find locations were generally in regions known as regs or hamadas: flat, featureless areas covered only by small pebbles and minor amounts of sand. Dark-colored meteorites can be easily spotted in these places. In the case of several meteorite fields, such as Dar al Gani, Dhofar, and others, favorable light-colored geology consisting of basic rocks (clays, dolomites, and limestones) makes meteorites particularly easy to identify. Although meteorites had been sold commercially and collected by hobbyists for many decades, up to the time of the Saharan finds of the late 1980s and early 1990s, most meteorites were deposited in or purchased by museums and similar institutions where they were exhibited and made available for scientific research. The sudden availability of large numbers of meteorites that could be found with relative ease in places that were readily accessible (especially compared to Antarctica) led to a rapid rise in commercial collection of meteorites. This process was accelerated when, in 1997, meteorites coming from both the Moon and Mars were found in Libya. By the late 1990s, private meteorite-collecting expeditions had been launched throughout the Sahara. Specimens of the meteorites recovered in this way are still deposited in research collections, but most of the material is sold to private collectors. These expeditions have now brought the total number of well-described meteorites found in Algeria and Libya to more than 500. Northwest Africa Meteorite markets came into existence in the late 1990s, especially in Morocco. This trade was driven by Western commercialization and an increasing number of collectors. The meteorites were supplied by nomads and local people who combed the deserts looking for specimens to sell. Many thousands of meteorites have been distributed in this way, most of which lack any information about how, when, or where they were discovered. These are the so-called "Northwest Africa" meteorites. When they are classified, they are named "Northwest Africa" (abbreviated NWA) followed by a number. It is generally accepted that NWA meteorites originate in Morocco, Algeria, Western Sahara, Mali, and possibly even further afield. Nearly all of these meteorites leave Africa through Morocco. Scores of important meteorites, including lunar and Martian ones, have been discovered and made available to science via this route. A few of the more notable meteorites recovered include Tissint and Northwest Africa 7034.
Tissint was the first witnessed Martian meteorite fall in more than fifty years; NWA 7034 is the oldest meteorite known to come from Mars, and is a unique water-bearing regolith breccia. Arabian Peninsula In 1999, meteorite hunters discovered that the deserts in southern and central Oman were also favorable for the collection of many specimens. The gravel plains in the Dhofar and Al Wusta regions of Oman, south of the sandy deserts of the Rub' al Khali, had yielded about 5,000 meteorites as of mid-2009. Included among these are a large number of lunar and Martian meteorites, making Oman a particularly important area both for scientists and collectors. Early expeditions to Oman were mainly done by commercial meteorite dealers; however, international teams of Omani and European scientists have also now collected specimens. The recovery of meteorites from Oman is currently prohibited by national law, but a number of international hunters continue to remove specimens now deemed national treasures. This new law provoked a small international incident, as its implementation preceded any public notification of such a law, resulting in the prolonged imprisonment of a large group of meteorite hunters, primarily from Russia but also including members from the US and several European countries. In human affairs Meteorites have figured into human culture since their earliest discovery as ceremonial or religious objects, as the subject of writing about events occurring in the sky and as a source of peril. The oldest known iron artifacts are nine small beads hammered from meteoritic iron. They were found in northern Egypt and have been securely dated to 3200 BC. Ceremonial or religious use Although the use of the metal found in meteorites is also recorded in myths of many countries and cultures where the celestial source was often acknowledged, scientific documentation only began in the last few centuries. Meteorite falls may have been the source of cultish worship. The cult in the Temple of Artemis at Ephesus, one of the Seven Wonders of the Ancient World, possibly originated with the observation and recovery of a meteorite that was understood by contemporaries to have fallen to the Earth from Jupiter, the principal Roman deity. There are reports that a sacred stone was enshrined at the temple that may have been a meteorite. The Black Stone set into the wall of the Kaaba has often been presumed to be a meteorite, but the little available evidence for this is inconclusive. Some Native Americans treated meteorites as ceremonial objects. In 1915, an iron meteorite was found in a Sinagua (c. 1100–1200 AD) burial cist near Camp Verde, Arizona, respectfully wrapped in a feather cloth. A small pallasite was found in a pottery jar in an old burial at Pojoaque Pueblo, New Mexico. Nininger reports several other such instances, in the Southwest US and elsewhere, such as the discovery of Native American beads of meteoric iron found in Hopewell burial mounds, and the discovery of the Winona meteorite in a Native American stone-walled crypt. Historical writings In medieval China during the Song dynasty, a meteorite strike event was recorded by Shen Kuo in 1064 AD near Changzhou. He reported that "a loud noise that sounded like thunder was heard in the sky; a giant star, almost like the moon, appeared in the southeast"; the crater, with the still-hot meteorite inside, was later found nearby.
Two of the oldest recorded meteorite falls in Europe are the Elbogen (1400) and Ensisheim (1492) meteorites. The German physicist Ernst Florens Chladni was the first to publish (in 1794) the idea that meteorites might be rocks that originated not from Earth, but from space. His booklet was "On the Origin of the Iron Masses Found by Pallas and Others Similar to it, and on Some Associated Natural Phenomena". In this he compiled all available data on several meteorite finds and falls and concluded that they must have their origins in outer space. The scientific community of the time responded with resistance and mockery. It took nearly ten years before a general acceptance of the origin of meteorites was achieved through the work of the French scientist Jean-Baptiste Biot and the British chemist Edward Howard. Biot's study, initiated by the French Academy of Sciences, was prompted by a fall of thousands of meteorites on 26 April 1803 from the skies of L'Aigle, France. Striking people or property Throughout history, many first- and second-hand reports speak of meteorites killing humans and other animals. One such event, from 1490 AD in China, purportedly killed thousands of people. John Lewis has compiled some of these reports, and summarizes, "No one in recorded history has ever been killed by a meteorite in the presence of a meteoriticist and a medical doctor" and "reviewers who make sweeping negative conclusions usually do not cite any of the primary publications in which the eyewitnesses describe their experiences, and give no evidence of having read them". Modern reports of meteorite strikes include: In 1954, in Sylacauga, Alabama, a stony chondrite, the Hodges meteorite or Sylacauga meteorite, crashed through a roof and injured an occupant. A small fragment of the Mbale meteorite fall from Uganda struck a youth, causing no injury. In October 2021, a meteorite penetrated the roof of a house in Golden, British Columbia, landing on an occupant's bed. Notable examples Naming Meteorites are always named for the places they were found, where practical, usually a nearby town or geographic feature. In cases where many meteorites were found in one place, the name may be followed by a number or letter (e.g., Allan Hills 84001 or Dimmitt (b)). The name designated by the Meteoritical Society is used by scientists, catalogers, and most collectors. Terrestrial Allende – largest known carbonaceous chondrite (Chihuahua, Mexico, 1969). Allan Hills A81005 – First meteorite determined to be of lunar origin. Allan Hills 84001 – Mars meteorite that was claimed to prove the existence of life on Mars. The Bacubirito Meteorite (Meteorito de Bacubirito) – A meteorite estimated to weigh around 20 tonnes. Campo del Cielo – a group of iron meteorites associated with a crater field (of the same name) of at least 26 craters in West Chaco Province, Argentina. The total weight of meteorites recovered exceeds 100 tonnes. Canyon Diablo – Associated with Meteor Crater in Arizona. Cape York – One of the largest meteorites in the world. A 34-ton fragment called "Ahnighito" is exhibited at the American Museum of Natural History; the largest meteorite on exhibit in any museum. Gibeon – A large iron meteorite in Namibia that created the largest known strewn field. Hoba – The largest known intact meteorite. Kaidun – An unusual carbonaceous chondrite. Mbosi meteorite – A 16-metric-ton ungrouped iron meteorite in Tanzania. Murchison – A carbonaceous chondrite found to contain nucleobases – building blocks of life.
Nōgata – The oldest meteorite whose fall can be dated precisely (to 19 May 861, at Nōgata) Orgueil – A famous meteorite due to its especially primitive nature and high presolar grain content. Sikhote-Alin – Massive iron meteorite impact event that occurred on 12 February 1947. Tucson Ring – Ring-shaped meteorite used by a blacksmith as an anvil in Tucson, Arizona; currently at the Smithsonian. Willamette – The largest meteorite ever found in the United States. 2007 Carancas impact event – On 15 September 2007, a stony meteorite that may have weighed as much as 4000 kilograms created a crater 13 meters in diameter near the village of Carancas, Peru. 2013 Russian meteor event – a 17-metre-diameter, 10,000-ton asteroid hit the atmosphere above Chelyabinsk, Russia, at 18 km/s at around 09:20 local time (03:20 UTC) on 15 February 2013, producing a very bright fireball in the morning sky. A number of small meteorite fragments have since been found nearby. Extraterrestrial Bench Crater meteorite (Apollo 12, 1969) and the Hadley Rille meteorite (Apollo 15, 1971) – Fragments of asteroids found among the samples collected on the Moon. Block Island meteorite and Heat Shield Rock – Discovered on Mars by the Opportunity rover among four other iron meteorites. Two nickel-iron meteorites were identified by the Spirit rover.
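The size-frequency law given in the Frequency section above, N(>D) = 37 D^(−2.7) impacts per year for D in meters, can be inverted to reproduce the day, year, and century figures quoted there. A minimal sketch follows; the coefficient and exponent are taken from that reconstructed formula, so they inherit its uncertainty.

```python
# Invert the impactor size-frequency law N(>D) = 37 * D**-2.7
# (impacts per year, D in meters) from the Frequency section to find
# the largest impactor expected within a given time span.

def impacts_per_year(diameter_m: float) -> float:
    """Expected yearly number of impactors larger than diameter_m."""
    return 37.0 * diameter_m ** -2.7

def largest_expected(years: float) -> float:
    """Diameter D for which exactly one impact larger than D is expected."""
    return (37.0 * years) ** (1.0 / 2.7)

for label, years in (("day", 1 / 365.25), ("year", 1.0), ("century", 100.0)):
    print(f"per {label}: ~{largest_expected(years):.2g} m")
# -> about 0.43 m, 3.8 m, and 21 m respectively
```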
https://en.wikipedia.org/wiki/Motherboard
Motherboard
A motherboard (also called mainboard, main circuit board, MB, mobo, base board, system board, or, in Apple computers, logic board) is the main printed circuit board (PCB) in general-purpose computers and other expandable systems. It holds and allows communication between many of the crucial electronic components of a system, such as the central processing unit (CPU) and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard usually contains significant sub-systems, such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general use. As the name suggests, this board is often referred to as the mother of all components attached to it, which often include peripherals, interface cards, and daughterboards: sound cards, video cards, network cards, host bus adapters, TV tuner cards, IEEE 1394 cards, and a variety of other custom components. By contrast, the term mainboard describes a device with a single board and no additional expansions or capability, such as controlling boards in laser printers, television sets, washing machines, mobile phones, and other embedded systems with limited expansion abilities. History Prior to the invention of the microprocessor, the CPU of a digital computer consisted of multiple circuit boards in a card-cage case with components connected by a backplane containing a set of interconnected sockets into which the circuit boards were plugged. In very old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit (CPU), memory, and peripherals were housed on individual printed circuit boards, which were plugged into the backplane. In older microprocessor-based systems, the CPU and some support circuitry would fit on a single CPU board, with memory and peripherals on additional boards, all plugged into the backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system. The most popular computers of the 1980s, such as the Apple II and IBM PC, had published schematic diagrams and other documentation which permitted rapid reverse engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment. During the late 1980s and early 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard. In the late 1980s, personal computer motherboards began to include single ICs (also called Super I/O chips) capable of supporting a set of low-speed peripherals: PS/2 keyboard and mouse, floppy disk drive, serial ports, and parallel ports. By the late 1990s, many personal computer motherboards included consumer-grade embedded audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retained only the graphics card as a separate component. Business PCs, workstations, and servers were more likely to need expansion cards, either for more robust functions, or for higher speeds; those systems often had fewer embedded components.
Laptop and notebook computers that were developed in the 1990s integrated the most common peripherals. This even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century (like the tablet computer and the netbook). Memory, processors, network controllers, power source, and storage would be integrated into some systems. Design A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it also contains the central processing unit and hosts other subsystems and devices. A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables; in modern microcomputers, it is increasingly common to integrate some of these peripherals into the motherboard itself. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard. Modern motherboards include: CPU sockets (or CPU slots) in which one or more microprocessors may be installed. In the case of CPUs in ball grid array packages, such as the VIA Nano and the Goldmont Plus, the CPU is directly soldered to the motherboard. Memory slots into which the system's main memory is to be installed, typically in the form of DIMM modules containing DRAM chips; these can be DDR3, DDR4, or DDR5, or onboard LPDDRx memory. The chipset which forms an interface between the CPU, main memory, and peripheral buses Non-volatile memory chips (usually flash memory in modern motherboards) containing the system's firmware or BIOS The clock generator which produces the system clock signal to synchronize the various components Slots for expansion cards (the interface to the system via the buses supported by the chipset) Power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards. Some graphics cards (e.g. the GeForce 8 and Radeon R600) require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply. Connectors for hard disk drives, optical disc drives, or solid-state drives, typically SATA and NVMe Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as USB for mouse devices and keyboards. Early personal computers such as the Apple II and IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example, on the Apple II and rarely on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards. Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat. Form factor Motherboards are produced in a variety of sizes and shapes called form factors, some of which are specific to individual computer manufacturers.
However, the motherboards used in IBM-compatible systems are designed to fit various case sizes. Most desktop computer motherboards use the ATX standard form factor — even those found in Macintosh and Sun computers, which have not been built from commodity components. A case's motherboard and power supply unit (PSU) form factors must all match, though some smaller form factor motherboards of the same family will fit larger cases. For example, an ATX case will usually accommodate a microATX motherboard. Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard. CPU sockets A CPU (central processing unit) socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are found in most desktop and server computers (laptops typically use surface mount CPUs), particularly those based on the Intel x86 architecture. A CPU socket type and motherboard chipset must support the CPU series and speed. Integrated peripherals With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers. The integrated peripherals may also be called onboard devices. Disk controllers for SATA drives, and historical PATA drives Historical floppy-disk controller Integrated graphics controller supporting 2D and 3D graphics, with VGA, DVI, HDMI, DisplayPort, and TV output Integrated sound card supporting 8-channel (7.1) audio and S/PDIF output Ethernet network controller for connection to a LAN and to the Internet USB controller Wireless network interface controller Bluetooth controller Temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components. Other onboard devices, such as PMIC Peripheral card slots A typical motherboard will have a different number of connections depending on its standard and form factor. A standard, modern ATX motherboard will typically have two or three PCI-Express x16 connections for a graphics card, one or two legacy PCI slots for various expansion cards, and one or two PCI-E x1 slots (which have superseded PCI). A standard EATX motherboard will have two to four PCI-E x16 connections for graphics cards, and a varying number of PCI and PCI-E x1 slots. It can sometimes also have a PCI-E x4 slot (this varies between brands and models). Some motherboards have two or more PCI-E x16 slots, to allow more than 2 monitors without special hardware, or to use special graphics technologies called SLI (for Nvidia) and CrossFire (for AMD).
These allow 2 to 4 graphics cards to be linked together, to allow better performance in intensive graphical computing tasks, such as gaming, video editing, etc. In newer motherboards, M.2 slots are used for SSDs and/or wireless network interface controllers. Temperature and reliability Motherboards are generally air-cooled, with heat sinks often mounted on larger chips in modern motherboards. Insufficient or improper cooling can cause damage to the internal components of the computer, or cause it to crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on heat sinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional computer fans and integrated temperature sensors to detect motherboard and CPU temperatures and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Alternatively, computers can use a water cooling system instead of many fans. Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fanless designs. This typically requires the use of a low-power CPU, as well as a careful layout of the motherboard and other components to allow for heat sink placement. A 2003 study found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation, an issue termed capacitor plague. Modern motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours of operation at 105 °C, their expected design life roughly doubles for every 10 °C below this. At 65 °C a lifetime of 3 to 4 years can be expected. However, many manufacturers deliver substandard capacitors, which significantly reduce life expectancy. Inadequate case cooling and elevated temperatures around the CPU socket exacerbate this problem. With top blowers, the motherboard components can be kept under 95 °C, effectively doubling the motherboard lifetime. Mid-range and high-end motherboards, on the other hand, use solid capacitors exclusively. For every 10 °C less, their average lifespan is multiplied approximately by three, resulting in a 6-times higher lifetime expectancy at 65 °C. These capacitors may be rated for 5000, 10000 or 12000 hours of operation at 105 °C, extending the projected lifetime in comparison with standard solid capacitors. In desktop PCs and notebook computers, the motherboard cooling and monitoring solutions are usually based on a super I/O chip or an embedded controller. Bootstrapping Motherboards contain a ROM (and later EPROM, EEPROM, NOR flash) that stores firmware that initializes hardware devices and boots an operating system from a peripheral device. The terms bootstrapping and boot come from the phrase "lifting yourself by your bootstraps". Microcomputers such as the Apple II and IBM PC used ROM chips mounted in sockets on the motherboard.
At power-up, the central processing unit would load its program counter with the address of the Boot ROM and start executing instructions from the Boot ROM. These instructions initialized and tested the system hardware, displayed system information on the screen, performed RAM checks, and then attempted to boot an operating system from a peripheral device. If no peripheral device containing an operating system was available, then the computer would perform tasks from other ROM stores or display an error message, depending on the model and design of the computer. For example, both the Apple II and the original IBM PC had Cassette BASIC (ROM BASIC) and would start that if no operating system could be loaded from the floppy disk or hard disk. The boot firmware in modern IBM PC compatible motherboard designs contains either a BIOS, as did the boot ROM on the original IBM PC, or UEFI. UEFI is a successor to BIOS that became popular after Microsoft began requiring it for a system to be certified to run Windows 8. When the computer is powered on, the boot firmware tests and configures memory, circuitry, and peripherals. This Power-On Self Test (POST) may include testing some of the following things: Video card Expansion cards inserted into slots, such as conventional PCI and PCI Express Historical floppy drive Temperatures, voltages, and fan speeds for hardware monitoring CMOS memory used to store BIOS configuration Keyboard and mouse Sound card Network adapter Optical drives: CD-ROM or DVD-ROM Hard disk drive and solid-state drive Security devices, such as a fingerprint reader USB devices, such as a USB mass storage device
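The capacitor-aging discussion under Temperature and reliability above uses the standard rule of thumb that electrolytic-capacitor life doubles for every 10 °C below the rated temperature. A minimal sketch of that estimate follows, using the 2000-hour, 105 °C rating quoted above; the rule is an approximation, not a datasheet guarantee.

```python
# Rough electrolytic-capacitor life estimate: rated life doubles for
# every 10 degrees C below the rated temperature. An approximation
# (the 10-degree doubling rule), not a datasheet guarantee.

HOURS_PER_YEAR = 24 * 365

def capacitor_life_hours(rated_hours: float, rated_temp_c: float,
                         operating_temp_c: float) -> float:
    """Expected life at operating_temp_c for a part rated at rated_temp_c."""
    return rated_hours * 2.0 ** ((rated_temp_c - operating_temp_c) / 10.0)

life = capacitor_life_hours(2000.0, 105.0, 65.0)
print(f"{life:.0f} h = {life / HOURS_PER_YEAR:.1f} years")  # ~32000 h, ~3.7 y
```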
https://en.wikipedia.org/wiki/Maser
Maser
A maser is a device that produces coherent electromagnetic waves (microwaves) through amplification by stimulated emission. The term is an acronym for microwave amplification by stimulated emission of radiation. Nikolay Basov, Alexander Prokhorov and Joseph Weber introduced the concept of the maser in 1952, and Charles H. Townes, James P. Gordon, and Herbert J. Zeiger built the first maser at Columbia University in 1953. Townes, Basov and Prokhorov won the 1964 Nobel Prize in Physics for theoretical work leading to the maser. Masers are used as timekeeping devices in atomic clocks, and as extremely low-noise microwave amplifiers in radio telescopes and deep-space spacecraft communication ground-stations. Modern masers can be designed to generate electromagnetic waves at not only microwave frequencies but also radio and infrared frequencies. For this reason, Townes suggested replacing "microwave" with "molecular" as the first word in the acronym "maser". The laser works by the same principle as the maser, but produces higher-frequency coherent radiation at visible wavelengths. The maser was the precursor to the laser, inspiring theoretical work by Townes and Arthur Leonard Schawlow that led to the invention of the laser in 1960 by Theodore Maiman. When the coherent optical oscillator was first imagined in 1957, it was originally called the "optical maser". This was ultimately changed to laser, for "light amplification by stimulated emission of radiation". Gordon Gould is credited with creating this acronym in 1957. History The theoretical principles governing the operation of a maser were first described by Joseph Weber of the University of Maryland, College Park at the Electron Tube Research Conference in June 1952 in Ottawa, with a summary published in the June 1953 Transactions of the Institute of Radio Engineers Professional Group on Electron Devices, and simultaneously by Nikolay Basov and Alexander Prokhorov from Lebedev Institute of Physics, at an All-Union Conference on Radio-Spectroscopy held by the USSR Academy of Sciences in May 1952, published in October 1954. Independently, Charles Hard Townes, James P. Gordon, and H. J. Zeiger built the first ammonia maser at Columbia University in 1953. This device used stimulated emission in a stream of energized ammonia molecules to produce amplification of microwaves at a frequency of about 24.0 gigahertz. Townes later worked with Arthur L. Schawlow to describe the principle of the optical maser, or laser, of which Theodore H. Maiman created the first working model in 1960. For their research in the field of stimulated emission, Townes, Basov and Prokhorov were awarded the Nobel Prize in Physics in 1964. Technology The maser is based on the principle of stimulated emission proposed by Albert Einstein in 1917. When atoms have been induced into an excited energy state, they can amplify radiation at a frequency particular to the element or molecule used as the masing medium (similar to what occurs in the lasing medium in a laser). By putting such an amplifying medium in a resonant cavity, feedback is created that can produce coherent radiation. Some common types Atomic beam masers Ammonia maser Free electron maser Hydrogen maser Gas masers Rubidium maser Liquid-dye and chemical laser Solid state masers Ruby maser Whispering-gallery modes iron-sapphire maser Dual noble gas maser (a masing medium which is nonpolar)
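Stimulated emission alone cannot amplify: in thermal equilibrium the Boltzmann distribution always leaves the upper level of a transition less populated than the lower one, so absorption dominates, and a maser must pump its medium into a population inversion (the ammonia maser does this by state selection). A minimal sketch of the equilibrium ratio for the roughly 24 GHz ammonia transition used in the first maser follows; the pumping scheme itself is not modeled.

```python
# Thermal-equilibrium population ratio n_upper / n_lower for a
# two-level transition, from the Boltzmann distribution. The ratio is
# always below 1 at equilibrium, which is why a maser needs pumping
# (e.g. state selection of ammonia molecules) to invert the populations.

import math

H = 6.62607015e-34    # Planck constant, J*s
K_B = 1.380649e-23    # Boltzmann constant, J/K

def boltzmann_ratio(frequency_hz: float, temperature_k: float) -> float:
    """n_upper / n_lower for a transition of energy h*f at temperature T."""
    return math.exp(-H * frequency_hz / (K_B * temperature_k))

# ~24 GHz ammonia inversion transition at room temperature:
print(f"{boltzmann_ratio(24.0e9, 300.0):.4f}")  # ~0.9962, so no inversion
```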
21st-century developments In 2012, a research team from the National Physical Laboratory and Imperial College London developed a solid-state maser that operated at room temperature by using optically pumped, pentacene-doped p-Terphenyl as the amplifier medium. It produced pulses of maser emission lasting for a few hundred microseconds. In 2018, a research team from Imperial College London and University College London demonstrated continuous-wave maser oscillation using synthetic diamonds containing nitrogen-vacancy defects. Uses Masers serve as high precision frequency references. These "atomic frequency standards" are one of the many forms of atomic clocks. Masers were also used as low-noise microwave amplifiers in radio telescopes, though these have largely been replaced by amplifiers based on FETs. During the early 1960s, the Jet Propulsion Laboratory developed a maser to provide ultra-low-noise amplification of S-band microwave signals received from deep space probes. This maser used deeply refrigerated helium to chill the amplifier down to a temperature of 4 kelvin. Amplification was achieved by exciting a ruby comb with a 12.0 gigahertz klystron. In the early years, it took days to chill and remove the impurities from the hydrogen lines. Refrigeration was a two-stage process, with a large Linde unit on the ground, and a crosshead compressor within the antenna. The final injection was made through a micrometer-adjustable entry to the chamber. The whole system noise temperature looking at cold sky (2.7 kelvin in the microwave band) was 17 kelvin. This gave such a low noise figure that the Mariner IV space probe could send still pictures from Mars back to the Earth, even though the output power of its radio transmitter was only 15 watts, and hence the total signal power received was only −169 decibels with respect to a milliwatt (dBm). Hydrogen maser The hydrogen maser is used as an atomic frequency standard. Together with other kinds of atomic clocks, these help make up the International Atomic Time standard ("Temps Atomique International" or "TAI" in French). This is the international time scale coordinated by the International Bureau of Weights and Measures. Norman Ramsey and his colleagues first conceived of the maser as a timing standard. More recent masers are practically identical to their original design. Maser oscillations rely on the stimulated emission between two hyperfine energy levels of atomic hydrogen. Here is a brief description of how they work: First, a beam of atomic hydrogen is produced. This is done by submitting the gas at low pressure to a high-frequency radio wave discharge. The next step is "state selection"—in order to get some stimulated emission, it is necessary to create a population inversion of the atoms. This is done in a way that is very similar to the Stern–Gerlach experiment. After passing through an aperture and a magnetic field, many of the atoms in the beam are left in the upper energy level of the lasing transition. From this state, the atoms can decay to the lower state and emit some microwave radiation. A high Q factor (quality factor) microwave cavity confines the microwaves and reinjects them repeatedly into the atom beam. The stimulated emission amplifies the microwaves on each pass through the beam. This combination of amplification and feedback is what defines all oscillators.
The resonant frequency of the microwave cavity is tuned to the frequency of the hyperfine energy transition of hydrogen: 1,420,405,752 hertz. A small fraction of the signal in the microwave cavity is coupled into a coaxial cable and then sent to a coherent radio receiver. The microwave signal coming out of the maser is very weak, a few picowatts. The frequency of the signal is fixed and extremely stable. The coherent receiver is used to amplify the signal and change the frequency. This is done using a series of phase-locked loops and a high-performance quartz oscillator. Astrophysical masers Maser-like stimulated emission has also been observed in nature from interstellar space, and it is frequently called "superradiant emission" to distinguish it from laboratory masers. Such emission is observed from molecules such as water (H2O), hydroxyl radicals (•OH), methanol (CH3OH), formaldehyde (HCHO), silicon monoxide (SiO), and carbodiimide (HNCNH). Water molecules in star-forming regions can undergo a population inversion and emit radiation at about 22.0 GHz, creating the brightest spectral line in the radio universe. Some water masers also emit radiation from a rotational transition at a frequency of 96 GHz. Extremely powerful masers, associated with active galactic nuclei, are known as megamasers and are up to a million times more powerful than stellar masers. Terminology The meaning of the term maser has changed slightly since its introduction. Initially the acronym was universally given as "microwave amplification by stimulated emission of radiation", which described devices which emitted in the microwave region of the electromagnetic spectrum. The principle and concept of stimulated emission has since been extended to more devices and frequencies. Thus, the original acronym is sometimes modified, as suggested by Charles H. Townes, to "molecular amplification by stimulated emission of radiation." Some have asserted that Townes's efforts to extend the acronym in this way were primarily motivated by the desire to increase the importance of his invention, and his reputation in the scientific community. When the laser was developed, Townes and Schawlow and their colleagues at Bell Labs pushed the use of the term optical maser, but this was largely abandoned in favor of laser, coined by their rival Gordon Gould. In modern usage, devices that emit in the X-ray through infrared portions of the spectrum are typically called lasers, and devices that emit in the microwave region and below are commonly called masers, regardless of whether they emit microwaves or other frequencies. Gould originally proposed distinct names for devices that emit in each portion of the spectrum, including grasers (gamma ray lasers), xasers (x-ray lasers), uvasers (ultraviolet lasers), lasers (visible lasers), irasers (infrared lasers), masers (microwave masers), and rasers (RF masers). Most of these terms never caught on, however, and, apart from in science fiction, all except maser and laser are now obsolete.
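As a quick check of two figures quoted above — the 1,420,405,752 Hz hydrogen hyperfine line and the Mariner IV link budget — here is a short, self-contained calculation; the watts-to-dBm conversion formula is standard and not from the article:

```python
import math

c = 299_792_458.0              # speed of light, m/s
f_hyperfine = 1_420_405_752    # hydrogen hyperfine frequency, Hz (quoted above)

wavelength = c / f_hyperfine
print(f"Hydrogen line wavelength: {wavelength * 100:.1f} cm")  # ~21.1 cm

def watts_to_dbm(p_watts):
    """Standard conversion: dBm = 10 * log10(P / 1 mW)."""
    return 10 * math.log10(p_watts / 1e-3)

print(f"15 W transmitter: {watts_to_dbm(15):+.1f} dBm")   # about +41.8 dBm
# The received -169 dBm therefore corresponds to roughly 1.3e-20 W:
print(f"-169 dBm = {1e-3 * 10 ** (-169 / 10):.1e} W")
```

The ~21 cm result is the well-known hydrogen line of radio astronomy.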
Technology
Lasers
null
19980
https://en.wikipedia.org/wiki/Machine%20translation
Machine translation
Machine translation is the use of computational techniques to translate text or speech from one language to another, including the contextual, idiomatic and pragmatic nuances of both languages. Early approaches were mostly rule-based or statistical. These methods have since been superseded by neural machine translation and large language models. History Origins The origins of machine translation can be traced back to the work of Al-Kindi, a ninth-century Arabic cryptographer who developed techniques for systemic language translation, including cryptanalysis, frequency analysis, and probability and statistics, which are used in modern machine translation. The idea of machine translation later appeared in the 17th century. In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol. The idea of using digital computers for translation of natural languages was proposed as early as 1947 by England's A. D. Booth and, in the same year, by Warren Weaver at the Rockefeller Foundation. "The memorandum written by Warren Weaver in 1949 is perhaps the single most influential publication in the earliest days of machine translation." Others followed. A demonstration was made in 1954 on the APEXC machine at Birkbeck College (University of London) of a rudimentary translation of English into French. Several papers on the topic were published at the time, and even articles in popular journals (for example an article by Cleave and Zacharov in the September 1955 issue of Wireless World). A similar application, also pioneered at Birkbeck College at the time, was reading and composing Braille texts by computer. 1950s The first researcher in the field, Yehoshua Bar-Hillel, began his research at MIT (1951). A Georgetown University MT research team, led by Professor Michael Zarechnak, followed (1951) with a public demonstration of its Georgetown-IBM experiment system in 1954. MT research programs popped up in Japan and Russia (1955), and the first MT conference was held in London (1956). David G. Hays "wrote about computer-assisted language processing as early as 1957" and "was project leader on computational linguistics at Rand from 1955 to 1968." 1960–1975 Researchers continued to join the field as the Association for Machine Translation and Computational Linguistics was formed in the U.S. (1962) and the National Academy of Sciences formed the Automatic Language Processing Advisory Committee (ALPAC) to study MT (1964). Real progress was much slower, however, and after the ALPAC report (1966), which found that the ten-year-long research had failed to fulfill expectations, funding was greatly reduced. According to a 1972 report by the Director of Defense Research and Engineering (DDR&E), the feasibility of large-scale MT was reestablished by the success of the Logos MT system in translating military manuals into Vietnamese during that conflict. The French Textile Institute also used MT to translate abstracts from and into French, English, German and Spanish (1970); Brigham Young University started a project to translate Mormon texts by automated translation (1971). 1975 and beyond SYSTRAN, which "pioneered the field under contracts from the U.S. government" in the 1960s, was used by Xerox to translate technical manuals (1978). Beginning in the late 1980s, as computational power increased and became less expensive, more interest was shown in statistical models for machine translation. MT became more popular after the advent of computers.
SYSTRAN's system was first implemented in 1988 by the online service of the French Postal Service called Minitel. Various computer-based translation companies were also launched, including Trados (1984), which was the first to develop and market Translation Memory technology (1989), though this is not the same as MT. The first commercial MT system for Russian / English / German-Ukrainian was developed at Kharkov State University (1991). By 1998, "for as little as $29.95" one could "buy a program for translating in one direction between English and a major European language of your choice" to run on a PC. MT on the web started with SYSTRAN offering free translation of small texts (1996) and then providing this via AltaVista Babelfish, which racked up 500,000 requests a day (1997). The second free translation service on the web was Lernout & Hauspie's GlobaLink. Atlantic Magazine wrote in 1998 that "Systran's Babelfish and GlobaLink's Comprende" handled "Don't bank on it" with a "competent performance." Franz Josef Och (the future head of Translation Development at Google) won DARPA's speed MT competition (2003). More innovations during this time included MOSES, the open-source statistical MT engine (2007), a text/SMS translation service for mobiles in Japan (2008), and a mobile phone with built-in speech-to-speech translation functionality for English, Japanese and Chinese (2009). In 2012, Google announced that Google Translate translates roughly enough text to fill 1 million books in one day. Approaches Before the advent of deep learning methods, statistical methods required a lot of rules accompanied by morphological, syntactic, and semantic annotations. Rule-based The rule-based machine translation approach was used mostly in the creation of dictionaries and grammar programs. Its biggest downfall was that everything had to be made explicit: orthographical variation and erroneous input had to be made part of the source language analyser in order to cope with them, and lexical selection rules had to be written for all instances of ambiguity. Transfer-based machine translation Transfer-based machine translation was similar to interlingual machine translation in that it created a translation from an intermediate representation that simulated the meaning of the original sentence. Unlike interlingual MT, it depended partially on the language pair involved in the translation. Interlingual Interlingual machine translation was one instance of rule-based machine-translation approaches. In this approach, the source language, i.e. the text to be translated, was transformed into an interlingual language, i.e. a "language neutral" representation that is independent of any language. The target language was then generated out of the interlingua. The only interlingual machine translation system that was made operational at the commercial level was the KANT system (Nyberg and Mitamura, 1992), which was designed to translate Caterpillar Technical English (CTE) into other languages. Dictionary-based Machine translation used a method based on dictionary entries, which means that the words were translated as-is, using a dictionary. Statistical Statistical machine translation tried to generate translations using statistical methods based on bilingual text corpora, such as the Canadian Hansard corpus, the English-French record of the Canadian parliament, and EUROPARL, the record of the European Parliament.
Where such corpora were available, good results were achieved translating similar texts, but such corpora were rare for many language pairs. The first statistical machine translation software was CANDIDE from IBM. In 2005, Google improved its internal translation capabilities by using approximately 200 billion words from United Nations materials to train their system; translation accuracy improved. SMT's biggest downfalls included its dependence upon huge amounts of parallel texts, its problems with morphology-rich languages (especially with translating into such languages), and its inability to correct singleton errors. Some work has been done in the utilization of multiparallel corpora, that is, a body of text that has been translated into three or more languages. Using these methods, a text that has been translated into two or more languages may be utilized in combination to provide a more accurate translation into a third language than if just one of those source languages were used alone. Neural MT A deep learning-based approach to MT, neural machine translation has made rapid progress in recent years. However, the current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test benchmarks, i.e., it lacks statistical power. Translations by neural MT tools like DeepL Translator, which is thought to usually deliver the best machine translation results as of 2022, typically still need post-editing by a human. Instead of training specialized translation models on parallel datasets, one can also directly prompt generative large language models like GPT to translate a text. This approach is considered promising, but is still more resource-intensive than specialized translation models. Issues Studies using human evaluation (e.g. by professional literary translators or human readers) have systematically identified various issues with the latest advanced MT outputs. Common issues include the translation of ambiguous parts whose correct translation requires common sense-like semantic language processing or context. There can also be errors in the source texts and missing high-quality training data, and the severity and frequency of several types of problems may not be reduced with the techniques used to date, requiring some level of active human participation. Disambiguation Word-sense disambiguation concerns finding a suitable translation when a word can have more than one meaning. The problem was first raised in the 1950s by Yehoshua Bar-Hillel. He pointed out that without a "universal encyclopedia", a machine would never be able to distinguish between the two meanings of a word. Today there are numerous approaches designed to overcome this problem. They can be approximately divided into "shallow" approaches and "deep" approaches. Shallow approaches assume no knowledge of the text. They simply apply statistical methods to the words surrounding the ambiguous word. Deep approaches presume a comprehensive knowledge of the word. So far, shallow approaches have been more successful.
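As a toy illustration of the "shallow" approach just described — scoring candidate senses by counts of nearby context words — consider the following sketch. The senses and cue words here are invented for the example; real systems derive them statistically from corpora:

```python
# Toy "shallow" word-sense disambiguation: pick the sense whose cue words
# best overlap the context window. Senses and cues are invented for illustration.

SENSES = {
    "bank/finance": {"money", "deposit", "loan", "account"},
    "bank/river":   {"water", "shore", "fishing", "mud", "muddy"},
}

def disambiguate(sentence, window=4):
    words = sentence.lower().split()
    i = words.index("bank")
    context = set(words[max(0, i - window): i + window + 1])
    # Score each sense by the number of cue words present in the window.
    scores = {sense: len(cues & context) for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate("she opened a deposit account at the bank"))  # bank/finance
print(disambiguate("they went fishing on the muddy bank"))       # bank/river
```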
Claude Piron, a long-time translator for the United Nations and the World Health Organization, wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved. The ideal deep approach would require the translation software to do all the research necessary for this kind of disambiguation on its own; but this would require a higher degree of AI than has yet been attained. A shallow approach which simply guessed at the sense of the ambiguous English phrase that Piron mentions (based, perhaps, on which kind of prisoner-of-war camp is more often mentioned in a given corpus) would have a reasonable chance of guessing wrong fairly often. A shallow approach that involves "ask the user about each ambiguity" would, by Piron's estimate, only automate about 25% of a professional translator's job, leaving the harder 75% still to be done by a human. Non-standard speech One of the major pitfalls of MT is its inability to translate non-standard language with the same accuracy as standard language. Heuristic or statistical based MT takes input from various sources in the standard form of a language. Rule-based translation, by nature, does not include common non-standard usages. This causes errors in translation from a vernacular source or into colloquial language. Limitations on translation from casual speech present issues in the use of machine translation in mobile devices. Named entities In information extraction, named entities, in a narrow sense, refer to concrete or abstract entities in the real world such as people, organizations, companies, and places that have a proper name: George Washington, Chicago, Microsoft. It also refers to expressions of time, space and quantity such as 1 July 2011, $500. In the sentence "Smith is the president of Fabrionix" both Smith and Fabrionix are named entities, and can be further qualified via first name or other information; "president" is not, since Smith could have earlier held another position at Fabrionix, e.g. Vice President. The term rigid designator is what defines these usages for analysis in statistical machine translation. Named entities must first be identified in the text; if not, they may be erroneously translated as common nouns, which would most likely not affect the BLEU rating of the translation but would change the text's human readability. They may be omitted from the output translation, which would also have implications for the text's readability and message. Transliteration involves finding the letters in the target language that most closely correspond to the name in the source language. This, however, has been cited as sometimes worsening the quality of translation. For "Southern California" the first word should be translated directly, while the second word should be transliterated. Machines often transliterate both because they treat them as one entity. Words like these are hard for machine translators, even those with a transliteration component, to process. Use of a "do-not-translate" list, which has the same end goal (transliteration as opposed to translation), still relies on correct identification of named entities. A third approach is a class-based model. Named entities are replaced with a token to represent their "class"; "Ted" and "Erica" would both be replaced with a "person" class token.
Then the statistical distribution and use of person names, in general, can be analyzed instead of looking at the distributions of "Ted" and "Erica" individually, so that the probability of a given name in a specific language will not affect the assigned probability of a translation. A study by Stanford on improving this area of translation gives the example that different probabilities will be assigned to "David is going for a walk" and "Ankit is going for a walk" for English as a target language due to the different number of occurrences for each name in the training data. A frustrating outcome of the same study by Stanford (and other attempts to improve named entity translation) is that many times, a decrease in the BLEU scores for translation will result from the inclusion of methods for named entity translation. Applications While no system provides the ideal of fully automatic high-quality machine translation of unrestricted text, many fully automated systems produce reasonable output. The quality of machine translation is substantially improved if the domain is restricted and controlled. This enables using machine translation as a tool to speed up and simplify translations, as well as producing flawed but useful low-cost or ad-hoc translations. Travel Machine translation applications have also been released for most mobile devices, including mobile telephones, pocket PCs, PDAs, etc. Due to their portability, such instruments have come to be designated as mobile translation tools enabling mobile business networking between partners speaking different languages, or facilitating both foreign language learning and unaccompanied traveling to foreign countries without the need of the intermediation of a human translator. For example, the Google Translate app allows foreigners to quickly translate text in their surroundings via augmented reality, using the smartphone camera to overlay the translated text onto the original text. It can also recognize speech and then translate it. Public administration Despite their inherent limitations, MT programs are used around the world. Probably the largest institutional user is the European Commission. In 2012, with an aim to replace rule-based MT with the newer, statistics-based MT@EC, the European Commission contributed 3.072 million euros (via its ISA programme). Wikipedia Machine translation has also been used for translating Wikipedia articles and could play a larger role in creating, updating, expanding, and generally improving articles in the future, especially as the MT capabilities may improve. There is a "content translation tool" which allows editors to more easily translate articles across several select languages. English-language articles are thought to usually be more comprehensive and less biased than their non-translated equivalents in other languages. As of 2022, English Wikipedia has over 6.5 million articles while the German and Swedish Wikipedias each have only somewhat over 2.5 million articles, which are often far less comprehensive. Surveillance and military Following terrorist attacks in Western countries, including the September 11 attacks, the U.S. and its allies have been most interested in developing Arabic machine translation programs, and also in translating the Pashto and Dari languages. Within these languages, the focus is on key phrases and quick communication between military members and civilians through the use of mobile phone apps. The Information Processing Technology Office in DARPA hosted programs like TIDES and Babylon translator.
The US Air Force has awarded a $1 million contract to develop a language translation technology. Social media The notable rise of social networking on the web in recent years has created yet another niche for the application of machine translation software – in utilities such as Facebook, or instant messaging clients such as Skype, Google Talk, MSN Messenger, etc. – allowing users speaking different languages to communicate with each other. Online games Lineage W gained popularity in Japan because of its machine translation features allowing players from different countries to communicate. Medicine Despite being labelled as an unworthy competitor to human translation in 1966 by the Automated Language Processing Advisory Committee put together by the United States government, the quality of machine translation has now been improved to such levels that its application in online collaboration and in the medical field is being investigated. The application of this technology in medical settings where human translators are absent is another topic of research, but difficulties arise due to the importance of accurate translations in medical diagnoses. Researchers caution that the use of machine translation in medicine could risk mistranslations that can be dangerous in critical situations. Machine translation can make it easier for doctors to communicate with their patients in day-to-day activities, but it is recommended to use machine translation only when there is no other alternative, and that translated medical texts be reviewed by human translators for accuracy. Law Legal language poses a significant challenge to machine translation tools due to its precise nature and atypical use of normal words. For this reason, specialized algorithms have been developed for use in legal contexts. Due to the risk of mistranslations arising from machine translators, researchers recommend that machine translations should be reviewed by human translators for accuracy, and some courts prohibit its use in formal proceedings. The use of machine translation in law has raised concerns about translation errors and client confidentiality. Lawyers who use free translation tools such as Google Translate may accidentally violate client confidentiality by exposing private information to the providers of the translation tools. In addition, there have been arguments that consent for a police search that is obtained with machine translation is invalid, with different courts issuing different verdicts over whether or not these arguments are valid. Ancient languages The advancements in convolutional neural networks in recent years and in low-resource machine translation (when only a very limited amount of data and examples are available for training) have enabled machine translation for ancient languages, such as Akkadian and its dialects Babylonian and Assyrian. Evaluation There are many factors that affect how machine translation systems are evaluated. These factors include the intended use of the translation, the nature of the machine translation software, and the nature of the translation process. Different programs may work well for different purposes. For example, statistical machine translation (SMT) typically outperforms example-based machine translation (EBMT), but researchers found that when evaluating English to French translation, EBMT performs better. The same concept applies for technical documents, which can be more easily translated by SMT because of their formal language.
In certain applications, however, e.g., product descriptions written in a controlled language, a dictionary-based machine-translation system has produced satisfactory translations that require no human intervention save for quality inspection. There are various means for evaluating the output quality of machine translation systems. The oldest is the use of human judges to assess a translation's quality. Even though human evaluation is time-consuming, it is still the most reliable method to compare different systems such as rule-based and statistical systems. Automated means of evaluation include BLEU, NIST, METEOR, and LEPOR. Relying exclusively on unedited machine translation ignores the fact that communication in human language is context-embedded and that it takes a person to comprehend the context of the original text with a reasonable degree of probability. It is certainly true that even purely human-generated translations are prone to error. Therefore, to ensure that a machine-generated translation will be useful to a human being and that publishable-quality translation is achieved, such translations must be reviewed and edited by a human. The late Claude Piron wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved. Such research is a necessary prelude to the pre-editing necessary in order to provide input for machine-translation software such that the output will not be meaningless. In addition to disambiguation problems, decreased accuracy can occur due to varying levels of training data for machine translating programs. Both example-based and statistical machine translation rely on a vast array of real example sentences as a base for translation, and when too many or too few sentences are analyzed, accuracy is jeopardized. Researchers found that when a program is trained on 203,529 sentence pairings, accuracy actually decreases. The optimal level of training data seems to be just over 100,000 sentences, possibly because as training data increases, the number of possible sentences increases, making it harder to find an exact translation match. Flaws in machine translation have been noted for their entertainment value. Two videos uploaded to YouTube in April 2017 involve two Japanese hiragana characters えぐ (e and gu) being repeatedly pasted into Google Translate, with the resulting translations quickly degrading into nonsensical phrases such as "DECEARING EGG" and "Deep-sea squeeze trees", which are then read in increasingly absurd voices; the full-length version of the video currently has 6.9 million views. Machine translation and signed languages In the early 2000s, options for machine translation between spoken and signed languages were severely limited. It was a common belief that deaf individuals could use traditional translators. However, stress, intonation, pitch, and timing are conveyed much differently in spoken languages compared to signed languages. Therefore, a deaf individual may misinterpret or become confused about the meaning of written text that is based on a spoken language. Researchers Zhao et al. (2000) developed a prototype called TEAM (translation from English to ASL by machine) that completed English to American Sign Language (ASL) translations.
The program would first analyze the syntactic, grammatical, and morphological aspects of the English text. Following this step, the program accessed a sign synthesizer, which acted as a dictionary for ASL. This synthesizer housed the process one must follow to complete ASL signs, as well as the meanings of these signs. Once the entire text was analyzed and the signs necessary to complete the translation were located in the synthesizer, a computer-generated human appeared and would use ASL to sign the English text to the user. Copyright Only works that are original are subject to copyright protection, so some scholars claim that machine translation results are not entitled to copyright protection because MT does not involve creativity. The copyright at issue is for a derivative work; the author of the original work in the original language does not lose his rights when a work is translated: a translator must have permission to publish a translation.
Technology
Artificial intelligence concepts
null
20018
https://en.wikipedia.org/wiki/Metric%20space
Metric space
In mathematics, a metric space is a set together with a notion of distance between its elements, usually called points. The distance is measured by a function called a metric or distance function. Metric spaces are the most general setting for studying many of the concepts of mathematical analysis and geometry. The most familiar example of a metric space is 3-dimensional Euclidean space with its usual notion of distance. Other well-known examples are a sphere equipped with the angular distance and the hyperbolic plane. A metric may correspond to a metaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with the Hamming distance, which measures the number of characters that need to be changed to get from one string to another. Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, including Riemannian manifolds, normed vector spaces, and graphs. In abstract algebra, the p-adic numbers arise as elements of the completion of a metric structure on the rational numbers. Metric spaces are also studied in their own right in metric geometry and analysis on metric spaces. Many of the basic notions of mathematical analysis, including balls, completeness, as well as uniform, Lipschitz, and Hölder continuity, can be defined in the setting of metric spaces. Other notions, such as continuity, compactness, and open and closed sets, can be defined for metric spaces, but also in the even more general setting of topological spaces. Definition and illustration Motivation To see the utility of different notions of distance, consider the surface of the Earth as a set of points. We can measure the distance between two such points by the length of the shortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural in seismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points. The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts. Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as with Wasserstein metrics on spaces of measures) or the degree of difference between two objects (for example, the Hamming distance between two strings of characters, or the Gromov–Hausdorff distance between metric spaces themselves). 
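To make the Hamming-distance example above concrete, here is a minimal implementation for equal-length strings (an illustrative sketch, not tied to any particular library):

```python
def hamming(s, t):
    """Number of positions at which two equal-length strings differ."""
    if len(s) != len(t):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(a != b for a, b in zip(s, t))

print(hamming("karolin", "kathrin"))  # 3
print(hamming("10110", "10011"))      # 2
```

It is straightforward to check that this function satisfies all the metric axioms on the set of strings of a fixed length.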
Definition Formally, a metric space is an ordered pair $(M,d)$ where $M$ is a set and $d$ is a metric on $M$, i.e., a function $d \colon M \times M \to \mathbb{R}$ satisfying the following axioms for all points $x,y,z \in M$: The distance from a point to itself is zero: $d(x,x)=0$. (Positivity) The distance between two distinct points is always positive: if $x \neq y$, then $d(x,y)>0$. (Symmetry) The distance from $x$ to $y$ is always the same as the distance from $y$ to $x$: $d(x,y)=d(y,x)$. The triangle inequality holds: $d(x,z) \leq d(x,y)+d(y,z)$. This is a natural property of both physical and metaphorical notions of distance: you can arrive at $z$ from $x$ by taking a detour through $y$, but this will not make your journey any shorter than the direct path. If the metric $d$ is unambiguous, one often refers by abuse of notation to "the metric space $M$". By taking all axioms except the second, one can show that distance is always non-negative: $0 = d(x,x) \leq d(x,y)+d(y,x) = 2d(x,y)$. Therefore the second axiom can be weakened to "if $x \neq y$, then $d(x,y) \neq 0$" and combined with the first to make $d(x,y)=0$ if and only if $x=y$. Simple examples The real numbers The real numbers with the distance function $d(x,y)=|x-y|$ given by the absolute difference form a metric space. Many properties of metric spaces and functions between them are generalizations of concepts in real analysis and coincide with those concepts when applied to the real line. Metrics on Euclidean spaces The Euclidean plane $\mathbb{R}^2$ can be equipped with many different metrics. The Euclidean distance familiar from school mathematics can be defined by $d_2(p,q)=\sqrt{(p_1-q_1)^2+(p_2-q_2)^2}.$ The taxicab or Manhattan distance is defined by $d_1(p,q)=|p_1-q_1|+|p_2-q_2|$ and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article. The maximum, $L^\infty$, or Chebyshev distance is defined by $d_\infty(p,q)=\max\{|p_1-q_1|,|p_2-q_2|\}.$ This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms. It can be thought of similarly to the number of moves a king would have to make on a chess board to travel from one point to another on the given space. In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too. This observation can be quantified with the formula $d_\infty(p,q) \leq d_2(p,q) \leq d_1(p,q) \leq 2d_\infty(p,q),$ which holds for every pair of points $p,q$ (spot-checked numerically in the sketch below). A radically different distance can be defined by setting $d(p,q)=0$ if $p=q$ and $d(p,q)=1$ otherwise. Using Iverson brackets, $d(p,q)=[p \neq q].$ In this discrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points. All of these metrics make sense on $\mathbb{R}^n$ as well as $\mathbb{R}^2$. Subspaces Given a metric space $(M,d)$ and a subset $A \subseteq M$, we can consider $A$ to be a metric space by measuring distances the same way we would in $M$. Formally, the induced metric on $A$ is a function $d_A \colon A \times A \to \mathbb{R}$ defined by $d_A(a_1,a_2)=d(a_1,a_2).$ For example, if we take the two-dimensional sphere $S^2$ as a subset of $\mathbb{R}^3$, the Euclidean metric on $\mathbb{R}^3$ induces the straight-line metric on $S^2$ described above. Two more useful examples are the open interval $(0,1)$ and the closed interval $[0,1]$ thought of as subspaces of the real line. History Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space. His distance was given by the logarithm of a cross ratio. Any projectivity leaving the conic stable also leaves the cross ratio constant, so isometries are implicit. This method provides models for elliptic geometry and hyperbolic geometry, and Felix Klein, in several publications, established the field of non-euclidean geometry through the use of the Cayley–Klein metric.
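The following sketch implements the three plane metrics $d_1$, $d_2$, and $d_\infty$ defined above and spot-checks the comparison inequality $d_\infty \leq d_2 \leq d_1 \leq 2d_\infty$ on random points (the function names are ours; the small tolerance guards against floating-point rounding):

```python
import math
import random

def d1(p, q):    # taxicab / Manhattan distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d2(p, q):    # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dinf(p, q):  # Chebyshev / maximum distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert dinf(p, q) <= d2(p, q) <= d1(p, q) <= 2 * dinf(p, q) + 1e-12
print("d_inf <= d_2 <= d_1 <= 2*d_inf verified on 1000 random pairs")
```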
The idea of an abstract space with metric properties was addressed in 1906 by René Maurice Fréchet and the term metric space was coined by Felix Hausdorff in 1914. Fréchet's work laid the foundation for understanding convergence, continuity, and other key concepts in non-geometric spaces. This allowed mathematicians to study functions and sequences in a broader and more flexible way. This was important for the growing field of functional analysis. Mathematicians like Hausdorff and Stefan Banach further refined and expanded the framework of metric spaces. Hausdorff introduced topological spaces as a generalization of metric spaces. Banach's work in functional analysis heavily relied on the metric structure. Over time, metric spaces became a central part of modern mathematics. They have influenced various fields including topology, geometry, and applied mathematics. Metric spaces continue to play a crucial role in the study of abstract mathematical concepts. Basic notions A distance function is enough to define notions of closeness and convergence that were first developed in real analysis. Properties that depend on the structure of a metric space are referred to as metric properties. Every metric space is also a topological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are really topological properties. The topology of a metric space For any point $x$ in a metric space $M$ and any real number $r>0$, the open ball of radius $r$ around $x$ is defined to be the set of points that are strictly less than distance $r$ from $x$: $B_r(x)=\{y \in M : d(x,y)<r\}.$ This is a natural way to define a set of points that are relatively close to $x$. Therefore, a set $N \subseteq M$ is a neighborhood of $x$ (informally, it contains all points "close enough" to $x$) if it contains an open ball of radius $r$ around $x$ for some $r>0$. An open set is a set which is a neighborhood of all its points. It follows that the open balls form a base for a topology on $M$. In other words, the open sets of $M$ are exactly the unions of open balls. As in any topology, closed sets are the complements of open sets. Sets may be both open and closed as well as neither open nor closed. This topology does not carry all the information about the metric space. For example, the distances $d_1$, $d_2$, and $d_\infty$ defined above all induce the same topology on $\mathbb{R}^2$, although they behave differently in many respects. Similarly, $\mathbb{R}$ with the Euclidean metric and its subspace the interval $(0,1)$ with the induced metric are homeomorphic but have very different metric properties. Conversely, not every topological space can be given a metric. Topological spaces which are compatible with a metric are called metrizable and are particularly well-behaved in many ways: in particular, they are paracompact Hausdorff spaces (hence normal) and first-countable. The Nagata–Smirnov metrization theorem gives a characterization of metrizability in terms of other topological properties, without reference to metrics. Convergence Convergence of sequences in Euclidean space is defined as follows: A sequence $(x_n)$ converges to a point $x$ if for every $\varepsilon>0$ there is an integer $N$ such that for all $n>N$, $d(x_n,x)<\varepsilon$. Convergence of sequences in a topological space is defined as follows: A sequence $(x_n)$ converges to a point $x$ if for every open set $U$ containing $x$ there is an integer $N$ such that for all $n>N$, $x_n \in U$. In metric spaces, both of these definitions make sense and they are equivalent.
This is a general pattern for topological properties of metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis. Completeness Informally, a metric space is complete if it has no "missing points": every sequence that looks like it should converge to something actually converges. To make this precise: a sequence $(x_n)$ in a metric space $M$ is Cauchy if for every $\varepsilon>0$ there is an integer $N$ such that for all $m,n>N$, $d(x_m,x_n)<\varepsilon$. By the triangle inequality, any convergent sequence is Cauchy: if $x_m$ and $x_n$ are both less than $\varepsilon$ away from the limit, then they are less than $2\varepsilon$ away from each other. If the converse is true—every Cauchy sequence in $M$ converges—then $M$ is complete. Euclidean spaces are complete, as is $\mathbb{R}^2$ with the other metrics described above. Two examples of spaces which are not complete are $(0,1)$ and the rationals, each with the metric induced from $\mathbb{R}$. One can think of $(0,1)$ as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it in $\mathbb{R}$ (for example, its successive decimal approximations). These examples show that completeness is not a topological property, since $\mathbb{R}$ is complete but the homeomorphic space $(0,1)$ is not. This notion of "missing points" can be made precise. In fact, every metric space has a unique completion, which is a complete space that contains the given space as a dense subset. For example, $[0,1]$ is the completion of $(0,1)$, and the real numbers are the completion of the rationals. Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, the p-adic numbers are defined as the completion of the rationals under a different metric. Completion is particularly common as a tool in functional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example, weak solutions to differential equations typically live in a completion (a Sobolev space) rather than the original space of nice functions for which the differential equation actually makes sense. Bounded and totally bounded spaces A metric space $M$ is bounded if there is an $r$ such that no pair of points in $M$ is more than distance $r$ apart. The least such $r$ is called the diameter of $M$. The space $M$ is called precompact or totally bounded if for every $r>0$ there is a finite cover of $M$ by open balls of radius $r$. Every totally bounded space is bounded. To see this, start with a finite cover by $r$-balls for some arbitrary $r$. Since the subset of $M$ consisting of the centers of these balls is finite, it has finite diameter, say $D$. By the triangle inequality, the diameter of the whole space is at most $D+2r$. The converse does not hold: an example of a metric space that is bounded but not totally bounded is $\mathbb{R}$ (or any other infinite set) with the discrete metric. Compactness Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces: A metric space $M$ is compact if every open cover has a finite subcover (the usual topological definition). A metric space $M$ is compact if every sequence has a convergent subsequence.
(For general topological spaces this is called sequential compactness and is not equivalent to compactness.) A metric space $M$ is compact if it is complete and totally bounded. (This definition is written in terms of metric properties and does not make sense for a general topological space, but it is nevertheless topologically invariant since it is equivalent to compactness.) One example of a compact space is the closed interval $[0,1]$. Compactness is important for similar reasons to completeness: it makes it easy to find limits. Another important tool is Lebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover. Functions between metric spaces Unlike in the case of topological spaces or algebraic structures such as groups or rings, there is no single "right" type of structure-preserving function between metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that $(M_1,d_1)$ and $(M_2,d_2)$ are two metric spaces. The words "function" and "map" are used interchangeably. Isometries One interpretation of a "structure-preserving" map is one that fully preserves the distance function: A function $f \colon M_1 \to M_2$ is distance-preserving if for every pair of points $x$ and $y$ in $M_1$, $d_2(f(x),f(y))=d_1(x,y).$ It follows from the metric space axioms that a distance-preserving function is injective. A bijective distance-preserving function is called an isometry. One perhaps non-obvious example of an isometry between spaces described in this article is the map $f \colon (\mathbb{R}^2,d_1) \to (\mathbb{R}^2,d_\infty)$ defined by $f(x,y)=(x+y,x-y).$ If there is an isometry between the spaces $M_1$ and $M_2$, they are said to be isometric. Metric spaces that are isometric are essentially identical. Continuous maps On the other end of the spectrum, one can forget entirely about the metric structure and study continuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are: Topological definition. A function $f \colon M_1 \to M_2$ is continuous if for every open set $U$ in $M_2$, the preimage $f^{-1}(U)$ is open. Sequential continuity. A function $f \colon M_1 \to M_2$ is continuous if whenever a sequence $(x_n)$ converges to a point $x$ in $M_1$, the sequence $(f(x_n))$ converges to the point $f(x)$ in $M_2$. (These first two definitions are not equivalent for all topological spaces.) ε–δ definition. A function $f \colon M_1 \to M_2$ is continuous if for every point $x$ in $M_1$ and every $\varepsilon>0$ there exists $\delta>0$ such that for all $y$ in $M_1$ we have $d_1(x,y)<\delta \implies d_2(f(x),f(y))<\varepsilon.$ A homeomorphism is a continuous bijection whose inverse is also continuous; if there is a homeomorphism between $M_1$ and $M_2$, they are said to be homeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example, $\mathbb{R}$ is unbounded and complete, while $(0,1)$ is bounded but not complete. Uniformly continuous maps A function $f \colon M_1 \to M_2$ is uniformly continuous if for every real number $\varepsilon>0$ there exists $\delta>0$ such that for all points $x$ and $y$ in $M_1$ such that $d_1(x,y)<\delta$, we have $d_2(f(x),f(y))<\varepsilon.$ The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the point $x$. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences in $M_1$ to Cauchy sequences in $M_2$. In other words, uniform continuity preserves some metric properties which are not purely topological. On the other hand, the Heine–Cantor theorem states that if $M_1$ is compact, then every continuous map is uniformly continuous. In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces.
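A small numerical illustration of the claim above that uniformly continuous maps take Cauchy sequences to Cauchy sequences, while merely continuous maps need not: on $(0,1)$, $f(x)=x^2$ is uniformly continuous, whereas $g(x)=1/x$ is continuous but not uniformly continuous and sends the Cauchy sequence $x_n = 1/n$ to a divergent one. (The specific functions are our choice of example.)

```python
# x_n = 1/n is Cauchy in (0,1). Under g(x) = 1/x (continuous but not
# uniformly continuous on (0,1)) the image terms stay about 1 apart;
# under f(x) = x**2 (uniformly continuous) the image gaps shrink to 0.

xs = [1 / n for n in range(1, 11)]

f_gaps = [abs(xs[i] ** 2 - xs[i + 1] ** 2) for i in range(len(xs) - 1)]
g_gaps = [abs(1 / xs[i] - 1 / xs[i + 1]) for i in range(len(xs) - 1)]

print("gaps of f(x_n) = x_n^2:", [round(g, 4) for g in f_gaps])  # shrink to 0
print("gaps of g(x_n) = 1/x_n:", [round(g, 4) for g in g_gaps])  # stay near 1.0
```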
Lipschitz maps and contractions A Lipschitz map is one that stretches distances by at most a bounded factor. Formally, given a real number $K>0$, the map $f \colon M_1 \to M_2$ is $K$-Lipschitz if $d_2(f(x),f(y)) \leq K\,d_1(x,y) \quad \text{for all } x,y \in M_1.$ Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric. For example, a curve in a metric space is rectifiable (has finite length) if and only if it has a Lipschitz reparametrization. A 1-Lipschitz map is sometimes called a nonexpanding or metric map. Metric maps are commonly taken to be the morphisms of the category of metric spaces. A $K$-Lipschitz map for $K<1$ is called a contraction. The Banach fixed-point theorem states that if $M$ is a complete metric space, then every contraction $f \colon M \to M$ admits a unique fixed point. If the metric space $M$ is compact, the result holds for a slightly weaker condition on $f$: a map $f \colon M \to M$ admits a unique fixed point if $d(f(x),f(y)) < d(x,y)$ for all $x \neq y$ in $M$. Quasi-isometries A quasi-isometry is a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example, $\mathbb{R}$ and its subspace $\mathbb{Z}$ are quasi-isometric, even though one is connected and the other is discrete. The equivalence relation of quasi-isometry is important in geometric group theory: the Švarc–Milnor lemma states that all spaces on which a group acts geometrically are quasi-isometric. Formally, the map $f \colon M_1 \to M_2$ is a quasi-isometric embedding if there exist constants $A \geq 1$ and $B \geq 0$ such that $\frac{1}{A} d_1(x,y) - B \leq d_2(f(x),f(y)) \leq A\,d_1(x,y) + B \quad \text{for all } x,y \in M_1.$ It is a quasi-isometry if in addition it is quasi-surjective, i.e. there is a constant $C \geq 0$ such that every point in $M_2$ is at distance at most $C$ from some point in the image $f(M_1)$. Notions of metric space equivalence Given two metric spaces $(M_1,d_1)$ and $(M_2,d_2)$: They are called homeomorphic (topologically isomorphic) if there is a homeomorphism between them (i.e., a continuous bijection with a continuous inverse). If $M_1=M_2$ and the identity map is a homeomorphism, then $d_1$ and $d_2$ are said to be topologically equivalent. They are called uniformic (uniformly isomorphic) if there is a uniform isomorphism between them (i.e., a uniformly continuous bijection with a uniformly continuous inverse). They are called bilipschitz homeomorphic if there is a bilipschitz bijection between them (i.e., a Lipschitz bijection with a Lipschitz inverse). They are called isometric if there is a (bijective) isometry between them. In this case, the two metric spaces are essentially identical. They are called quasi-isometric if there is a quasi-isometry between them. Metric spaces with additional structure Normed vector spaces A normed vector space is a vector space equipped with a norm, which is a function that measures the length of vectors. The norm of a vector $v$ is typically denoted by $\lVert v \rVert$. Any normed vector space can be equipped with a metric in which the distance between two vectors $x$ and $y$ is given by $d(x,y)=\lVert x-y \rVert.$ The metric $d$ is said to be induced by the norm $\lVert \cdot \rVert$. Conversely, if a metric $d$ on a vector space $X$ is translation invariant: $d(x,y)=d(x+a,y+a)$ for every $x$, $y$, and $a$ in $X$; and absolutely homogeneous: $d(\alpha x,\alpha y)=|\alpha|\,d(x,y)$ for every $x$ and $y$ in $X$ and real number $\alpha$; then it is the metric induced by the norm $\lVert x \rVert = d(x,0).$ A similar relationship holds between seminorms and pseudometrics. Among examples of metrics induced by a norm are the metrics $d_1$, $d_2$, and $d_\infty$ on $\mathbb{R}^2$, which are induced by the Manhattan norm, the Euclidean norm, and the maximum norm, respectively. More generally, the Kuratowski embedding allows one to see any metric space as a subspace of a normed vector space. Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied in functional analysis.
Completeness is particularly important in this context: a complete normed vector space is known as a Banach space. An unusual property of normed vector spaces is that linear transformations between them are continuous if and only if they are Lipschitz. Such transformations are known as bounded operators. Length spaces A curve in a metric space $(M,d)$ is a continuous function $\gamma \colon [0,T] \to M$. The length of $\gamma$ is measured by $L(\gamma)=\sup_{0=x_0<x_1<\cdots<x_n=T} \sum_{i=1}^{n} d(\gamma(x_{i-1}),\gamma(x_i)).$ In general, this supremum may be infinite; a curve of finite length is called rectifiable. Suppose that the length of the curve $\gamma$ is equal to the distance between its endpoints—that is, it is the shortest possible path between its endpoints. After reparametrization by arc length, $\gamma$ becomes a geodesic: a curve which is a distance-preserving function. A geodesic is a shortest possible path between any two of its points. A geodesic metric space is a metric space which admits a geodesic between any two of its points. The spaces $(\mathbb{R}^2,d_2)$ and $(\mathbb{R}^2,d_1)$ are both geodesic metric spaces. In $(\mathbb{R}^2,d_2)$, geodesics are unique, but in $(\mathbb{R}^2,d_1)$, there are often infinitely many geodesics between two points, as shown in the figure at the top of the article. The space $M$ is a length space (or the metric $d$ is intrinsic) if the distance between any two points $x$ and $y$ is the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points $(1,0)$ and $(-1,0)$ can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface. Given any metric space $(M,d)$, one can define a new, intrinsic distance function $\hat{d}$ on $M$ by setting the distance between points $x$ and $y$ to be the infimum of the $d$-lengths of paths between them. For instance, if $d$ is the straight-line distance on the sphere, then $\hat{d}$ is the great-circle distance. However, in some cases $\hat{d}$ may have infinite values. For example, if $M$ is the Koch snowflake with the subspace metric induced from $\mathbb{R}^2$, then the resulting intrinsic distance is infinite for any pair of distinct points. Riemannian manifolds A Riemannian manifold is a space equipped with a Riemannian metric tensor, which determines lengths of tangent vectors at every point. This can be thought of as defining a notion of distance infinitesimally. In particular, a differentiable path $\gamma \colon [0,T] \to M$ in a Riemannian manifold $M$ has length defined as the integral of the length of the tangent vector to the path: $L(\gamma)=\int_0^T |\dot{\gamma}(t)|\,dt.$ On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such as sub-Riemannian and Finsler metrics. The Riemannian metric is uniquely determined by the distance function; this means that in principle, all information about a Riemannian manifold can be recovered from its distance function. One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. For example, a Riemannian manifold is a $\mathrm{CAT}(k)$ space (a synthetic condition which depends purely on the metric) if and only if its sectional curvature is bounded above by $k$. Thus $\mathrm{CAT}(k)$ spaces generalize upper curvature bounds to general metric spaces. Metric measure spaces Real analysis makes use of both the metric on $\mathbb{R}$ and the Lebesgue measure.
Therefore, generalizations of many ideas from analysis naturally reside in metric measure spaces: spaces that have both a measure and a metric which are compatible with each other. Formally, a metric measure space is a metric space equipped with a Borel regular measure such that every ball has positive measure. For example Euclidean spaces of dimension $n$, and more generally $n$-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with the Lebesgue measure. Certain fractal metric spaces such as the Sierpiński gasket can be equipped with the α-dimensional Hausdorff measure where α is the Hausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure. One application of metric measure spaces is generalizing the notion of Ricci curvature beyond Riemannian manifolds. Just as $\mathrm{CAT}(k)$ and Alexandrov spaces generalize sectional curvature bounds, RCD spaces are a class of metric measure spaces which generalize lower bounds on Ricci curvature. Further examples and applications Graphs and finite metric spaces A metric space is discrete if its induced topology is the discrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular, finite metric spaces (those having a finite number of points) are studied in combinatorics and theoretical computer science. Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can be isometrically embedded in a Euclidean space or in Hilbert space. On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points. For any undirected connected graph $G$, the set $V$ of vertices of $G$ can be turned into a metric space by defining the distance between vertices $x$ and $y$ to be the length of the shortest edge path connecting them. This is also called shortest-path distance or geodesic distance. In geometric group theory this construction is applied to the Cayley graph of a (typically infinite) finitely-generated group, yielding the word metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set. Metric embeddings and approximations An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics. A significant result in this area is that any finite metric space can be probabilistically embedded into a tree metric with an expected distortion of $O(\log n)$, where $n$ is the number of points in the metric space. This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound of $\Omega(\log n)$. The tree metrics produced in this embedding dominate the original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure.
The result has significant implications for various computational problems:
Network design: Improves approximation algorithms for problems like the Group Steiner tree problem (a generalization of the Steiner tree problem) and buy-at-bulk network design (a problem in network planning and design) by simplifying the metric space to a tree metric.
Clustering: Enhances algorithms for clustering problems where hierarchical clustering can be performed more efficiently on tree metrics.
Online algorithms: Benefits problems like the k-server problem and metrical task system by providing better competitive ratios through simplified metrics.
The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm. The distortion bound has led to improved approximation ratios in several algorithmic problems, demonstrating the practical significance of this theoretical result.
Distances between mathematical objects
In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples:
Functions to a metric space. If X is any set and M is a metric space, then the set of all bounded functions from X to M (i.e. those functions whose image is a bounded subset of M) can be turned into a metric space by defining the distance between two bounded functions f and g to be the supremum of the pointwise distances d(f(x), g(x)). This metric is called the uniform metric or supremum metric. If M is complete, then this function space is complete as well; moreover, if X is also a topological space, then the subspace consisting of all bounded continuous functions from X to M is also complete. When X is an interval of the real line, this function space is known as a classical Wiener space.
String metrics and edit distances. There are many ways of measuring distances between strings of characters, which may represent sentences in computational linguistics or code words in coding theory. Edit distances attempt to measure the number of changes necessary to get from one string to another. For example, the Hamming distance measures the minimal number of substitutions needed, while the Levenshtein distance measures the minimal number of deletions, insertions, and substitutions; both of these can be thought of as distances in an appropriate graph.
Graph edit distance is a measure of dissimilarity between two graphs, defined as the minimal number of graph edit operations required to transform one graph into another.
Wasserstein metrics measure the distance between two measures on the same metric space. The Wasserstein distance between two measures is, roughly speaking, the cost of transporting one to the other.
The set of all m × n matrices over some field is a metric space with respect to the rank distance d(A, B) = rank(B − A).
The Helly metric in game theory measures the difference between strategies in a game.
Hausdorff and Gromov–Hausdorff distance
The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves. Hausdorff and Gromov–Hausdorff distance define metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively. Suppose M is a metric space, and let S be a subset of M. The distance from S to a point x of M is, informally, the distance from x to the closest point of S. However, since there may not be a single closest point, it is defined via an infimum. In particular, the distance from x to S is zero if and only if x belongs to the closure of S.
Furthermore, distances between points and sets satisfy a version of the triangle inequality, and therefore the map sending a point to its distance from a fixed set is continuous. Incidentally, this shows that metric spaces are completely regular. Given two subsets of a metric space, their Hausdorff distance is defined in terms of these point-to-set distances. Informally, two sets are close to each other in the Hausdorff distance if no element of one is too far from the other set, and vice versa. For example, if one set is an open set in Euclidean space and the other is an ε-net inside it, then their Hausdorff distance is at most ε. In general, the Hausdorff distance can be infinite or zero. However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets of a metric space. The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. The Gromov–Hausdorff distance between two compact spaces is the infimum of the Hausdorff distance over all metric spaces that contain the two spaces as subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications.
Miscellaneous examples
Given a metric space with metric d and an increasing concave function f with f(x) = 0 if and only if x = 0, the composition f ∘ d is also a metric. If f(x) = x^α for some real number α, such a metric is known as a snowflake of d.
The tight span of a metric space is another metric space which can be thought of as an abstract version of the convex hull.
The knight's move metric, the minimal number of knight's moves to reach one point from another, is a metric on the integer lattice Z².
The British Rail metric (also called the "post office metric" or the "French railway metric") on a normed vector space is given by d(x, y) = ‖x‖ + ‖y‖ for distinct points x and y, and d(x, x) = 0. More generally the norm can be replaced with a function f taking an arbitrary set to non-negative reals and taking the value 0 at most once: then the metric is defined by d(x, y) = f(x) + f(y) for distinct points x and y, and d(x, x) = 0. The name alludes to the tendency of railway journeys to proceed via London (or Paris) irrespective of their final destination.
The Robinson–Foulds metric is used for calculating distances between phylogenetic trees in phylogenetics.
Constructions
Product metric spaces
If M_1, ..., M_n are metric spaces and N is the Euclidean norm on R^n, then their product is a metric space, where the product metric is obtained by applying N to the vector of coordinate-wise distances, and the induced topology agrees with the product topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained if N is the taxicab norm, a p-norm, the maximum norm, or any other norm which is non-decreasing as the coordinates of a positive n-tuple increase (yielding the triangle inequality). Similarly, a metric on the topological product of countably many metric spaces can be obtained by a suitable convergent weighted sum of the individual metrics. The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies of the real line is not first-countable and thus is not metrizable.
Quotient metric spaces
If M is a metric space with metric d, and ~ is an equivalence relation on M, then we can endow the quotient set M/~ with a pseudometric. The distance between two equivalence classes [x] and [y] is defined as the infimum of sums d(p_1, q_1) + d(p_2, q_2) + ... + d(p_n, q_n), where the infimum is taken over all finite sequences (p_1, ..., p_n) and (q_1, ..., q_n) with p_1 ~ x, q_n ~ y, and q_i ~ p_{i+1} for each i. In general this will only define a pseudometric, i.e. zero distance does not necessarily imply that [x] = [y]. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces), the result is a metric. The quotient metric is characterized by the following universal property. If f : (M, d) → (X, δ) is a metric (i.e.
1-Lipschitz) map between metric spaces satisfying f(x) = f(y) whenever x ~ y, then the induced function on the quotient, given by sending [x] to f(x), is a metric map. The quotient metric does not always induce the quotient topology. For example, the topological quotient of the metric space identifying all points of the form is not metrizable since it is not first-countable, but the quotient metric is a well-defined metric on the same set which induces a coarser topology. Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient. A topological space is sequential if and only if it is a (topological) quotient of a metric space.
Generalizations of metric spaces
There are several notions of spaces which have less structure than a metric space, but more than a topological space. Uniform spaces are spaces in which distances are not defined, but uniform continuity is. Approach spaces are spaces in which point-to-set distances are defined, instead of point-to-point distances. They have particularly good properties from the point of view of category theory. Continuity spaces are a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains. There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, in functional analysis pseudometrics often come from seminorms on vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term in topology.
Extended metrics
Some authors define metrics so as to allow the distance function to attain the value ∞, i.e. distances are non-negative numbers on the extended real number line. Such a function is also called an extended metric or "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using a subadditive monotonically increasing bounded function which is zero at zero, e.g. d ↦ d/(1 + d) or d ↦ min(d, 1).
Metrics valued in structures other than the real numbers
The requirement that the metric take values in the non-negative reals can be relaxed to consider metrics with values in other structures, including: Ordered fields, yielding the notion of a generalised metric. More general directed sets. In the absence of an addition operation, the triangle inequality does not make sense and is replaced with an ultrametric inequality. This leads to the notion of a generalized ultrametric. These generalizations still induce a uniform structure on the space.
Pseudometrics
A pseudometric on a set is a function which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) only d(x, x) = 0 for all x is required. In other words, the axioms for a pseudometric are those of a metric with the identity of indiscernibles weakened in this way. In some contexts, pseudometrics are referred to as semimetrics because of their relation to seminorms.
Quasimetrics
Occasionally, a quasimetric is defined as a function that satisfies all axioms for a metric with the possible exception of symmetry. The name of this generalisation is not entirely standardized. Quasimetrics are common in real life. For example, given a set of mountain villages, the typical walking times between them form a quasimetric because travel uphill takes longer than travel downhill.
Another example is the length of car rides in a city with one-way streets: here, a shortest path from point A to point B goes along a different set of streets than a shortest path from B to A and may have a different length. A quasimetric on the reals can be defined by setting d(x, y) = x − y if x ≥ y, and d(x, y) = 1 otherwise. The 1 may be replaced, for example, by infinity or by any other subadditive function of y − x. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size by filing it down, but it is difficult or impossible to grow it. Given a quasimetric on a set, one can define an R-ball around a point to be the set of points at quasidistance less than R from it. As in the case of a metric, such balls form a basis for a topology, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed) Sorgenfrey line.
Metametrics or partial metrics
In a metametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. Metametrics appear in the study of Gromov hyperbolic metric spaces and their boundaries. The visual metametric on such a space satisfies d(x, x) = 0 for points x on the boundary, but otherwise d(x, x) is approximately the distance from x to the boundary. Metametrics were first defined by Jussi Väisälä. In other work, a function satisfying these axioms is called a partial metric or a dislocated metric.
Semimetrics
A semimetric on a set is a function that satisfies the first three axioms, but not necessarily the triangle inequality. Some authors work with a weaker form of the triangle inequality, such as:
d(x, z) ≤ ρ (d(x, y) + d(y, z)) (ρ-relaxed triangle inequality)
d(x, z) ≤ ρ max(d(x, y), d(y, z)) (ρ-inframetric inequality)
The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to as quasimetrics, nearmetrics or inframetrics. The ρ-inframetric inequalities were introduced to model round-trip delay times in the internet. The triangle inequality implies the 2-inframetric inequality, and the ultrametric inequality is exactly the 1-inframetric inequality.
Premetrics
Relaxing the last three axioms leads to the notion of a premetric, i.e. a nonnegative function d(x, y) satisfying d(x, x) = 0 for all x. This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics or pseudometrics; in translations of Russian books it sometimes appears as "prametric". A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance. Any premetric gives rise to a topology as follows. For a positive real r, the r-ball centered at a point p is defined as the set of points at premetric distance less than r from p. A set is called open if for any point in the set there is an r-ball centered at the point which is contained in the set. Every premetric space is a topological space, and in fact a sequential space. In general, the r-balls themselves need not be open sets with respect to this topology. As for metrics, the distance between two sets is defined as the infimum of the premetric distances between their points. This defines a premetric on the power set of a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric.
Any premetric gives rise to a preclosure operator as follows: Pseudoquasimetrics The prefixes pseudo-, quasi- and semi- can also be combined, e.g., a pseudoquasimetric (sometimes called hemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the open form a basis of open sets. A very basic example of a pseudoquasimetric space is the set with the premetric given by and The associated topological space is the Sierpiński space. Sets equipped with an extended pseudoquasimetric were studied by William Lawvere as "generalized metric spaces". From a categorical point of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of the metric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients. Lawvere also gave an alternate definition of such spaces as enriched categories. The ordered set can be seen as a category with one morphism if and none otherwise. Using as the tensor product and 0 as the identity makes this category into a monoidal category . Every (extended pseudoquasi-)metric space can now be viewed as a category enriched over : The objects of the category are the points of . For every pair of points and such that , there is a single morphism which is assigned the object of . The triangle inequality and the fact that for all points derive from the properties of composition and identity in an enriched category. Since is a poset, all diagrams that are required for an enriched category commute automatically. Metrics on multisets The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. A multiset is a generalization of the notion of a set in which an element can occur more than once. Define the multiset union as follows: if an element occurs times in and times in then it occurs times in . A function on the set of nonempty finite multisets of elements of a set is a metric if if all elements of are equal and otherwise (positive definiteness) depends only on the (unordered) multiset (symmetry) (triangle inequality) By considering the cases of axioms 1 and 2 in which the multiset has two elements and the case of axiom 3 in which the multisets , , and have one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements. A simple example is the set of all nonempty finite multisets of integers with . More complex examples are information distance in multisets; and normalized compression distance (NCD) in multisets.
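The simple example just given is easy to check concretely. Below is a minimal Python sketch (collections.Counter stands in for multisets, and its addition implements multiset union; the helper name is illustrative) of the multiset metric d(X) = max(X) − min(X) and its multiset triangle inequality:

from collections import Counter

def d(X):
    # d(X) = max(X) - min(X) on a nonempty finite multiset of integers.
    return max(X.elements()) - min(X.elements())

X, Y, Z = Counter([1, 1, 5]), Counter([2, 3]), Counter([0, 4])
# Counter addition implements multiset union.
assert d(X + Y) <= d(X + Z) + d(Z + Y)   # triangle inequality (axiom 3)
assert d(Counter([3, 8])) == 5           # on pairs: the usual |a - b|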
Mathematics
Geometry
null
20025
https://en.wikipedia.org/wiki/Multihull
Multihull
A multihull is a boat or ship with more than one hull, whereas a vessel with a single hull is a monohull. The most common multihulls are catamarans (with two hulls), and trimarans (with three hulls). There are other types, with four or more hulls, but such examples are very rare and tend to be specialised for particular functions. Multihull history Single-outrigger boats, double-canoes (catamarans), and double-outrigger boats (trimarans) of the Austronesian peoples are the direct antecedents of modern multihull vessels. They were developed during the Austronesian Expansion (c. 3000 to 1500 BC) which allowed Austronesians to colonize maritime Southeast Asia, Micronesia, Island Melanesia, Madagascar, and Polynesia. These Austronesian vessels are still widely used today by traditional fishermen in Austronesian regions in maritime Southeast Asia, Oceania and Madagascar; as well as areas they were introduced to by Austronesians in ancient times like in the East African coast and in South Asia. Greek sources also describe large third-century BC catamarans, one built under the supervision of Archimedes, the Syracusia, and another reportedly built by Ptolemy IV Philopator of Egypt, the Tessarakonteres. Modern developers Modern pioneers of multihull design include James Wharram (UK), Derek Kelsall (UK), Tom Lack (UK), Lock Crowther (Aust), Hedly Nicol (Aust), Malcolm Tennant (NZ), Jim Brown (USA), Arthur Piver (USA), Chris White (US), Ian Farrier (NZ), LOMOcean (NZ), Darren Newton (UK), Jens Quorning (DK) and Dick Newick (USA). Multihull types Single-outrigger ("proa") A single-outrigger canoe is a canoe with a slender outrigger ("ama") attached by two or more struts ("akas"). This craft will normally be propelled by paddles. Single-outrigger canoes that use sails are usually inaccurately referred to by the name "proa". While single-outrigger canoes and proas both derive stability from the outrigger, the proa has the greater need of the outrigger to counter the heeling effect of the sail. The outrigger on a proa can either be on the lee or windward side, or in a tacking proa, interchangeable. However, more recently, proas tend to keep the outrigger either to leeward or to wind which means that instead of tacking, a "shunt" is required, whereby the bow becomes the stern, and the stern becomes the bow. Catamaran (twin-hull) A catamaran is a vessel with twin hulls. Commercial catamarans began in 17th century England. Separate attempts at steam-powered catamarans were carried out by the middle of the 20th century. However, success required better materials and more developed hydrodynamic technologies. During the second half of the 20th century catamaran designs flourished. Catamaran configurations are used for racing, sailing, tourist and fishing boats. The hulls of a catamaran are typically connected by a bridgedeck, although some simpler cruising catamarans simply have a trampoline stretched between the crossbeams (or "akas"). Small beachable catamarans, such as the Hobie Cat, also have only a trampoline between the hulls. Catamarans derive stability from the distance between the hulls—transverse clearance—the greater this distance, the greater the stability. Typically, catamaran hulls are slim, although they may flare above the waterline to give reserve buoyancy. The vertical clearance between the design waterplane and the bottom of the bridge deck determines the likelihood of contact with waves. Increased vertical clearance diminishes such contact and increases seaworthiness, within limits. 
Trimaran (double-outrigger)
A trimaran (or double-outrigger) is a vessel with two outrigger floats attached on either side of a main hull by a crossbeam, wing, or other form of superstructure. They are derived from traditional double-outrigger vessels of maritime Southeast Asia. Despite not being traditionally Polynesian, western trimarans use traditional Polynesian terms for the hull (vaka), the floats (ama), and connectors (aka). The word trimaran is a portmanteau of tri and (cata)maran, a term that is thought to have been coined by Victor Tchetchet, a pioneering modern multihull designer, born in Ukraine (at that time part of the Russian Empire). Some trimaran configurations use the outlying hulls to enhance stability and allow for shallow draft; examples include the experimental ship RV Triton and the Independence class of littoral combat ships (US).
Four and five hulls
Some multihulls with four (quadrimaran) or five (pentamaran) hulls have been proposed; few have been built. In 2018 a Swiss entrepreneur sought funding to build a sail-driven quadrimaran called Manta that would use solar power to scoop plastic from the ocean. Manta was still under development as of the end of 2023. A French manufacturer, Tera-4, produces motor quadrimarans which use aerodynamic lift between the four hulls to promote planing and reduce power consumption. Design concepts for vessels with two pairs of outriggers have been referred to as pentamarans. The design concept comprises a narrow, long hull that cuts through waves. The outriggers then provide the stability that such a narrow hull needs. While the aft sponsons act as trimaran sponsons do, the front sponsons do not normally touch the water; only if the ship rolls to one side do they provide added buoyancy to correct the roll. BMT Group, a shipbuilding and engineering company in the UK, has proposed a fast cargo ship and a yacht using this kind of hull.
SWATH multihulls
Multihull designs may have hull beams that are slimmer at the water surface ("waterplane") than underwater. This arrangement allows good wave-piercing, while keeping a buoyant hydrodynamic hull beneath the waterplane. In a catamaran configuration this is called a small waterplane area twin hull, or SWATH. While SWATHs are stable in rough seas, they have the drawbacks, compared with other catamarans, of having a deeper draft, being more sensitive to loading, and requiring more power because of their higher underwater surface areas. Triple-hull configurations of small waterplane area craft had been studied, but not built, as of 2008.
Performance
Each hull of a multihull vessel can be narrower than that of a monohull with the same displacement. With long, narrow hulls, a multihull typically produces very small bow waves and wakes, a consequence of a favorable Froude number. Vessels with beamy hulls (typically monohulls) normally create a large bow wave and wake. Such a vessel is limited by its "hull speed", being unable to "climb over" its bow wave unless it changes from displacement mode to planing mode. Vessels with slim hulls (typically multihulls) will normally create no appreciable bow wave to limit their progress. In 1978, 101 years after catamarans like Amaryllis were banned from yacht racing, they returned to the sport. This started with the victory of the trimaran Olympus Photo, skippered by Mike Birch, in the first Route du Rhum. Thereafter, no open ocean race was won by a monohull. Winning times have dropped by 70% since 1978.
Olympus Photo's winning time of 23 days 6 h 58 min 35 s dropped to Gitana 11's 7 days 17 h 19 min 6 s in 2006. Around 2016 the first large wind-driven foil-borne racing catamarans were built. These cats rise onto foils and T-foiled rudders only at higher speeds.
Sailing multihulls and workboats
The increasing popularity of catamarans since the 1960s is due to the added space, speed, shallow draft, and lack of heeling underway. The stability of a multihull makes sailing much less tiring for the crew, and is particularly suitable for families. Having no need for ballast for stability, multihulls are much lighter than monohull sailboats; but a multihull's fine hull sections mean that one must take care not to overload the vessel. Powered catamarans are increasingly used for racing, cruising, and as workboats and fishing boats. Speed, the stable working platform, safety, and added space are the prime advantages for power cats. "The weight of a multihull, of this length, is probably not much more than half the weight of a monohull of the same length and it can be sailed with less crew effort." Racing catamarans and trimarans are popular in France, New Zealand and Australia. Cruising cats are commonest in the Caribbean and Mediterranean (where they form the bulk of the charter business) and Australia. Multihulls are less common in the US, perhaps because their increased beam requires wider docks and slips. Smaller multihulls may be collapsible and trailerable, and thus suitable for daysailing and racing. Until the 1960s most multihull sailboats (except for beach cats) were built either by their owners or by boat builders; since then companies have been selling mass-produced boats, of which there are more than 150 models.
Technology
Naval transport
null
20039
https://en.wikipedia.org/wiki/Merge%20sort
Merge sort
In computer science, merge sort (also commonly spelled as mergesort and as merge-sort) is an efficient, general-purpose, and comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the relative order of equal elements is the same in the input and output. Merge sort is a divide-and-conquer algorithm that was invented by John von Neumann in 1945. A detailed description and analysis of bottom-up merge sort appeared in a report by Goldstine and von Neumann as early as 1948.
Algorithm
Conceptually, a merge sort works as follows: Divide the unsorted list into n sub-lists, each containing one element (a list of one element is considered sorted). Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
Top-down implementation
Example C-like code using indices for the top-down merge sort algorithm that recursively splits the list (called runs in this example) into sublists until sublist size is 1, then merges those sublists to produce a sorted list. The copy-back step is avoided by alternating the direction of the merge with each level of recursion (except for an initial one-time copy, which can be avoided too). As a simple example, consider an array with two elements. The elements are copied to B[], then merged back to A[]. If there are four elements, when the bottom of the recursion level is reached, single-element runs from A[] are merged to B[], and then at the next higher level of recursion, those two-element runs are merged to A[]. This pattern continues with each level of recursion.

// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
    CopyArray(A, 0, n, B);           // one-time copy of A[] to B[]
    TopDownSplitMerge(A, 0, n, B);   // sort data from B[] into A[]
}

// Split A[] into 2 runs, sort both runs into B[], merge both runs from B[] to A[].
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
    if (iEnd - iBegin <= 1)                     // if run size == 1,
        return;                                 //   consider it sorted
    // split the run longer than 1 item into halves
    iMiddle = (iEnd + iBegin) / 2;              // iMiddle = mid point
    // recursively sort both runs from array A[] into B[]
    TopDownSplitMerge(A, iBegin, iMiddle, B);   // sort the left run
    TopDownSplitMerge(A, iMiddle, iEnd, B);     // sort the right run
    // merge the resulting runs from array B[] into A[]
    TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}

// Left source half is  A[iBegin:iMiddle-1].
// Right source half is A[iMiddle:iEnd-1].
// Result is            B[iBegin:iEnd-1].
void TopDownMerge(B[], iBegin, iMiddle, iEnd, A[])
{
    i = iBegin, j = iMiddle;
    // While there are elements in the left or right runs...
    for (k = iBegin; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(A[], iBegin, iEnd, B[])
{
    for (k = iBegin; k < iEnd; k++)
        B[k] = A[k];
}

Sorting the entire array is accomplished by TopDownMergeSort(A, B, length(A)).
Bottom-up implementation
Example C-like code using indices for the bottom-up merge sort algorithm which treats the list as an array of n sublists (called runs in this example) of size 1, and iteratively merges sub-lists back and forth between two buffers:

// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
    // Each 1-element run in A is already "sorted".
    // Make successively longer sorted runs of length 2, 4, 8, 16...
    // until the whole array is sorted.
    for (width = 1; width < n; width = 2 * width)
    {
        // Array A is full of runs of length width.
        for (i = 0; i < n; i = i + 2 * width)
        {
            // Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[],
            // or copy A[i:n-1] to B[] (if (i+width >= n)).
            BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
        }
        // Now work array B is full of runs of length 2*width.
        // Copy array B to array A for the next iteration.
        // A more efficient implementation would swap the roles of A and B.
        CopyArray(B, A, n);
        // Now array A is full of runs of length 2*width.
    }
}

// Left run is  A[iLeft :iRight-1].
// Right run is A[iRight:iEnd-1  ].
void BottomUpMerge(A[], iLeft, iRight, iEnd, B[])
{
    i = iLeft, j = iRight;
    // While there are elements in the left or right runs...
    for (k = iLeft; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iRight && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(B[], A[], n)
{
    for (i = 0; i < n; i++)
        A[i] = B[i];
}

Top-down implementation using lists
Pseudocode for the top-down merge sort algorithm which recursively divides the input list into smaller sublists until the sublists are trivially sorted, and then merges the sublists while returning up the call chain.

function merge_sort(list m) is
    // Base case. A list of zero or one elements is sorted, by definition.
    if length of m ≤ 1 then
        return m

    // Recursive case. First, divide the list into equal-sized sublists
    // consisting of the first half and second half of the list.
    // This assumes lists start at index 0.
    var left := empty list
    var right := empty list
    for each x with index i in m do
        if i < (length of m)/2 then
            add x to left
        else
            add x to right

    // Recursively sort both sublists.
    left := merge_sort(left)
    right := merge_sort(right)

    // Then merge the now-sorted sublists.
    return merge(left, right)

In this example, the merge function merges the left and right sublists.

function merge(left, right) is
    var result := empty list

    while left is not empty and right is not empty do
        if first(left) ≤ first(right) then
            append first(left) to result
            left := rest(left)
        else
            append first(right) to result
            right := rest(right)

    // Either left or right may have elements left; consume them.
    // (Only one of the following loops will actually be entered.)
    while left is not empty do
        append first(left) to result
        left := rest(left)
    while right is not empty do
        append first(right) to result
        right := rest(right)
    return result

Bottom-up implementation using lists
Pseudocode for the bottom-up merge sort algorithm which uses a small fixed-size array of references to nodes, where array[i] is either a reference to a list of size 2^i or nil. node is a reference or pointer to a node. The merge() function would be similar to the one shown in the top-down merge lists example; it merges two already sorted lists, and handles empty lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
    // return if empty list
    if head = nil then
        return nil
    var node array[32]; initially all nil
    var node result
    var node next
    var int  i
    result := head
    // merge nodes into array
    while result ≠ nil do
        next := result.next;
        result.next := nil
        for (i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
            result := merge(array[i], result)
            array[i] := nil
        // do not go past end of array
        if i = 32 then
            i -= 1
        array[i] := result
        result := next
    // merge array into single list
    result := nil
    for (i = 0; i < 32; i += 1) do
        result := merge(array[i], result)
    return result

Top-down implementation in a declarative style
Haskell-like pseudocode, showing how merge sort can be implemented in such a language using constructs and ideas from functional programming.

merge_sort :: [a] -> [a]
merge_sort([])  = []
merge_sort([x]) = [x]
merge_sort(xs)  = merge(merge_sort(left), merge_sort(right))
    where (left, right) = split(xs, length(xs) / 2)

merge :: ([a], [a]) -> [a]
merge([], xs) = xs
merge(xs, []) = xs
merge(x : xs, y : ys) | x ≤ y     = x : merge(xs, y : ys)
                      | otherwise = y : merge(x : xs, ys)

Analysis
In sorting n objects, merge sort has an average and worst-case performance of O(n log n) comparisons. If the running time (number of comparisons) of merge sort for a list of length n is T(n), then the recurrence relation T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists). The closed form T(n) = Θ(n log n) follows from the master theorem for divide-and-conquer recurrences. The number of comparisons made by merge sort in the worst case is given by the sorting numbers. These numbers are equal to or slightly smaller than (n ⌈lg n⌉ − 2^⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)). Merge sort's best case takes about half as many iterations as its worst case. For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + Σ_{k≥0} 1/(2^k + 1) ≈ 0.2645. In the worst case, merge sort uses approximately 39% fewer comparisons than quicksort does in its average case, and in terms of moves, merge sort's worst-case complexity is O(n log n), the same complexity as quicksort's best case. Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such as Lisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort. Merge sort's most common implementation does not sort in place; therefore, the memory size of the input must be allocated for the sorted output to be stored in (see below for variations that need only n/2 extra spaces).
Natural merge sort
A natural merge sort is similar to a bottom-up merge sort except that any naturally occurring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (alternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being convenient data structures (used as FIFO queues or LIFO stacks). In the bottom-up merge sort, the starting point assumes each run is one item long. In practice, random input data will have many short runs that just happen to be sorted. In the typical case, the natural merge sort may not need as many passes because there are fewer runs to merge.
In the best case, the input is already sorted (i.e., is one run), so the natural merge sort need only make one pass through the data. In many practical cases, long natural runs are present, and for that reason natural merge sort is exploited as the key component of Timsort. Example:

Start       : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge       : (2 3 4)(1 5 7 8 9)(0 6)
Merge       : (1 2 3 4 5 7 8 9)(0 6)
Merge       : (0 1 2 3 4 5 6 7 8 9)

Formally, the natural merge sort is said to be Runs-optimal, where Runs(L) is the number of runs in L, minus one. Tournament replacement selection sorts are used to gather the initial runs for external sorting algorithms.
Ping-pong merge sort
Instead of merging two blocks at a time, a ping-pong merge merges four blocks at a time. The four sorted blocks are merged simultaneously to auxiliary space into two sorted blocks, then the two sorted blocks are merged back to main memory. Doing so omits the copy operation and reduces the total number of moves by half. An early public-domain implementation of a four-at-once merge was by WikiSort in 2014; the method was later that year described as an optimization for patience sorting and named a ping-pong merge. Quadsort implemented the method in 2020 and named it a quad merge.
In-place merge sort
One drawback of merge sort, when implemented on arrays, is its working memory requirement. Several methods to reduce memory or make merge sort fully in-place have been suggested. One early suggestion is an alternative version of merge sort that uses constant additional space. Katajainen et al. present an algorithm that requires a constant amount of working memory: enough storage space to hold one element of the input array, and additional space to hold pointers into the input array. They achieve an O(n log n) time bound with small constants, but their algorithm is not stable. Several attempts have been made at producing an in-place merge algorithm that can be combined with a standard (top-down or bottom-up) merge sort to produce an in-place merge sort. In this case, the notion of "in-place" can be relaxed to mean "taking logarithmic stack space", because standard merge sort requires that amount of space for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is possible in linear time using a constant amount of scratch space, but their algorithm is complicated and has high constant factors: merging arrays of length m and n takes a number of moves proportional to m + n, but with a large constant. This complicated algorithm, with its high constant factors, was later made simpler and easier to understand. Bing-Chao Huang and Michael A. Langston presented a straightforward linear-time algorithm, practical in-place merge, to merge a sorted list using a fixed amount of additional space. Both build on the work of Kronrod and others: the algorithm merges in linear time and constant extra space. It takes only slightly more average time than standard merge sort algorithms, which are free to exploit temporary extra memory cells, slower by less than a factor of two. Though the algorithm is much faster in practice, it is unstable for some lists. Using similar concepts, however, they were able to solve this problem. Other in-place algorithms include SymMerge, which is stable and takes slightly more than linear time in total. Plugging such an algorithm into merge sort increases its complexity to the non-linearithmic, but still quasilinear, O(n log² n).
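Returning to the natural merge sort described above: the following Python sketch (function names are illustrative, not from any reference implementation) detects the naturally occurring non-decreasing runs and merges them pairwise, reproducing the run selection in the worked example.

def natural_merge_sort(a):
    # Split the input into its maximal non-decreasing runs,
    # then merge runs pairwise until a single run remains.
    runs, start = [], 0
    for i in range(1, len(a) + 1):
        if i == len(a) or a[i] < a[i - 1]:   # run boundary
            runs.append(a[start:i])
            start = i
    if not runs:
        return []
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs) - 1, 2):
            merged.append(merge(runs[i], runs[i + 1]))
        if len(runs) % 2:                    # odd run left over
            merged.append(runs[-1])
        runs = merged
    return runs[0]

def merge(left, right):
    # Standard stable two-way merge: ties are taken from the left run.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

# Matches the example: runs (3 4)(2)(1 7)(5 8 9)(0 6) are selected first.
assert natural_merge_sort([3, 4, 2, 1, 7, 5, 8, 9, 0, 6]) == list(range(10))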
Many applications of external sorting use a form of merge sorting where the input gets split up into a higher number of sublists, ideally to a number for which merging them still makes the currently processed set of pages fit into main memory. A modern stable, linear, and in-place merge variant is block merge sort, which creates a section of unique values to use as swap space. The space overhead can be reduced to O(√n) by using binary searches and rotations. This method is employed by the C++ STL library and quadsort. An alternative to reduce the copying into multiple lists is to associate a new field of information with each key (the elements in m are called keys). This field will be used to link the keys and any associated information together in a sorted list (a key and its related information is called a record). Then the merging of the sorted lists proceeds by changing the link values; no records need to be moved at all. A field which contains only a link will generally be smaller than an entire record, so less space will also be used. This is a standard sorting technique, not restricted to merge sort. A simple way to reduce the space overhead to n/2 is to maintain left and right as a combined structure, copy only the left part of m into temporary space, and direct the merge routine to place the merged output into m. With this version it is better to allocate the temporary space outside the merge routine, so that only one allocation is needed. The excessive copying mentioned previously is also mitigated, since the last pair of lines before the return result statement (function merge in the pseudocode above) become superfluous.
Use with tape drives
An external merge sort is practical to run using disk or tape drives when the data to be sorted is too large to fit into memory. External sorting explains how merge sort is implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is sequential (except for rewinds at the end of each pass). A minimal implementation can get by with just two record buffers and a few program variables. Naming the four tape drives as A, B, C, D, with the original data on A, and using only two record buffers, the algorithm is similar to the bottom-up implementation, using pairs of tape drives instead of arrays in memory. The basic algorithm can be described as follows:
Merge pairs of records from A, writing two-record sublists alternately to C and D.
Merge two-record sublists from C and D into four-record sublists, writing these alternately to A and B.
Merge four-record sublists from A and B into eight-record sublists, writing these alternately to C and D.
Repeat until you have one list containing all the data, sorted, in log2(n) passes.
Instead of starting with very short runs, usually a hybrid algorithm is used, where the initial pass will read many records into memory, do an internal sort to create a long run, and then distribute those long runs onto the output set. This step avoids many early passes. For example, an internal sort of 1024 records will save nine passes. The internal sort is often made as large as possible because of this benefit. In fact, there are techniques that can make the initial runs longer than the available internal memory. One of them, Knuth's 'snowplow' (based on a binary min-heap), generates runs twice as long (on average) as the size of the memory used. With some overhead, the above algorithm can be modified to use three tapes.
O(n log n) running time can also be achieved using two queues, or a stack and a queue, or three stacks. In the other direction, using k > 2 tapes (and O(k) items in memory), we can reduce the number of tape operations by a factor of O(log k) by using a k/2-way merge. A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase merge sort.
Optimizing merge sort
On modern computers, locality of reference can be of paramount importance in software optimization, because multilevel memory hierarchies are used. Cache-aware versions of the merge sort algorithm, whose operations have been specifically chosen to minimize the movement of pages in and out of a machine's memory cache, have been proposed. For example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a CPU's cache. Each of these subarrays is sorted with an in-place sorting algorithm such as insertion sort, to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance on machines that benefit from cache optimization.
Parallel merge sort
Merge sort parallelizes well due to the use of the divide-and-conquer method. Several different parallel variants of the algorithm have been developed over the years. Some parallel merge sort algorithms are strongly related to the sequential top-down merge algorithm while others have a different general structure and use the K-way merge method.
Merge sort with parallel recursion
The sequential merge sort procedure can be described in two phases, the divide phase and the merge phase. The first consists of many recursive calls that repeatedly perform the same division process until the subsequences are trivially sorted (containing one or no element). An intuitive approach is the parallelization of those recursive calls. The following pseudocode describes the merge sort with parallel recursion using the fork and join keywords:

// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then             // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)

This algorithm is the trivial modification of the sequential version and does not parallelize well. Therefore, its speedup is not very impressive. It has a span of Θ(n), which is only an improvement of Θ(log n) compared to the sequential version (see Introduction to Algorithms). This is mainly due to the sequential merge method, as it is the bottleneck of the parallel executions.
Merge sort with parallel merging
Better parallelism can be achieved by using a parallel merge algorithm. Cormen et al. present a binary variant that merges two sorted sub-sequences into one sorted output sequence. In one of the sequences (the longer one if they are of unequal length), the element of the middle index is selected. Its position in the other sequence is determined in such a way that this sequence would remain sorted if this element were inserted at this position. Thus, one knows how many other elements from both sequences are smaller, and the position of the selected element in the output sequence can be calculated. For the partial sequences of the smaller and larger elements created in this way, the merge algorithm is again executed in parallel until the base case of the recursion is reached.
The following pseudocode shows the modified parallel merge sort method using the parallel merge algorithm (adopted from Cormen et al.).

/**
 * A:   Input array
 * B:   Output array
 * lo:  lower bound
 * hi:  upper bound
 * off: offset
 */
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else
        let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)

In order to analyze a recurrence relation for the worst-case span, the recursive calls of parallelMergesort have to be incorporated only once due to their parallel execution, obtaining T∞(n) = T∞(n/2) + Θ(log² n). For detailed information about the complexity of the parallel merge procedure, see Merge algorithm. The solution of this recurrence is given by T∞(n) = Θ(log³ n). This parallel merge algorithm reaches a parallelism of Θ(n / log² n), which is much higher than the parallelism of the previous algorithm. Such a sort can perform well in practice when combined with a fast stable sequential sort, such as insertion sort, and a fast sequential merge as a base case for merging small arrays.
Parallel multiway merge sort
It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there are usually p > 2 processors available. A better approach may be to use a K-way merge method, a generalization of binary merge, in which k sorted sequences are merged. This merge variant is well suited to describe a sorting algorithm on a PRAM.
Basic idea
Given an unsorted sequence of n elements, the goal is to sort the sequence with p available processors. These elements are distributed equally among all processors and sorted locally using a sequential sorting algorithm. Hence, the sequence consists of p sorted sequences S_1, ..., S_p of length n/p. For simplification let n be a multiple of p, so that |S_i| = n/p for i = 1, ..., p. These sequences will be used to perform a multisequence selection/splitter selection. For i = 1, ..., p, the algorithm determines splitter elements v_i with global rank i · n/p. Then the corresponding positions of v_1, ..., v_p in each sequence S_j are determined with binary search, and thus the S_j are further partitioned into p subsequences S_{j,1}, ..., S_{j,p}. Furthermore, the elements of S_{1,i}, ..., S_{p,i} are assigned to processor i; that is, all elements between rank (i − 1) · n/p and rank i · n/p, which are distributed over all S_j. Thus, each processor receives a sequence of sorted sequences. The fact that the rank of the splitter elements was chosen globally provides two important properties: on the one hand, the ranks were chosen so that each processor can still operate on n/p elements after assignment, so the algorithm is perfectly load-balanced. On the other hand, all elements on processor i are less than or equal to all elements on processor i + 1. Hence, each processor performs the p-way merge locally and thus obtains a sorted sequence from its sub-sequences. Because of the second property, no further p-way merge has to be performed; the results only have to be put together in the order of the processor number.
Multi-sequence selection
In its simplest form, given p sorted sequences distributed evenly on p processors and a rank k, the task is to find an element x with a global rank k in the union of the sequences. Hence, this can be used to divide each S_i in two parts at a splitter index l_i, where the lower part contains only elements which are smaller than x, while the elements bigger than x are located in the upper part. The presented sequential algorithm returns the indices of the splits in each sequence, e.g.
the indices l_i in sequences S_i such that S_i[l_i] has a global rank less than k and S_i[l_i + 1] has global rank at least k.

algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
    for i = 1 to p do
        (l_i, r_i) = (0, |S_i|-1)
    while there exists i: l_i < r_i do
        // pick pivot element in S_j[l_j], .., S_j[r_j], choose random j uniformly
        v := pickPivot(S, l, r)
        for i = 1 to p do
            m_i = binarySearch(v, S_i[l_i, r_i])   // sequentially
        if m_1 + ... + m_p >= k then               // m_1 + ... + m_p is the global rank of v
            r := m                                 // vector assignment
        else
            l := m
    return l

For the complexity analysis the PRAM model is chosen. If the data is evenly distributed over all p processors, the p-fold execution of the binarySearch method has a running time of O(p log(n/p)). The expected recursion depth is O(log n), as in the ordinary Quickselect. Thus the overall expected running time is O(p log(n/p) log n). Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel such that all splitter elements of rank i · n/p for i = 1, ..., p are found simultaneously. These splitter elements can then be used to partition each sequence in p parts, with the same total running time of O(p log(n/p) log n).
Pseudocode
Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We assume that there is a barrier synchronization before and after the multisequence selection such that every processor can determine the splitting elements and the sequence partition properly.

/**
 * d: Unsorted Array of Elements
 * n: Number of Elements
 * p: Number of Processors
 * return Sorted Array
 */
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                          // the output array
    for i = 1 to p do in parallel                 // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]            // sequence of length n/p
        sort(S_i)                                 // sort locally
        synch
        v_i := msSelect([S_1,...,S_p], i * n/p)   // element with global rank i * n/p
        synch
        (S_i,1, ..., S_i,p) := sequence_partitioning(s_i, v_1, ..., v_p)   // split s_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(s_1,i, ..., s_p,i)            // merge and assign to output array
    return o

Analysis
Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with complexity O((n/p) log(n/p)). After that, the splitter elements have to be calculated in time O(p log(n/p) log n). Finally, each group of p splits has to be merged in parallel by each processor with a running time of O((n/p) log p) using a sequential p-way merge algorithm. Thus, the overall running time is given by O((n/p) log(n/p) + p log(n/p) log n + (n/p) log p).
Practical adaption and application
The multiway merge sort algorithm is very scalable through its high parallelization capability, which allows the use of many processors. This makes the algorithm a viable candidate for sorting large amounts of data, such as those processed in computer clusters. Also, since in such systems memory is usually not a limiting resource, the disadvantage of the space complexity of merge sort is negligible. However, other factors become important in such systems which are not taken into account when modelling on a PRAM: the memory hierarchy, when the data does not fit into the processor's cache, and the communication overhead of exchanging data between processors, which could become a bottleneck when the data can no longer be accessed via the shared memory. Sanders et al. have presented in their paper a bulk synchronous parallel algorithm for multilevel multiway mergesort, which divides the processors into equally sized groups. All processors sort locally first. Unlike single-level multiway mergesort, these sequences are then partitioned into parts and assigned to the appropriate processor groups.
These steps are repeated recursively in those groups. This reduces communication and especially avoids problems with many small messages. The hierarchical structure of the underlying real network can be used to define the processor groups (e.g. racks, clusters, ...).
Further variants
Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with Richard Cole using a clever subsampling algorithm to ensure O(1) merge. Other sophisticated parallel sorting algorithms can achieve the same or better time bounds with a lower constant. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW parallel random-access machine (PRAM) with n processors by performing partitioning implicitly. Powers further shows that a pipelined version of Batcher's bitonic mergesort at O((log n)^2) time on a butterfly sorting network is in practice actually faster than his O(log n) sorts on a PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix and parallel sorting.
Comparison with other sort algorithms
Although heapsort has the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort implementations generally outperform merge sort for sorting RAM-based arrays. Quicksort is preferred when the data to be sorted is smaller, since its O(log n) space complexity helps it utilize cache locality better than merge sort, whose space complexity is O(n). On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-access sequential media. Merge sort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a merge sort in such a way that it requires only Θ(1) extra space, and the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible. As of Perl 5.8, merge sort is its default sorting algorithm (it was quicksort in previous versions of Perl). In Java, the Arrays.sort() methods use merge sort or a tuned quicksort depending on the datatypes, and for implementation efficiency they switch to insertion sort when fewer than seven array elements are being sorted. The Linux kernel uses merge sort for its linked lists. Timsort, a tuned hybrid of merge sort and insertion sort, is used in a variety of software platforms and languages, including the Java and Android platforms, and has been used by Python since version 2.3; since version 3.11, Timsort's merge policy has been updated to Powersort.
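The hybrid idea mentioned in connection with Timsort can be illustrated with a short sketch. The following Python code is a minimal illustration of the merge sort/insertion sort hybrid only, not of Timsort itself; the cutoff value and function names are arbitrary choices made here for the example.

CUTOFF = 32   # illustrative threshold, not Timsort's actual choice

def insertion_sort(a, lo, hi):
    # Sort a[lo:hi] in place; fast for short, nearly sorted slices.
    for i in range(lo + 1, hi):
        x, j = a[i], i
        while j > lo and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x

def hybrid_merge_sort(a, lo=0, hi=None):
    # Top-down merge sort that hands small slices to insertion sort.
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(a, lo, mid)
    hybrid_merge_sort(a, mid, hi)
    # Standard stable merge of the two sorted halves.
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged += a[i:mid] + a[j:hi]
    a[lo:hi] = merged

import random
data = random.sample(range(1000), 1000)
hybrid_merge_sort(data)
assert data == sorted(data)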
Mathematics
Algorithms
null
20059
https://en.wikipedia.org/wiki/Multiplicative%20function
Multiplicative function
In number theory, a multiplicative function is an arithmetic function f(n) of a positive integer n with the property that f(1) = 1 and f(ab) = f(a)f(b) whenever a and b are coprime. An arithmetic function f(n) is said to be completely multiplicative (or totally multiplicative) if f(1) = 1 and f(ab) = f(a)f(b) holds for all positive integers a and b, even when they are not coprime.
Examples
Some multiplicative functions are defined to make formulas easier to write: 1(n): the constant function, defined by 1(n) = 1 (completely multiplicative). Id(n): identity function, defined by Id(n) = n (completely multiplicative). Idk(n): the power functions, defined by Idk(n) = n^k for any complex number k (completely multiplicative). As special cases we have Id0(n) = 1(n) and Id1(n) = Id(n). ε(n): the function defined by ε(n) = 1 if n = 1 and 0 otherwise, sometimes called the multiplication unit for Dirichlet convolution or simply the unit function (completely multiplicative). Sometimes written as u(n), but not to be confused with μ(n). 1C(n), the indicator function of the set C ⊂ Z, for certain sets C. The indicator function 1C(n) is multiplicative precisely when the set C has the following property for any coprime numbers a and b: the product ab is in C if and only if the numbers a and b are both themselves in C. This is the case if C is the set of squares, cubes, or k-th powers. There are also other sets (not closed under multiplication) that give rise to such functions, such as the set of square-free numbers. Other examples of multiplicative functions include many functions of importance in number theory, such as: gcd(n,k): the greatest common divisor of n and k, as a function of n, where k is a fixed integer. φ(n): Euler's totient function, counting the positive integers coprime to (but not bigger than) n. μ(n): the Möbius function, the parity (−1 for odd, +1 for even) of the number of prime factors of square-free numbers; 0 if n is not square-free. σk(n): the divisor function, which is the sum of the k-th powers of all the positive divisors of n (where k may be any complex number). As special cases we have σ0(n) = d(n), the number of positive divisors of n, and σ1(n) = σ(n), the sum of all the positive divisors of n. The sum of the k-th powers of the unitary divisors is denoted by σ*k(n). a(n): the number of non-isomorphic abelian groups of order n. λ(n): the Liouville function, λ(n) = (−1)^Ω(n) where Ω(n) is the total number of primes (counted with multiplicity) dividing n (completely multiplicative). γ(n), defined by γ(n) = (−1)^ω(n), where the additive function ω(n) is the number of distinct primes dividing n. τ(n): the Ramanujan tau function. All Dirichlet characters are completely multiplicative functions. For example (n/p), the Legendre symbol, considered as a function of n where p is a fixed prime number. An example of a non-multiplicative function is the arithmetic function r2(n), the number of representations of n as a sum of squares of two integers, positive, negative, or zero, where in counting the number of ways, reversal of order is allowed. For example: 1 = 1² + 0² = (−1)² + 0² = 0² + 1² = 0² + (−1)², and therefore r2(1) = 4 ≠ 1. This shows that the function is not multiplicative. However, r2(n)/4 is multiplicative. In the On-Line Encyclopedia of Integer Sequences, sequences of values of a multiplicative function have the keyword "mult". See arithmetic function for some other examples of non-multiplicative functions.
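The defining property can be checked empirically. Below is a minimal Python sketch with deliberately naive implementations of φ and σk (for illustration only; real computations would exploit the prime factorization, as described in the Properties section that follows):

from math import gcd

def phi(n):
    # Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def sigma(k, n):
    # Divisor function: sum of k-th powers of the divisors of n.
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# Multiplicative on the coprime pair 9 and 4 ...
assert phi(36) == phi(9) * phi(4)
assert sigma(1, 36) == sigma(1, 9) * sigma(1, 4)
# ... but not on the non-coprime pair 2 and 2:
assert phi(4) != phi(2) * phi(2)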
Properties A multiplicative function is completely determined by its values at the powers of prime numbers, a consequence of the fundamental theorem of arithmetic. Thus, if n is a product of powers of distinct primes, say n = p^a q^b ..., then f(n) = f(p^a) f(q^b) ... This property of multiplicative functions significantly reduces the need for computation, as in the following examples for n = 144 = 2^4 · 3^2: d(144) = σ_0(144) = σ_0(2^4) σ_0(3^2) = 5 · 3 = 15. Similarly, we have: σ(144) = σ_1(144) = σ_1(2^4) σ_1(3^2) = 31 · 13 = 403 and σ*(144) = σ*(2^4) σ*(3^2) = 17 · 10 = 170. In general, if f(n) is a multiplicative function and a, b are any two positive integers, then f(a) · f(b) = f(gcd(a,b)) · f(lcm(a,b)). Every completely multiplicative function is a homomorphism of monoids and is completely determined by its restriction to the prime numbers. Convolution If f and g are two multiplicative functions, one defines a new multiplicative function f * g, the Dirichlet convolution of f and g, by (f * g)(n) = Σ_{d | n} f(d) g(n/d), where the sum extends over all positive divisors d of n. With this operation, the set of all multiplicative functions turns into an abelian group; the identity element is ε. Convolution is commutative, associative, and distributive over addition. Relations among the multiplicative functions discussed above include: μ * 1 = ε (the Möbius inversion formula), (μ Id_k) * Id_k = ε (generalized Möbius inversion), φ * 1 = Id, d = 1 * 1, σ = Id * 1 = φ * d, σ_k = Id_k * 1, Id = φ * 1 = σ * μ, and Id_k = σ_k * μ. The Dirichlet convolution can be defined for general arithmetic functions, and yields a ring structure, the Dirichlet ring. The Dirichlet convolution of two multiplicative functions is again multiplicative. A proof of this fact is given by the following expansion for relatively prime a and b: (f * g)(ab) = Σ_{d | ab} f(d) g(ab/d) = Σ_{d_1 | a} Σ_{d_2 | b} f(d_1 d_2) g(ab/(d_1 d_2)) = (Σ_{d_1 | a} f(d_1) g(a/d_1)) · (Σ_{d_2 | b} f(d_2) g(b/d_2)) = (f * g)(a) · (f * g)(b). Dirichlet series for some multiplicative functions Examples include Σ_{n≥1} μ(n)/n^s = 1/ζ(s), Σ_{n≥1} d(n)/n^s = ζ(s)², Σ_{n≥1} σ_k(n)/n^s = ζ(s) ζ(s − k), Σ_{n≥1} φ(n)/n^s = ζ(s − 1)/ζ(s), and Σ_{n≥1} λ(n)/n^s = ζ(2s)/ζ(s). More examples are shown in the article on Dirichlet series. Rational arithmetical functions An arithmetical function f is said to be a rational arithmetical function of order (r, s) if there exist completely multiplicative functions g_1,...,g_r, h_1,...,h_s such that f = g_1 * ··· * g_r * h_1^(−1) * ··· * h_s^(−1), where the inverses are with respect to the Dirichlet convolution. Rational arithmetical functions of order (1, 1) are known as totient functions, and rational arithmetical functions of order (2, 0) are known as quadratic functions or specially multiplicative functions. Euler's function φ is a totient function, and the divisor function σ_k is a quadratic function. Completely multiplicative functions are rational arithmetical functions of order (1, 0). Liouville's function λ is completely multiplicative. The Möbius function μ is a rational arithmetical function of order (0, 1). By convention, the identity element ε under the Dirichlet convolution is a rational arithmetical function of order (0, 0). All rational arithmetical functions are multiplicative. A multiplicative function f is a rational arithmetical function of order (r, s) if and only if its Bell series is of the form f_p(x) = Σ_{n≥0} f(p^n) x^n = ((1 − h_1(p)x) ··· (1 − h_s(p)x)) / ((1 − g_1(p)x) ··· (1 − g_r(p)x)) for all prime numbers p. The concept of a rational arithmetical function originates with R. Vaidyanathaswamy (1931). Busche-Ramanujan identities A multiplicative function f is said to be specially multiplicative if there is a completely multiplicative function g such that f(m) f(n) = Σ_{d | gcd(m,n)} f(mn/d²) g(d) for all positive integers m and n, or equivalently f(mn) = Σ_{d | gcd(m,n)} f(m/d) f(n/d) g(d) μ(d) for all positive integers m and n, where μ is the Möbius function. These are known as Busche-Ramanujan identities. In 1906, E. Busche stated the identity σ(m) σ(n) = Σ_{d | gcd(m,n)} σ(mn/d²) d, and, in 1915, S. Ramanujan gave the inverse form σ(mn) = Σ_{d | gcd(m,n)} σ(m/d) σ(n/d) d μ(d) for f = σ. S. Chowla gave the inverse form for general f in 1929, see P. J. McCarthy (1986). The study of Busche-Ramanujan identities began with an attempt to better understand the special cases given by Busche and Ramanujan. It is known that quadratic functions f = g_1 * g_2 satisfy the Busche-Ramanujan identities with g = g_1 g_2. In fact, quadratic functions are exactly the same as specially multiplicative functions.
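Busche's identity for σ can be verified numerically. The sketch below (illustrative only, assuming nothing beyond the formula quoted above) checks σ(m)σ(n) = Σ_{d | gcd(m,n)} d · σ(mn/d²) for small m and n.

```python
# Illustrative check of the Busche-Ramanujan identity for the divisor-sum
# function sigma: sigma(m)*sigma(n) = sum over d | gcd(m, n) of
# d * sigma(m*n / d^2).

from math import gcd

def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def busche_rhs(m, n):
    g = gcd(m, n)
    return sum(d * sigma(m * n // (d * d))
               for d in range(1, g + 1) if g % d == 0)

for m in range(1, 25):
    for n in range(1, 25):
        assert sigma(m) * sigma(n) == busche_rhs(m, n)
print("Busche's identity for sigma verified for m, n < 25")
```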
Totients satisfy a restricted Busche-Ramanujan identity. For further details, see R. Vaidyanathaswamy (1931). Multiplicative function over F_q[X] Let A = F_q[X], the polynomial ring over the finite field with q elements. A is a principal ideal domain and therefore A is a unique factorization domain. A complex-valued function λ on A is called multiplicative if λ(fg) = λ(f) λ(g) whenever f and g are relatively prime. Zeta function and Dirichlet series in F_q[X] Let h be a polynomial arithmetic function (i.e. a function on the set of monic polynomials in A). Its corresponding Dirichlet series is defined to be D_h(s) = Σ_{f monic} h(f) |f|^(−s), where for g ∈ A one sets |g| = q^(deg g) if g ≠ 0, and |g| = 0 otherwise. The polynomial zeta function is then ζ_A(s) = Σ_{f monic} |f|^(−s). Similar to the situation for the integers, every Dirichlet series of a multiplicative function h has a product representation (Euler product): D_h(s) = Π_P (Σ_{n=0}^∞ h(P^n) |P|^(−sn)), where the product runs over all monic irreducible polynomials P. For example, the product representation of the zeta function is as for the integers: ζ_A(s) = Π_P (1 − |P|^(−s))^(−1). Unlike the classical zeta function, ζ_A(s) is a simple rational function: since there are exactly q^n monic polynomials of degree n, ζ_A(s) = Σ_{f monic} |f|^(−s) = Σ_{n≥0} q^n q^(−sn) = 1/(1 − q^(1−s)). In a similar way, if f and g are two polynomial arithmetic functions, one defines f * g, the Dirichlet convolution of f and g, by (f * g)(m) = Σ_{d | m} f(d) g(m/d), where the sum is over all monic divisors d of m, or equivalently over all pairs (a, b) of monic polynomials whose product is m. The identity D_h D_g = D_{h*g} still holds. Multivariate Multivariate functions can be constructed using multiplicative model estimators. Where a matrix function of is defined as a sum can be distributed across the product For the efficient estimation of , the following two nonparametric regressions can be considered: and Thus it gives an estimate value of with a local likelihood function for with known and unknown . Generalizations An arithmetical function f is quasimultiplicative if there exists a nonzero constant c such that c f(mn) = f(m) f(n) for all positive integers m, n with gcd(m, n) = 1. This concept originates with Lahiri (1972). An arithmetical function f is semimultiplicative if there exists a nonzero constant c, a positive integer a and a multiplicative function F such that f(n) = c F(n/a) for all positive integers n (under the convention that F(x) = 0 if x is not a positive integer). This concept is due to David Rearick (1966). An arithmetical function f is Selberg multiplicative if for each prime p there exists a function f_p on nonnegative integers with f_p(0) = 1 for all but finitely many primes p such that f(n) = Π_p f_p(ν_p(n)) for all positive integers n, where ν_p(n) is the exponent of p in the canonical factorization of n. See Selberg (1977). It is known that the classes of semimultiplicative and Selberg multiplicative functions coincide. They both satisfy the arithmetical identity f(m) f(n) = f(gcd(m, n)) f(lcm(m, n)) for all positive integers m and n. See Haukkanen (2012). It is well known and easy to see that multiplicative functions are quasimultiplicative functions with c = 1 and quasimultiplicative functions are semimultiplicative functions with a = 1.
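The gcd–lcm identity just cited is easy to test numerically. A small Python sketch (illustrative only) checks f(m) f(n) = f(gcd(m, n)) f(lcm(m, n)) for the divisor-count function d, which is multiplicative, and for 2d, a simple quasimultiplicative example with c = 2 (hence semimultiplicative with a = 1):

```python
# Illustrative check of the semimultiplicative identity
# f(m)*f(n) = f(gcd(m, n)) * f(lcm(m, n)).

from math import gcd

def d(n):
    """Number of positive divisors of n."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def lcm(m, n):
    return m * n // gcd(m, n)

for f in (d, lambda n: 2 * d(n)):
    for m in range(1, 30):
        for n in range(1, 30):
            assert f(m) * f(n) == f(gcd(m, n)) * f(lcm(m, n))
print("gcd-lcm identity verified for m, n < 30")
```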
Mathematics
Functions: General
null
20063
https://en.wikipedia.org/wiki/MPEG-4
MPEG-4
MPEG-4 is a group of international standards for the compression of digital audio and visual data, multimedia systems, and file storage formats. It was originally introduced in late 1998 as a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) (ISO/IEC JTC 1/SC29/WG11) under the formal standard ISO/IEC 14496 – Coding of audio-visual objects. Uses of MPEG-4 include compression of audiovisual data for Internet video and CD distribution, voice (telephone, videophone) and broadcast television applications. The MPEG-4 standard was developed by a group led by Touradj Ebrahimi (later the JPEG president) and Fernando Pereira. Background MPEG-4 absorbs many of the features of MPEG-1 and MPEG-2 and other related standards, adding new features such as (extended) VRML support for 3D rendering, object-oriented composite files (including audio, video and VRML objects), support for externally specified digital rights management and various types of interactivity. AAC (Advanced Audio Coding) was standardized as an adjunct to MPEG-2 (as Part 7) before MPEG-4 was issued. MPEG-4 is still an evolving standard and is divided into a number of parts. Companies promoting MPEG-4 compatibility do not always clearly state which "part" level compatibility they are referring to. The key parts to be aware of are MPEG-4 Part 2 (including Advanced Simple Profile, used by codecs such as DivX, Xvid, Nero Digital, RealMedia, 3ivx, H.263 and by QuickTime 6) and MPEG-4 Part 10 (MPEG-4 AVC/H.264 or Advanced Video Coding, used by the x264 encoder, Nero Digital AVC, QuickTime 7, Flash Video, and high-definition video media like Blu-ray Disc). Most of the features included in MPEG-4 are left to individual developers to decide whether or not to implement. This means that there are probably no complete implementations of the entire MPEG-4 set of standards. To deal with this, the standard includes the concept of "profiles" and "levels", allowing a specific set of capabilities to be defined in a manner appropriate for a subset of applications. Initially, MPEG-4 was aimed primarily at low-bit-rate video communications; however, its scope as a multimedia coding standard was later expanded. MPEG-4 is efficient across a variety of bit rates ranging from a few kilobits per second to tens of megabits per second. MPEG-4 provides the following functions: Improved coding efficiency over MPEG-2 Ability to encode mixed media data (video, audio, speech) Error resilience to enable robust transmission Ability to interact with the audio-visual scene generated at the receiver Overview MPEG-4 provides a series of technologies for developers, for various service-providers and for end users: MPEG-4 enables different software and hardware developers to create multimedia objects possessing better abilities of adaptability and flexibility to improve the quality of such services and technologies as digital television, animation graphics, the World Wide Web and their extensions. Data network providers can use MPEG-4 for data transparency. With the help of standard procedures, MPEG-4 data can be interpreted and transformed into other signal types compatible with any available network. The MPEG-4 format provides end users with a wide range of interaction with various animated objects. Standardized digital rights management signaling, otherwise known in the MPEG community as Intellectual Property Management and Protection (IPMP).
The MPEG-4 format can perform various functions, among which might be the following: Multiplexes and synchronizes data, associated with media objects, in such a way that they can be efficiently transported further via network channels. Interaction with the audio-visual scene, which is formed on the side of the receiver. Profiles and Levels MPEG-4 provides a large and rich set of tools for encoding. Subsets of the MPEG-4 tool sets have been provided for use in specific applications. These subsets, called 'Profiles', limit the size of the tool set a decoder is required to implement. In order to restrict computational complexity, one or more 'Levels' are set for each Profile. A Profile and Level combination allows: A codec builder to implement only the subset of the standard needed, while maintaining interworking with other MPEG-4 devices that implement the same combination. Checking whether MPEG-4 devices comply with the standard, referred to as conformance testing. MPEG-4 Parts MPEG-4 consists of several standards—termed "parts"—each covering a certain aspect of the whole specification. Profiles are also defined within the individual "parts", so an implementation of a part is ordinarily an implementation of one or more of its profiles rather than of the entire part. MPEG-1, MPEG-2, MPEG-7 and MPEG-21 are other suites of MPEG standards. Licensing MPEG-4 contains patented technologies, the use of which requires licensing in countries that acknowledge software algorithm patents. Over two dozen companies claim to have patents covering MPEG-4. MPEG LA licenses patents required for MPEG-4 Part 2 Visual from a wide range of companies (audio is licensed separately) and lists all of its licensors and licensees on the site. New licenses for MPEG-4 System patents are under development and no new licenses are being offered while holders of its old MPEG-4 Systems license are still covered under the terms of that license for the patents listed. The majority of patents used for the MPEG-4 Visual format are held by three Japanese companies: Mitsubishi Electric (255 patents), Hitachi (206 patents), and Panasonic (200 patents).
Technology
File formats
null
20087
https://en.wikipedia.org/wiki/Modular%20arithmetic
Modular arithmetic
In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801. A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Simple addition would result in 7 + 8 = 15, but 15:00 reads as 3:00 on the clock face because clocks "wrap around" every 12 hours and the hour number starts again at zero when it reaches 12. We say that 15 is congruent to 3 modulo 12, written 15 ≡ 3 (mod 12), so that 7 + 8 ≡ 3 (mod 12). Similarly, 8:00 represents a period of 8 hours, and twice this would give 16:00, which reads as 4:00 on the clock face, written as 2 × 8 ≡ 4 (mod 12). Congruence Given an integer m ≥ 1, called a modulus, two integers a and b are said to be congruent modulo m, if m is a divisor of their difference; that is, if there is an integer k such that a − b = km. Congruence modulo m is a congruence relation, meaning that it is an equivalence relation that is compatible with the operations of addition, subtraction, and multiplication. Congruence modulo m is denoted a ≡ b (mod m). The parentheses mean that (mod m) applies to the entire equation, not just to the right-hand side (here, b). This notation is not to be confused with the notation b mod m (without parentheses), which refers to the modulo operation, the remainder of b when divided by m: that is, b mod m denotes the unique integer r such that 0 ≤ r < m and r ≡ b (mod m). The congruence relation may be rewritten as a = km + b, explicitly showing its relationship with Euclidean division. However, the b here need not be the remainder in the division of a by m. Rather, a ≡ b (mod m) asserts that a and b have the same remainder when divided by m. That is, a = pm + r, b = qm + r, where 0 ≤ r < m is the common remainder. We recover the previous relation (a − b = km) by subtracting these two expressions and setting k = p − q. Because the congruence modulo m is defined by the divisibility by m and because −1 is a unit in the ring of integers, a number is divisible by −m exactly if it is divisible by m. This means that every non-zero integer m may be taken as modulus. Examples In modulus 12, one can assert that: 38 ≡ 14 (mod 12) because the difference 38 − 14 = 24 is a multiple of 12. Equivalently, 38 and 14 have the same remainder 2 when divided by 12. The definition of congruence also applies to negative values. For example: 2 ≡ −3 (mod 5), −8 ≡ 7 (mod 5), and −3 ≡ −8 (mod 5). Basic properties The congruence relation satisfies all the conditions of an equivalence relation: Reflexivity: a ≡ a (mod m) Symmetry: a ≡ b (mod m) if b ≡ a (mod m). Transitivity: If a ≡ b (mod m) and b ≡ c (mod m), then a ≡ c (mod m) If a ≡ b (mod m) and c ≡ d (mod m), or if a ≡ b (mod m), then: a + k ≡ b + k (mod m) for any integer k (compatibility with translation) ka ≡ kb (mod m) for any integer k (compatibility with scaling) a + c ≡ b + d (mod m) (compatibility with addition) a − c ≡ b − d (mod m) (compatibility with subtraction) ac ≡ bd (mod m) (compatibility with multiplication) a^k ≡ b^k (mod m) for any non-negative integer k (compatibility with exponentiation) p(a) ≡ p(b) (mod m), for any polynomial p(x) with integer coefficients (compatibility with polynomial evaluation) If a ≡ b (mod m), then it is generally false that k^a ≡ k^b (mod m). However, the following is true: If c ≡ d (mod φ(m)), where φ is Euler's totient function, then a^c ≡ a^d (mod m)—provided that a is coprime with m. For cancellation of common terms, we have the following rules: If a + k ≡ b + k (mod m), where k is any integer, then a ≡ b (mod m). If ka ≡ kb (mod m) and k is coprime with m, then a ≡ b (mod m). If ka ≡ kb (mod km) and k ≠ 0, then a ≡ b (mod m). The last rule can be used to move modular arithmetic into division. If b divides a, then (a/b) mod m = (a mod bm)/b. The modular multiplicative inverse is defined by the following rules: Existence: There exists an integer denoted a^(−1) such that a a^(−1) ≡ 1 (mod m) if and only if a is coprime with m. This integer a^(−1) is called a modular multiplicative inverse of a modulo m.
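The existence statement above is constructive: the extended Euclidean algorithm produces the inverse whenever it exists. A minimal Python sketch (illustrative only; the function names are ours):

```python
# Illustrative sketch: modular inverse via the extended Euclidean algorithm,
# which finds x, y with a*x + m*y = gcd(a, m); when gcd(a, m) = 1,
# x mod m is the inverse.

def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {m}: gcd = {g}")
    return x % m

print(mod_inverse(3, 11))           # 4, since 3 * 4 = 12 = 1 (mod 11)
print(7 * mod_inverse(7, 12) % 12)  # 1
```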
If a ≡ b (mod m) and a^(−1) exists, then a^(−1) ≡ b^(−1) (mod m) (compatibility with multiplicative inverse, and, if a = b, uniqueness modulo m). If ax ≡ b (mod m) and a is coprime to m, then the solution to this linear congruence is given by x ≡ a^(−1)b (mod m). The multiplicative inverse may be efficiently computed by solving Bézout's equation ax + my = 1 for x, y, by using the Extended Euclidean algorithm. In particular, if p is a prime number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse exists for all a that is not congruent to zero modulo p. Advanced properties Some of the more advanced properties of congruence relations are the following: Fermat's little theorem: If p is prime and does not divide a, then a^(p−1) ≡ 1 (mod p). Euler's theorem: If a and m are coprime, then a^φ(m) ≡ 1 (mod m), where φ is Euler's totient function. A simple consequence of Fermat's little theorem is that if p is prime, then a^(p−2) is the multiplicative inverse of a when 0 < a < p. More generally, from Euler's theorem, if a and m are coprime, then a^(φ(m)−1) is the inverse of a modulo m. Hence, if ax ≡ b (mod m) with a coprime to m, then x ≡ a^(φ(m)−1)b (mod m). Another simple consequence is that if a ≡ b (mod φ(m)), where φ is Euler's totient function, then k^a ≡ k^b (mod m) provided k is coprime with m. Wilson's theorem: p is prime if and only if (p − 1)! ≡ −1 (mod p). Chinese remainder theorem: For any a, b and coprime m, n, there exists a unique x (mod mn) such that x ≡ a (mod m) and x ≡ b (mod n). In fact, x ≡ b m_n^(−1) m + a n_m^(−1) n (mod mn), where m_n^(−1) is the inverse of m modulo n and n_m^(−1) is the inverse of n modulo m (a short computational sketch of this formula appears after this passage). Lagrange's theorem: If p is prime and f(x) is a polynomial with integer coefficients such that p is not a divisor of its leading coefficient, then the congruence f(x) ≡ 0 (mod p) has at most deg f non-congruent solutions. Primitive root modulo m: A number g is a primitive root modulo m if, for every integer a coprime to m, there is an integer k such that g^k ≡ a (mod m). A primitive root modulo m exists if and only if m is equal to 1, 2, 4, p^k or 2p^k, where p is an odd prime number and k is a positive integer. If a primitive root modulo m exists, then there are exactly φ(φ(m)) such primitive roots, where φ is the Euler's totient function. Quadratic residue: An integer a is a quadratic residue modulo m, if there exists an integer x such that x² ≡ a (mod m). Euler's criterion asserts that, if p is an odd prime, and a is not a multiple of p, then a is a quadratic residue modulo p if and only if a^((p−1)/2) ≡ 1 (mod p). Congruence classes The congruence relation is an equivalence relation. The equivalence class modulo m of an integer a is the set of all integers of the form a + km, where k is any integer. It is called the congruence class or residue class of a modulo m, and may be denoted as [a] or [a]_m when the modulus m is known from the context. Each residue class modulo m contains exactly one integer in the range 0, ..., m − 1. Thus, these m integers are representatives of their respective residue classes. It is generally easier to work with integers than sets of integers; that is, the representatives are most often considered, rather than their residue classes. Consequently, a mod m denotes generally the unique integer r such that 0 ≤ r < m and r ≡ a (mod m); it is called the residue of a modulo m. In particular, a mod m = b mod m is equivalent to a ≡ b (mod m), and this explains why "=" is often used instead of "≡" in this context. Residue systems Each residue class modulo m may be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class (since this is the proper remainder which results from division). Any two members of different residue classes modulo m are incongruent modulo m. Furthermore, every integer belongs to one and only one residue class modulo m. The set of integers {0, 1, 2, ..., m − 1} is called the least residue system modulo m. Any set of m integers, no two of which are congruent modulo m, is called a complete residue system modulo m.
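The explicit Chinese-remainder formula quoted above translates directly into code. A short Python sketch (illustrative only; pow(x, -1, n) is Python 3.8+'s built-in modular inverse):

```python
# Illustrative sketch of the two-modulus Chinese remainder theorem, using
# the explicit formula x = b*inv(m, n)*m + a*inv(n, m)*n reduced mod m*n.

def crt(a, m, b, n):
    """Return the unique x mod m*n with x = a (mod m) and x = b (mod n),
    assuming gcd(m, n) = 1."""
    x = b * pow(m, -1, n) * m + a * pow(n, -1, m) * n
    return x % (m * n)

x = crt(2, 3, 3, 5)   # x = 2 (mod 3) and x = 3 (mod 5)
print(x)              # 8
assert x % 3 == 2 and x % 5 == 3
```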
The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely one representative of each residue class modulo m. For example, the least residue system modulo 4 is {0, 1, 2, 3}. Some other complete residue systems modulo 4 include: {1, 2, 3, 4}, {13, 14, 15, 16}, {−2, −1, 0, 1}, {−13, 4, 17, 18}, {−5, 0, 6, 21}, and {27, 32, 37, 42}. Some sets that are not complete residue systems modulo 4 are: {−5, 0, 6, 22}, since 6 is congruent to 22 modulo 4. {5, 15}, since a complete residue system modulo 4 must have exactly 4 incongruent residue classes. Reduced residue systems Given the Euler's totient function φ(m), any set of φ(m) integers that are relatively prime to m and mutually incongruent under modulus m is called a reduced residue system modulo m. The set {5, 15} from above, for example, is an instance of a reduced residue system modulo 4. Covering systems Covering systems represent yet another type of residue system that may contain residues with varying moduli. Integers modulo m Remark: In the context of this paragraph, the modulus m is almost always taken as positive. The set of all congruence classes modulo m is called the ring of integers modulo m, and is denoted Z/mZ, Z/m, or Z_m. The notation Z_m is, however, not recommended because it can be confused with the set of m-adic integers. The ring Z/mZ is fundamental to various branches of mathematics (see below). For m > 0 one has Z/mZ = {[a] : a ∈ Z} = {[0], [1], ..., [m − 1]}. When m = 1, Z/mZ is the zero ring; when m = 0, Z/mZ is not an empty set; rather, it is isomorphic to Z, since [a]_0 = {a}. Addition, subtraction, and multiplication are defined on Z/mZ by the following rules: [a] + [b] = [a + b], [a] − [b] = [a − b], and [a][b] = [ab]. The properties given before imply that, with these operations, Z/mZ is a commutative ring. For example, in the ring Z/24Z, one has [12] + [21] = [9], as in the arithmetic for the 24-hour clock. The notation Z/mZ is used because this ring is the quotient ring of Z by the ideal mZ, the set formed by all km with k an integer. Considered as a group under addition, Z/mZ is a cyclic group, and all cyclic groups are isomorphic with Z/mZ for some m. The ring of integers modulo m is a field if and only if m is prime (this ensures that every nonzero element has a multiplicative inverse). If m = p^k is a prime power with k > 1, there exists a unique (up to isomorphism) finite field GF(m) with m elements, which is not isomorphic to Z/mZ, which fails to be a field because it has zero-divisors. If m > 1, (Z/mZ)^× denotes the multiplicative group of the integers modulo m that are invertible. It consists of the congruence classes [a], where a is coprime to m; these are precisely the classes possessing a multiplicative inverse. They form an abelian group under multiplication; its order is φ(m), where φ is Euler's totient function. Applications In pure mathematics, modular arithmetic is one of the foundations of number theory, touching on almost every aspect of its study, and it is also used extensively in group theory, ring theory, knot theory, and abstract algebra. In applied mathematics, it is used in computer algebra, cryptography, computer science, chemistry and the visual and musical arts. A very practical application is to calculate checksums within serial number identifiers. For example, International Standard Book Number (ISBN) uses modulo 11 (for 10-digit ISBN) or modulo 10 (for 13-digit ISBN) arithmetic for error detection. Likewise, International Bank Account Numbers (IBANs) make use of modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of the CAS registry number (a unique identifying number for each chemical compound) is a check digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10.
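The CAS check-digit rule just described can be stated in a few lines of code. A small Python sketch (illustrative only; cas_check_digit is our own name), using water's CAS number 7732-18-5 as a worked example:

```python
# Illustrative sketch of the CAS registry number check digit: working right
# to left through the digits of the first two parts, multiply by 1, 2, 3, ...
# and take the sum modulo 10.

def cas_check_digit(cas):
    """Compute the check digit for a CAS number given as 'NNNNNNN-NN-D'."""
    part1, part2, _ = cas.split("-")
    digits = (part1 + part2)[::-1]  # rightmost digit first
    total = sum((i + 1) * int(d) for i, d in enumerate(digits))
    return total % 10

print(cas_check_digit("7732-18-5"))  # 5: (8*1 + 1*2 + 2*3 + 3*4 + 7*5 + 7*6) % 10
```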
In cryptography, modular arithmetic directly underpins public key systems such as RSA and Diffie–Hellman, provides the finite fields which underlie elliptic curves, and is used in a variety of symmetric key algorithms including Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and RC4. RSA and Diffie–Hellman use modular exponentiation. In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used in polynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations of polynomial greatest common divisor, exact linear algebra and Gröbner basis algorithms over the integers and the rational numbers. As posted on Fidonet in the 1980s and archived at Rosetta Code, modular arithmetic was used to disprove Euler's sum of powers conjecture on a Sinclair QL microcomputer using just one-fourth of the integer precision used by a CDC 6600 supercomputer to disprove it two decades earlier via a brute force search. In computer science, modular arithmetic is often applied in bitwise operations and other operations involving fixed-width, cyclic data structures. The modulo operation, as implemented in many programming languages and calculators, is an application of modular arithmetic that is often used in this context. The logical operator XOR sums 2 bits, modulo 2. The use of long division to turn a fraction into a repeating decimal in any base b is equivalent to modular multiplication of b modulo the denominator (see the sketch below). For example, for decimal, b = 10. In music, arithmetic modulo 12 is used in the consideration of the system of twelve-tone equal temperament, where octave and enharmonic equivalency occurs (that is, pitches in a 1:2 or 2:1 ratio are equivalent, and C-sharp is considered the same as D-flat). The method of casting out nines offers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9). Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular, Zeller's congruence and the Doomsday algorithm make heavy use of modulo-7 arithmetic. More generally, modular arithmetic also has application in disciplines such as law (e.g., apportionment), economics (e.g., game theory) and other areas of the social sciences, where proportional division and allocation of resources plays a central part of the analysis. Computational complexity Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved in polynomial time with a form of Gaussian elimination; for details, see the linear congruence theorem. Algorithms, such as Montgomery reduction, also exist to allow simple arithmetic operations, such as multiplication and exponentiation modulo m, to be performed efficiently on large numbers. Some operations, like finding a discrete logarithm or solving a quadratic congruence, appear to be as hard as integer factorization and thus are a starting point for cryptographic algorithms and encryption. These problems might be NP-intermediate. Solving a system of non-linear modular arithmetic equations is NP-complete.
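Here is the repeating-decimal sketch referred to above (illustrative only): the base-b digits of 1/n come from repeatedly multiplying the running remainder by b modulo n, and the expansion repeats exactly when a remainder recurs.

```python
# Illustrative sketch: long division as modular multiplication by the base.

def repeating_expansion(n, b=10):
    """Digits of 1/n in base b, stopping when the remainder cycle closes."""
    digits, seen, r = [], {}, 1
    while r and r not in seen:
        seen[r] = len(digits)
        digits.append((r * b) // n)
        r = (r * b) % n            # modular multiplication by the base
    period = len(digits) - seen[r] if r else 0
    return digits, period

digits, period = repeating_expansion(7)
print(digits, period)   # [1, 4, 2, 8, 5, 7] 6 -> 1/7 = 0.142857142857...
print(repeating_expansion(4))  # ([2, 5], 0): 1/4 = 0.25 terminates
```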
Mathematics
Basics
null
20097
https://en.wikipedia.org/wiki/Microwave
Microwave
Microwave is a form of electromagnetic radiation with wavelengths shorter than other radio waves but longer than infrared waves. Its wavelength ranges from about one meter to one millimeter, corresponding to frequencies between 300 MHz and 300 GHz, broadly construed. A more common definition in radio-frequency engineering is the range between 1 and 100 GHz (wavelengths between 30 cm and 3 mm), or between 1 and 3000 GHz (30 cm and 0.1 mm). The prefix in microwave is not meant to suggest a wavelength in the micrometer range; rather, it indicates that microwaves are small (having shorter wavelengths), compared to the radio waves used in prior radio technology. The boundaries between far infrared, terahertz radiation, microwaves, and ultra-high-frequency (UHF) are fairly arbitrary and are used variously between different fields of study. In all cases, microwaves include the entire super high frequency (SHF) band (3 to 30 GHz, or 10 to 1 cm) at minimum. A broader definition includes UHF and extremely high frequency (EHF) (millimeter wave; 30 to 300 GHz) bands as well. Frequencies in the microwave range are often referred to by their IEEE radar band designations: S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations. Microwaves travel by line-of-sight; unlike lower frequency radio waves, they do not diffract around hills, follow the earth's surface as ground waves, or reflect from the ionosphere, so terrestrial microwave communication links are limited by the visual horizon to about 40 miles (64 km). At the high end of the band, they are absorbed by gases in the atmosphere, limiting practical communication distances to around a kilometer. Microwaves are widely used in modern technology, for example in point-to-point communication links, wireless networks, microwave radio relay networks, radar, satellite and spacecraft communication, medical diathermy and cancer treatment, remote sensing, radio astronomy, particle accelerators, spectroscopy, industrial heating, collision avoidance systems, garage door openers and keyless entry systems, and for cooking food in microwave ovens. Electromagnetic spectrum Microwaves occupy a place in the electromagnetic spectrum with frequency above ordinary radio waves, and below infrared light: In descriptions of the electromagnetic spectrum, some sources classify microwaves as radio waves, a subset of the radio wave band, while others classify microwaves and radio waves as distinct types of radiation. This is an arbitrary distinction. Frequency bands Bands of frequencies in the microwave spectrum are designated by letters. Unfortunately, there are several incompatible band designation systems, and even within a system the frequency ranges corresponding to some of the letters vary somewhat between different application fields. The letter system had its origin in World War II in a top-secret U.S. classification of bands used in radar sets; this is the origin of the oldest letter system, the IEEE radar bands. One set of microwave frequency band designations by the Radio Society of Great Britain (RSGB) is tabulated below: Other definitions exist. The term P band is sometimes used for UHF frequencies below the L band but is now obsolete per IEEE Std 521. When radars were first developed at K band during World War II, it was not known that there was a nearby absorption band (due to water vapor and oxygen in the atmosphere). To avoid this problem, the original K band was split into a lower band, Ku, and upper band, Ka.
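All of the frequency–wavelength correspondences quoted in this section follow from λ = c/f. A tiny Python sketch (illustrative only):

```python
# Illustrative sketch: converting frequency to wavelength with lambda = c / f.

C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz):
    return C / freq_hz

print(wavelength_m(300e6))   # ~1.0 m (300 MHz, lower microwave edge)
print(wavelength_m(300e9))   # ~0.001 m = 1 mm (300 GHz, upper edge)
print(wavelength_m(2.45e9))  # ~0.122 m, the common microwave-oven frequency
```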
Propagation Microwaves travel solely by line-of-sight paths; unlike lower frequency radio waves, they do not travel as ground waves which follow the contour of the Earth, or reflect off the ionosphere (skywaves). Although at the low end of the band they can pass through building walls enough for useful reception, usually rights of way cleared to the first Fresnel zone are required. Therefore, on the surface of the Earth, microwave communication links are limited by the visual horizon to about 40 miles (64 km). Microwaves are absorbed by moisture in the atmosphere, and the attenuation increases with frequency, becoming a significant factor (rain fade) at the high end of the band. Beginning at about 40 GHz, atmospheric gases also begin to absorb microwaves, so above this frequency microwave transmission is limited to a few kilometers. A spectral band structure causes absorption peaks at specific frequencies. Above 100 GHz, the absorption of electromagnetic radiation by Earth's atmosphere is so effective that it is in effect opaque, until the atmosphere becomes transparent again in the so-called infrared and optical window frequency ranges. Troposcatter In a microwave beam directed at an angle into the sky, a small amount of the power will be randomly scattered as the beam passes through the troposphere. A sensitive receiver beyond the horizon with a high gain antenna focused on that area of the troposphere can pick up the signal. This technique has been used at frequencies between 0.45 and 5 GHz in tropospheric scatter (troposcatter) communication systems to communicate beyond the horizon, at distances up to 300 km. Antennas The short wavelengths of microwaves allow omnidirectional antennas for portable devices to be made very small, from 1 to 20 centimeters long, so microwave frequencies are widely used for wireless devices such as cell phones, cordless phones, and wireless LAN (Wi-Fi) access for laptops, and Bluetooth earphones. Antennas used include short whip antennas, rubber ducky antennas, sleeve dipoles, patch antennas, and increasingly the printed circuit inverted F antenna (PIFA) used in cell phones. Their short wavelength also allows narrow beams of microwaves to be produced by conveniently small high gain antennas from a half meter to 5 meters in diameter. Therefore, beams of microwaves are used for point-to-point communication links, and for radar. An advantage of narrow beams is that they do not interfere with nearby equipment using the same frequency, allowing frequency reuse by nearby transmitters. Parabolic ("dish") antennas are the most widely used directive antennas at microwave frequencies, but horn antennas, slot antennas and lens antennas are also used. Flat microstrip antennas are being increasingly used in consumer devices. Another directive antenna practical at microwave frequencies is the phased array, a computer-controlled array of antennas that produces a beam that can be electronically steered in different directions. At microwave frequencies, the transmission lines which are used to carry lower frequency radio waves to and from antennas, such as coaxial cable and parallel wire lines, have excessive power losses, so when low attenuation is required, microwaves are carried by metal pipes called waveguides. Due to the high cost and maintenance requirements of waveguide runs, in many microwave antennas the output stage of the transmitter or the RF front end of the receiver is located at the antenna.
Design and analysis The term microwave also has a more technical meaning in electromagnetics and circuit theory. Apparatus and techniques may be described qualitatively as "microwave" when the wavelengths of signals are roughly the same as the dimensions of the circuit, so that lumped-element circuit theory is inaccurate, and instead distributed circuit elements and transmission-line theory are more useful methods for design and analysis. As a consequence, practical microwave circuits tend to move away from the discrete resistors, capacitors, and inductors used with lower-frequency radio waves. Open-wire and coaxial transmission lines used at lower frequencies are replaced by waveguides and stripline, and lumped-element tuned circuits are replaced by cavity resonators or resonant stubs. In turn, at even higher frequencies, where the wavelength of the electromagnetic waves becomes small in comparison to the size of the structures used to process them, microwave techniques become inadequate, and the methods of optics are used. Sources High-power microwave sources use specialized vacuum tubes to generate microwaves. These devices operate on different principles from low-frequency vacuum tubes, using the ballistic motion of electrons in a vacuum under the influence of controlling electric or magnetic fields, and include the magnetron (used in microwave ovens), klystron, traveling-wave tube (TWT), and gyrotron. These devices work in the density modulated mode, rather than the current modulated mode. This means that they work on the basis of clumps of electrons flying ballistically through them, rather than using a continuous stream of electrons. Low-power microwave sources use solid-state devices such as the field-effect transistor (at least at lower frequencies), tunnel diodes, Gunn diodes, and IMPATT diodes. Low-power sources are available as benchtop instruments, rackmount instruments, embeddable modules and in card-level formats. A maser is a solid-state device that amplifies microwaves using similar principles to the laser, which amplifies higher-frequency light waves. All warm objects emit low level microwave black-body radiation, depending on their temperature, so in meteorology and remote sensing, microwave radiometers are used to measure the temperature of objects or terrain. The sun and other astronomical radio sources such as Cassiopeia A emit low level microwave radiation which carries information about their makeup, which is studied by radio astronomers using receivers called radio telescopes. The cosmic microwave background radiation (CMBR), for example, is a weak microwave noise filling empty space which is a major source of information on cosmology's Big Bang theory of the origin of the Universe. Applications Microwave technology is extensively used for point-to-point telecommunications (i.e., non-broadcast uses). Microwaves are especially suitable for this use since they are more easily focused into narrower beams than radio waves, allowing frequency reuse; their comparatively higher frequencies allow broad bandwidth and high data transmission rates, and antenna sizes are smaller than at lower frequencies because antenna size is inversely proportional to the transmitted frequency. Microwaves are used in spacecraft communication, and much of the world's data, TV, and telephone communications are transmitted long distances by microwaves between ground stations and communications satellites. Microwaves are also employed in microwave ovens and in radar technology. 
Communication Before the advent of fiber-optic transmission, most long-distance telephone calls were carried via networks of microwave radio relay links run by carriers such as AT&T Long Lines. Starting in the early 1950s, frequency-division multiplexing was used to send up to 5,400 telephone channels on each microwave radio channel, with as many as ten radio channels combined into one antenna for the hop to the next site, up to 70 km away. Wireless LAN protocols, such as Bluetooth and the IEEE 802.11 specifications used for Wi-Fi, also use microwaves in the 2.4 GHz ISM band, although 802.11a uses ISM band and U-NII frequencies in the 5 GHz range. Licensed long-range (up to about 25 km) Wireless Internet Access services have been used for almost a decade in many countries in the 3.5–4.0 GHz range. The FCC recently carved out spectrum for carriers that wish to offer services in this range in the U.S. — with emphasis on 3.65 GHz. Dozens of service providers across the country are securing or have already received licenses from the FCC to operate in this band. The WIMAX service offerings that can be carried on the 3.65 GHz band will give business customers another option for connectivity. Metropolitan area network (MAN) protocols, such as WiMAX (Worldwide Interoperability for Microwave Access) are based on standards such as IEEE 802.16, designed to operate between 2 and 11 GHz. Commercial implementations are in the 2.3 GHz, 2.5 GHz, 3.5 GHz and 5.8 GHz ranges. Mobile Broadband Wireless Access (MBWA) protocols based on standards specifications such as IEEE 802.20 or ATIS/ANSI HC-SDMA (such as iBurst) operate between 1.6 and 2.3 GHz to give mobility and in-building penetration characteristics similar to mobile phones but with vastly greater spectral efficiency. Some mobile phone networks, like GSM, use the low-microwave/high-UHF frequencies around 1.9 GHz in the Americas and 1.8 GHz elsewhere. DVB-SH and S-DMB use 1.452 to 1.492 GHz, while proprietary/incompatible satellite radio in the U.S. uses around 2.3 GHz for DARS. Microwave radio is used in point-to-point telecommunications transmissions because, due to their short wavelength, highly directional antennas are smaller and therefore more practical than they would be at longer wavelengths (lower frequencies). There is also more bandwidth in the microwave spectrum than in the rest of the radio spectrum; the usable bandwidth below 300 MHz is less than 300 MHz while many GHz can be used above 300 MHz. Typically, microwaves are used in remote broadcasting of news or sports events as the backhaul link to transmit a signal from a remote location to a television station from a specially equipped van. See broadcast auxiliary service (BAS), remote pickup unit (RPU), and studio/transmitter link (STL). Most satellite communications systems operate in the C, X, Ka, or Ku bands of the microwave spectrum. These frequencies allow large bandwidth while avoiding the crowded UHF frequencies and staying below the atmospheric absorption of EHF frequencies. Satellite TV either operates in the C band for the traditional large dish fixed satellite service or Ku band for direct-broadcast satellite. Military communications run primarily over X or Ku-band links, with Ka band being used for Milstar. Navigation Global Navigation Satellite Systems (GNSS) including the Chinese Beidou, the American Global Positioning System (introduced in 1978) and the Russian GLONASS broadcast navigational signals in various bands between about 1.2 GHz and 1.6 GHz.
Radar Radar is a radiolocation technique in which a beam of radio waves emitted by a transmitter bounces off an object and returns to a receiver, allowing the location, range, speed, and other characteristics of the object to be determined. The short wavelength of microwaves causes large reflections from objects the size of motor vehicles, ships and aircraft. Also, at these wavelengths, the high gain antennas such as parabolic antennas which are required to produce the narrow beamwidths needed to accurately locate objects are conveniently small, allowing them to be rapidly turned to scan for objects. Therefore, microwave frequencies are the main frequencies used in radar. Microwave radar is widely used for applications such as air traffic control, weather forecasting, navigation of ships, and speed limit enforcement. Long-distance radars use the lower microwave frequencies since at the upper end of the band atmospheric absorption limits the range, but millimeter waves are used for short-range radar such as collision avoidance systems. Radio astronomy Microwaves emitted by astronomical radio sources such as planets, stars, galaxies, and nebulas are studied in radio astronomy with large dish antennas called radio telescopes. In addition to receiving naturally occurring microwave radiation, radio telescopes have been used in active radar experiments to bounce microwaves off planets in the solar system, to determine the distance to the Moon or map the invisible surface of Venus through cloud cover. A recently completed microwave radio telescope is the Atacama Large Millimeter Array, located at more than 5,000 meters (16,597 ft) altitude in Chile, which observes the universe in the millimeter and submillimeter wavelength ranges. The world's largest ground-based astronomy project to date, it consists of 66 dishes and was built in an international collaboration by Europe, North America, East Asia and Chile. A major recent focus of microwave radio astronomy has been mapping the cosmic microwave background radiation (CMBR) discovered in 1964 by radio astronomers Arno Penzias and Robert Wilson. This faint background radiation, which fills the universe and is almost the same in all directions, is "relic radiation" from the Big Bang, and is one of the few sources of information about conditions in the early universe. Due to the expansion and thus cooling of the Universe, the originally high-energy radiation has been shifted into the microwave region of the radio spectrum. Sufficiently sensitive radio telescopes can detect the CMBR as a faint signal that is not associated with any star, galaxy, or other object. Heating and power application A microwave oven passes microwave radiation at a frequency near 2.45 GHz through food, causing dielectric heating primarily by absorption of the energy in water. Microwave ovens became common kitchen appliances in Western countries in the late 1970s, following the development of less expensive cavity magnetrons. Water in the liquid state possesses many molecular interactions that broaden the absorption peak. In the vapor phase, isolated water molecules absorb at around 22 GHz, almost ten times the frequency of the microwave oven. Microwave heating is used in industrial processes for drying and curing products. Many semiconductor processing techniques use microwaves to generate plasma for such purposes as reactive ion etching and plasma-enhanced chemical vapor deposition (PECVD).
Microwaves are used in stellarators and tokamak experimental fusion reactors to help break down the gas into a plasma and heat it to very high temperatures. The frequency is tuned to the cyclotron resonance of the electrons in the magnetic field, anywhere between 2–200 GHz, hence it is often referred to as Electron Cyclotron Resonance Heating (ECRH). The upcoming ITER thermonuclear reactor will use up to 20 MW of 170 GHz microwaves. Microwaves can be used to transmit power over long distances, and post-World War II research was done to examine possibilities. NASA worked in the 1970s and early 1980s to research the possibilities of using solar power satellite (SPS) systems with large solar arrays that would beam power down to the Earth's surface via microwaves. Less-than-lethal weaponry exists that uses millimeter waves to heat a thin layer of human skin to an intolerable temperature so as to make the targeted person move away. A two-second burst of the 95 GHz focused beam heats the skin to a temperature of about 54 °C (129 °F) at a depth of 0.4 millimeters (1/64 in). The United States Air Force and Marines are currently using this type of active denial system in fixed installations. Spectroscopy Microwave radiation is used in electron paramagnetic resonance (EPR or ESR) spectroscopy, typically in the X-band region (~9 GHz), in conjunction with magnetic fields of 0.3 T. This technique provides information on unpaired electrons in chemical systems, such as free radicals or transition metal ions such as Cu(II). Microwave radiation is also used to perform rotational spectroscopy and can be combined with electrochemistry as in microwave enhanced electrochemistry. Frequency measurement Microwave frequency can be measured by either electronic or mechanical techniques. Frequency counters or high frequency heterodyne systems can be used. Here the unknown frequency is compared with harmonics of a known lower frequency by use of a low-frequency generator, a harmonic generator and a mixer. The accuracy of the measurement is limited by the accuracy and stability of the reference source. Mechanical methods require a tunable resonator such as an absorption wavemeter, which has a known relation between a physical dimension and frequency. In a laboratory setting, Lecher lines can be used to directly measure the wavelength on a transmission line made of parallel wires; the frequency can then be calculated. A similar technique is to use a slotted waveguide or slotted coaxial line to directly measure the wavelength. These devices consist of a probe introduced into the line through a longitudinal slot so that the probe is free to travel up and down the line. Slotted lines are primarily intended for measurement of the voltage standing wave ratio on the line. However, provided a standing wave is present, they may also be used to measure the distance between the nodes, which is equal to half the wavelength. The precision of this method is limited by the determination of the nodal locations. Effects on health Microwaves are non-ionizing radiation, which means that microwave photons do not contain sufficient energy to ionize molecules or break chemical bonds, or cause DNA damage, as ionizing radiation such as x-rays or ultraviolet can. The word "radiation" refers to energy radiating from a source and not to radioactivity. The main effect of absorption of microwaves is to heat materials; the electromagnetic fields cause polar molecules to vibrate.
It has not been shown conclusively that microwaves (or other non-ionizing electromagnetic radiation) have significant adverse biological effects at low levels. Some, but not all, studies suggest that long-term exposure may have a carcinogenic effect. During World War II, it was observed that individuals in the radiation path of radar installations experienced clicks and buzzing sounds in response to microwave radiation. Research by NASA in the 1970s showed this to be caused by thermal expansion in parts of the inner ear. In 1955, Dr. James Lovelock was able to reanimate rats chilled to near 0 °C using microwave diathermy. When injury from exposure to microwaves occurs, it usually results from dielectric heating induced in the body. The lens and cornea of the eye are especially vulnerable because they contain no blood vessels that can carry away heat. Exposure to microwave radiation can produce cataracts by this mechanism, because the microwave heating denatures proteins in the crystalline lens of the eye (in the same way that heat turns egg whites white and opaque). Exposure to heavy doses of microwave radiation (as from an oven that has been tampered with to allow operation even with the door open) can produce heat damage in other tissues as well, up to and including serious burns that may not be immediately evident because of the tendency for microwaves to heat deeper tissues with higher moisture content. History Hertzian optics Microwaves were first generated in the 1890s in some of the earliest radio wave experiments by physicists who thought of them as a form of "invisible light". James Clerk Maxwell in his 1873 theory of electromagnetism, now called Maxwell's equations, had predicted that a coupled electric field and magnetic field could travel through space as an electromagnetic wave, and proposed that light consisted of electromagnetic waves of short wavelength. In 1888, German physicist Heinrich Hertz was the first to demonstrate the existence of electromagnetic waves, generating radio waves using a primitive spark gap radio transmitter. Hertz and the other early radio researchers were interested in exploring the similarities between radio waves and light waves, to test Maxwell's theory. They concentrated on producing short wavelength radio waves in the UHF and microwave ranges, with which they could duplicate classic optics experiments in their laboratories, using quasioptical components such as prisms and lenses made of paraffin, sulfur and pitch and wire diffraction gratings, to refract and diffract radio waves like light rays. Hertz produced waves up to 450 MHz; his directional 450 MHz transmitter consisted of a 26 cm brass rod dipole antenna with a spark gap between the ends, suspended at the focal line of a parabolic antenna made of a curved zinc sheet, powered by high voltage pulses from an induction coil. His historic experiments demonstrated that radio waves, like light, exhibited refraction, diffraction, polarization, interference and standing waves, proving that radio waves and light waves were both forms of Maxwell's electromagnetic waves. Beginning in 1894, Indian physicist Jagadish Chandra Bose performed the first experiments with microwaves. He was the first person to produce millimeter waves, generating frequencies up to 60 GHz (5 mm wavelength) using a 3 mm metal ball spark oscillator. Bose also invented waveguide, horn antennas, and semiconductor crystal detectors for use in his experiments.
Independently in 1894, Oliver Lodge and Augusto Righi experimented with 1.5 and 12 GHz microwaves respectively, generated by small metal ball spark resonators. Russian physicist Pyotr Lebedev in 1895 generated 50 GHz millimeter waves. In 1897 Lord Rayleigh solved the mathematical boundary-value problem of electromagnetic waves propagating through conducting tubes and dielectric rods of arbitrary shape which gave the modes and cutoff frequency of microwaves propagating through a waveguide. However, since microwaves were limited to line-of-sight paths, they could not communicate beyond the visual horizon, and the low power of the spark transmitters then in use limited their practical range to a few miles. The subsequent development of radio communication after 1896 employed lower frequencies, which could travel beyond the horizon as ground waves and by reflecting off the ionosphere as skywaves, and microwave frequencies were not further explored at this time. First microwave communication experiments Practical use of microwave frequencies did not occur until the 1940s and 1950s due to a lack of adequate sources, since the triode vacuum tube (valve) electronic oscillator used in radio transmitters could not produce frequencies above a few hundred megahertz due to excessive electron transit time and interelectrode capacitance. By the 1930s, the first low-power microwave vacuum tubes had been developed using new principles; the Barkhausen–Kurz tube and the split-anode magnetron. These could generate a few watts of power at frequencies up to a few gigahertz and were used in the first experiments in communication with microwaves. In 1931 an Anglo-French consortium headed by Andre C. Clavier demonstrated the first experimental microwave relay link, across the English Channel between Dover, UK and Calais, France. The system transmitted telephony, telegraph and facsimile data over bidirectional 1.7 GHz beams with a power of one-half watt, produced by miniature Barkhausen–Kurz tubes at the focus of metal dishes. A word was needed to distinguish these new shorter wavelengths, which had previously been lumped into the "short wave" band, which meant all waves shorter than 200 meters. The terms quasi-optical waves and ultrashort waves were used briefly but did not catch on. The first usage of the word micro-wave apparently occurred in 1931. Radar development The development of radar, mainly in secrecy, before and during World War II, resulted in the technological advances which made microwaves practical. Microwave wavelengths in the centimeter range were required to give the small radar antennas which were compact enough to fit on aircraft a narrow enough beamwidth to localize enemy aircraft. It was found that conventional transmission lines used to carry radio waves had excessive power losses at microwave frequencies, and George Southworth at Bell Labs and Wilmer Barrow at MIT independently invented waveguide in 1936. Barrow invented the horn antenna in 1938 as a means to efficiently radiate microwaves into or out of a waveguide. In a microwave receiver, a nonlinear component was needed that would act as a detector and mixer at these frequencies, as vacuum tubes had too much capacitance. To fill this need researchers resurrected an obsolete technology, the point contact crystal detector (cat whisker detector) which was used as a demodulator in crystal radios around the turn of the century before vacuum tube receivers. 
The low capacitance of semiconductor junctions allowed them to function at microwave frequencies. The first modern silicon and germanium diodes were developed as microwave detectors in the 1930s, and the principles of semiconductor physics learned during their development led to semiconductor electronics after the war. The first powerful sources of microwaves were invented at the beginning of World War II: the klystron tube by Russell and Sigurd Varian at Stanford University in 1937, and the cavity magnetron tube by John Randall and Harry Boot at Birmingham University, UK in 1940. Ten centimeter (3 GHz) microwave radar powered by the magnetron tube was in use on British warplanes in late 1941 and proved to be a game changer. Britain's 1940 decision to share its microwave technology with its US ally (the Tizard Mission) significantly shortened the war. The MIT Radiation Laboratory, established secretly at Massachusetts Institute of Technology in 1940 to research radar, produced much of the theoretical knowledge necessary to use microwaves. The first microwave relay systems were developed by the Allied military near the end of the war and used for secure battlefield communication networks in the European theater. Post World War II exploitation After World War II, microwaves were rapidly exploited commercially. Due to their high frequency they had a very large information-carrying capacity (bandwidth); a single microwave beam could carry tens of thousands of phone calls. In the 1950s and 60s, transcontinental microwave relay networks were built in the US and Europe to exchange telephone calls between cities and distribute television programs. In the new television broadcasting industry, from the 1940s microwave dishes were used to transmit backhaul video feeds from mobile production trucks back to the studio, allowing the first remote TV broadcasts. The first communications satellites were launched in the 1960s, which relayed telephone calls and television between widely separated points on Earth using microwave beams. In 1964, Arno Penzias and Robert Woodrow Wilson, while investigating noise in a satellite horn antenna at Bell Labs, Holmdel, New Jersey, discovered cosmic microwave background radiation. Microwave radar became the central technology used in air traffic control, maritime navigation, anti-aircraft defense, ballistic missile detection, and later many other uses. Radar and satellite communication motivated the development of modern microwave antennas; the parabolic antenna (the most common type), cassegrain antenna, lens antenna, slot antenna, and phased array. The ability of short waves to quickly heat materials and cook food had been investigated in the 1930s by Ilia E. Mouromtseff at Westinghouse, who at the 1933 Chicago World's Fair demonstrated cooking meals with a 60 MHz radio transmitter. In 1945 Percy Spencer, an engineer working on radar at Raytheon, noticed that microwave radiation from a magnetron oscillator melted a candy bar in his pocket. He investigated cooking with microwaves and invented the microwave oven, consisting of a magnetron feeding microwaves into a closed metal cavity containing food, for which Raytheon filed a patent application on 8 October 1945. Due to their expense, microwave ovens were initially used in institutional kitchens, but by 1986 roughly 25% of households in the U.S. owned one. Microwave heating became widely used as an industrial process in industries such as plastics fabrication, and as a medical therapy to kill cancer cells in microwave hyperthermia.
The traveling wave tube (TWT) developed in 1943 by Rudolf Kompfner and John Pierce provided a high-power tunable source of microwaves up to 50 GHz and became the most widely used microwave tube (besides the ubiquitous magnetron used in microwave ovens). The gyrotron tube family developed in Russia could produce megawatts of power up into millimeter wave frequencies and is used in industrial heating and plasma research, and to power particle accelerators and nuclear fusion reactors. Solid state microwave devices The development of semiconductor electronics in the 1950s led to the first solid state microwave devices, which worked by a new principle: negative resistance (some of the prewar microwave tubes had also used negative resistance). The feedback oscillators and two-port amplifiers which were used at lower frequencies became unstable at microwave frequencies, and negative resistance oscillators and amplifiers based on one-port devices like diodes worked better. The tunnel diode invented in 1957 by Japanese physicist Leo Esaki could produce a few milliwatts of microwave power. Its invention set off a search for better negative resistance semiconductor devices for use as microwave oscillators, resulting in the invention of the IMPATT diode in 1956 by W. T. Read and Ralph L. Johnston and the Gunn diode in 1962 by J. B. Gunn. Diodes are the most widely used microwave sources today. Two low-noise solid state negative resistance microwave amplifiers were developed: the maser, invented in 1953 by Charles H. Townes, James P. Gordon, and H. J. Zeiger, and the varactor parametric amplifier, developed in 1956 by Marion Hines. The parametric amplifier and the ruby maser, invented in 1958 by a team at Bell Labs headed by H. E. D. Scovil, were used for low-noise microwave receivers in radio telescopes and satellite ground stations. The maser led to the development of atomic clocks, which keep time using a precise microwave frequency emitted by atoms undergoing an electron transition between two energy levels. Negative resistance amplifier circuits required the invention of new nonreciprocal waveguide components, such as circulators, isolators, and directional couplers. In 1969 Kaneyuki Kurokawa derived mathematical conditions for stability in negative resistance circuits, which formed the basis of microwave oscillator design. Microwave integrated circuits Prior to the 1970s, microwave devices and circuits were bulky and expensive, so microwave frequencies were generally limited to the output stage of transmitters and the RF front end of receivers, and signals were heterodyned to a lower intermediate frequency for processing. The period from the 1970s to the present has seen the development of tiny inexpensive active solid-state microwave components which can be mounted on circuit boards, allowing circuits to perform significant signal processing at microwave frequencies. This has made possible satellite television, cable television, GPS devices, and modern wireless devices, such as smartphones, Wi-Fi, and Bluetooth, which connect to networks using microwaves. Microstrip, a type of transmission line usable at microwave frequencies, was invented along with printed circuits in the 1950s. The ability to cheaply fabricate a wide range of shapes on printed circuit boards allowed microstrip versions of capacitors, inductors, resonant stubs, splitters, directional couplers, diplexers, filters and antennas to be made, thus allowing compact microwave circuits to be constructed.
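The sense in which a one-port negative-resistance device can amplify is easy to see numerically: the reflection coefficient of an impedance Z in a system with reference impedance Z0 is Γ = (Z − Z0)/(Z + Z0), and its magnitude exceeds 1 whenever the real part of Z is negative, so the reflected wave carries more power than the incident wave. A minimal Python sketch (the impedance values are illustrative, not taken from any particular device):

    # Reflection gain from a one-port negative resistance: a minimal sketch.
    # Gamma = (Z - Z0) / (Z + Z0); |Gamma| > 1 whenever Re(Z) < 0.

    def reflection_coefficient(z: complex, z0: float = 50.0) -> complex:
        """Reflection coefficient of impedance z in a system of impedance z0."""
        return (z - z0) / (z + z0)

    for z in (75 + 10j,     # ordinary (positive) resistance: reflection loss
              -25 + 10j):   # negative resistance (illustrative value)
        gamma = reflection_coefficient(z)
        label = "reflection gain" if abs(gamma) > 1 else "reflection loss"
        print(f"Z = {z}: |Gamma| = {abs(gamma):.2f} ({label})")

For the negative-resistance case the magnitude comes out around 2.8, i.e. the reflected wave is stronger than the incident one, which is exactly the property the one-port oscillators and amplifiers described above exploit.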
Transistors that operated at microwave frequencies were developed in the 1970s. The semiconductor gallium arsenide (GaAs) has a much higher electron mobility than silicon, so devices fabricated with this material can operate at four times the frequency of similar silicon devices. Beginning in the 1970s GaAs was used to make the first microwave transistors, and it has dominated microwave semiconductors ever since. MESFETs (metal-semiconductor field-effect transistors), fast GaAs field-effect transistors using Schottky junctions for the gate, were developed starting in 1968; they have reached cutoff frequencies of 100 GHz and are now the most widely used active microwave devices. Another family of transistors with a higher frequency limit is the HEMT (high electron mobility transistor), a field-effect transistor made with two different semiconductors, AlGaAs and GaAs, using heterojunction technology; the HBT (heterojunction bipolar transistor) is similar. GaAs can be made semi-insulating, allowing it to be used as a substrate on which circuits containing passive components, as well as transistors, can be fabricated by lithography. By 1976 this led to the first integrated circuits (ICs) which functioned at microwave frequencies, called monolithic microwave integrated circuits (MMIC). The word "monolithic" was added to distinguish these from microstrip PCB circuits, which were called "microwave integrated circuits" (MIC). Since then, silicon MMICs have also been developed. Today MMICs have become the workhorses of both analog and digital high-frequency electronics, enabling the production of single-chip microwave receivers, broadband amplifiers, modems, and microprocessors.
Muon
A muon (from the Greek letter mu (μ) used to represent it) is an elementary particle similar to the electron, with an electric charge of −1 e and spin-1/2, but with a much greater mass. It is classified as a lepton. As with other leptons, the muon is not thought to be composed of any simpler particles. The muon is an unstable subatomic particle with a mean lifetime of about 2.2 μs, much longer than that of many other subatomic particles. As with the decay of the free neutron (with a lifetime around 15 minutes), muon decay is slow (by subatomic standards) because the decay is mediated only by the weak interaction (rather than the more powerful strong interaction or electromagnetic interaction), and because the mass difference between the muon and the set of its decay products is small, providing few kinetic degrees of freedom for decay. Muon decay almost always produces at least three particles, which must include an electron of the same charge as the muon and two types of neutrinos. Like all elementary particles, the muon has a corresponding antiparticle of opposite charge (+1 e) but equal mass and spin: the antimuon (also called a positive muon). Muons are denoted by μ⁻ and antimuons by μ⁺. Formerly, muons were called mu mesons, but they are not classified as mesons by modern particle physicists, and that name is no longer used by the physics community. Muons have a mass of 105.66 MeV/c², which is approximately 207 times the electron mass. There is also a third lepton, the tau, approximately 17 times heavier than the muon. Due to their greater mass, muons accelerate more slowly than electrons in electromagnetic fields, and emit less bremsstrahlung (deceleration radiation). This allows muons of a given energy to penetrate far deeper into matter, because the deceleration of electrons and muons is primarily due to energy loss by the bremsstrahlung mechanism. For example, so-called secondary muons, created by cosmic rays hitting the atmosphere, can penetrate the atmosphere and reach Earth's land surface, and even penetrate into deep mines. Because muons have a greater mass and energy than the decay energy of radioactivity, they are not produced by radioactive decay. Nonetheless, they are produced in great amounts in high-energy interactions in normal matter, in certain particle accelerator experiments with hadrons, and in cosmic ray interactions with matter. These interactions usually produce pi mesons initially, which almost always decay to muons. As with the other charged leptons, the muon has an associated muon neutrino, denoted by νμ, which differs from the electron neutrino and participates in different nuclear reactions. History Muons were discovered by Carl D. Anderson and Seth Neddermeyer at Caltech in 1936 while studying cosmic radiation. Anderson noticed particles that curved differently from electrons and other known particles when passed through a magnetic field. They were negatively charged but, for particles of the same velocity, curved less sharply than electrons and more sharply than protons. It was assumed that the magnitude of their negative electric charge was equal to that of the electron, and so to account for the difference in curvature, it was supposed that their mass was greater than an electron's but smaller than a proton's. Thus Anderson initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for "mid-". The existence of the muon was confirmed in 1937 by J. C. Street and E. C. Stevenson's cloud chamber experiment.
A particle with a mass in the meson range had been predicted before the discovery of any mesons, by theorist Hideki Yukawa: It seems natural to modify the theory of Heisenberg and Fermi in the following way. The transition of a heavy particle from neutron state to proton state is not always accompanied by the emission of light particles. The transition is sometimes taken up by another heavy particle. Because of its mass, the mu meson was initially thought to be Yukawa's particle, and some scientists, including Niels Bohr, originally named it the yukon. The fact that the mesotron (i.e. the muon) was not Yukawa's particle was established in 1946 by an experiment conducted by Marcello Conversi, Oreste Piccioni, and Ettore Pancini in Rome. In this experiment, which Luis Walter Alvarez called the "start of modern particle physics" in his 1968 Nobel lecture, they showed that the muons from cosmic rays were decaying without being captured by atomic nuclei, contrary to what was expected of the mediator of the nuclear force postulated by Yukawa. Yukawa's predicted particle, the pi meson, was finally identified in 1947 (again from cosmic ray interactions). With two particles now known with intermediate masses, the more general term meson was adopted to refer to any such particle within the correct mass range between electrons and nucleons. Further, in order to differentiate between the two different types of mesons after the second meson was discovered, the initial mesotron particle was renamed the mu meson (the Greek letter μ [mu] corresponds to m), and the new 1947 meson (Yukawa's particle) was named the pi meson. As more types of mesons were discovered in accelerator experiments later, it was eventually found that the mu meson significantly differed not only from the pi meson (of about the same mass), but also from all other types of mesons. The difference, in part, was that mu mesons did not interact via the nuclear force, as pi mesons did (and were required to do, in Yukawa's theory). Newer mesons also showed evidence of behaving like the pi meson in nuclear interactions, but not like the mu meson. Also, the mu meson's decay products included both a neutrino and an antineutrino, rather than just one or the other, as was observed in the decay of other charged mesons. In the eventual Standard Model of particle physics codified in the 1970s, all mesons other than the mu meson were understood to be hadrons – that is, particles made of quarks – and thus subject to the nuclear force. In the quark model, a meson was no longer defined by mass (for some had been discovered that were very massive – more than nucleons), but instead was defined as a particle composed of exactly two quarks (a quark and an antiquark), unlike the baryons, which are defined as particles composed of three quarks (protons and neutrons were the lightest baryons). Mu mesons, however, had shown themselves to be fundamental particles (leptons) like electrons, with no quark structure. Thus, mu "mesons" were not mesons at all, in the new sense and use of the term meson used with the quark model of particle structure. With this change in definition, the term mu meson was abandoned, and replaced whenever possible with the modern term muon, making the term "mu meson" only a historical footnote.
In the new quark model, other types of mesons sometimes continued to be referred to in shorter terminology (e.g., pion for pi meson), but in the case of the muon, it retained the shorter name and was never again properly referred to by the older "mu meson" terminology. The eventual recognition of the muon as a simple "heavy electron", with no role at all in the nuclear interaction, seemed so incongruous and surprising at the time that Nobel laureate I. I. Rabi famously quipped, "Who ordered that?" In the Rossi–Hall experiment (1941), muons were used to observe the time dilation (or, alternatively, length contraction) predicted by special relativity, for the first time. Muon sources Muons arriving on the Earth's surface are created indirectly as decay products of collisions of cosmic rays with particles of the Earth's atmosphere. When a cosmic ray proton impacts atomic nuclei in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (their preferred decay product) and muon neutrinos. The muons from these high-energy cosmic rays generally continue in about the same direction as the original proton, at a velocity near the speed of light. Although their lifetime without relativistic effects would allow a half-survival distance of only about 456 meters at most (as seen from Earth), the time dilation effect of special relativity (from the viewpoint of the Earth) allows cosmic ray secondary muons to survive the flight to the Earth's surface, since in the Earth frame the muons have a longer half-life due to their velocity. From the viewpoint (inertial frame) of the muon, on the other hand, it is the length contraction effect of special relativity that allows this penetration, since in the muon frame its lifetime is unaffected, but the length contraction causes distances through the atmosphere and Earth to be far shorter than these distances in the Earth rest-frame. Both effects are equally valid ways of explaining the fast muon's unusual survival over distances. Since muons are unusually penetrating of ordinary matter, like neutrinos, they are also detectable deep underground (700 meters at the Soudan 2 detector) and underwater, where they form a major part of the natural background ionizing radiation. Like cosmic rays, as noted, this secondary muon radiation is also directional. The same nuclear reaction described above (i.e. hadron–hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g−2 experiment.
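The survival argument above is easy to check numerically. A rough Python sketch, using the muon's rest lifetime of about 2.197 μs and, as illustrative assumptions, a 15 km production altitude and a 3 GeV muon energy (representative values, not measurements):

    import math

    TAU_MUON = 2.197e-6     # muon mean lifetime at rest, seconds
    C = 2.998e8             # speed of light, m/s
    M_MU_MEV = 105.66       # muon rest mass, MeV/c^2

    # Illustrative (assumed) numbers: production altitude and muon energy.
    altitude_m = 15_000.0
    energy_mev = 3_000.0

    gamma = energy_mev / M_MU_MEV      # Lorentz factor, about 28
    flight_time = altitude_m / C       # Earth-frame flight time (v ~ c)

    naive = math.exp(-flight_time / TAU_MUON)              # no time dilation
    dilated = math.exp(-flight_time / (gamma * TAU_MUON))  # with dilation

    print(f"gamma = {gamma:.1f}")
    print(f"survival without dilation: {naive:.2e}")   # ~1e-10: essentially none
    print(f"survival with dilation:    {dilated:.2f}") # ~0.45: a large fraction arrives

Without time dilation essentially no muons would reach the ground; with it, nearly half of the muons in this example survive the trip, which is why they dominate the cosmic-ray flux at the surface.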
Muon decay Muons are unstable elementary particles and are heavier than electrons and neutrinos but lighter than all other matter particles. They decay via the weak interaction. Because leptonic family numbers are conserved in the absence of an extremely unlikely immediate neutrino oscillation, one of the product neutrinos of muon decay must be a muon-type neutrino and the other an electron-type antineutrino (antimuon decay produces the corresponding antiparticles, as detailed below). Because charge must be conserved, one of the products of muon decay is always an electron of the same charge as the muon (a positron if it is a positive muon). Thus all muons decay to at least an electron and two neutrinos. Sometimes, besides these necessary products, additional particles that have no net charge and spin of zero (e.g., a pair of photons, or an electron–positron pair) are produced. The dominant muon decay mode (sometimes called the Michel decay after Louis Michel) is the simplest possible: the muon decays to an electron, an electron antineutrino, and a muon neutrino. Antimuons, in mirror fashion, most often decay to the corresponding antiparticles: a positron, an electron neutrino, and a muon antineutrino. In formulaic terms, these two decays are $\mu^- \to e^- + \bar{\nu}_e + \nu_\mu$ and $\mu^+ \to e^+ + \nu_e + \bar{\nu}_\mu$. The mean lifetime, $\tau$, of the (positive) muon is $2.1969811(22)\ \mu\text{s}$. The equality of the muon and antimuon lifetimes has been established to better than one part in $10^4$. Prohibited decays Certain neutrino-less decay modes are kinematically allowed but are, for all practical purposes, forbidden in the Standard Model, even given that neutrinos have mass and oscillate. Examples forbidden by lepton flavour conservation are $\mu^- \to e^- + \gamma$ and $\mu^- \to e^- + e^+ + e^-$. Taking into account neutrino mass, a decay like $\mu^- \to e^- + \gamma$ is technically possible in the Standard Model (for example by neutrino oscillation of a virtual muon neutrino into an electron neutrino), but such a decay is extremely unlikely and therefore should be experimentally unobservable: fewer than one in $10^{50}$ muon decays should produce it. Observation of such decay modes would constitute clear evidence for theories beyond the Standard Model. Upper limits for the branching fractions of such decay modes were measured in many experiments spanning more than five decades. The current upper limit for the $\mu^+ \to e^+ + \gamma$ branching fraction, measured 2009–2013 in the MEG experiment, is $4.2 \times 10^{-13}$. Theoretical decay rate The muon decay width that follows from Fermi's golden rule has dimension of energy, and must be proportional to the square of the amplitude, and thus the square of Fermi's coupling constant ($G_F$), with over-all dimension of inverse fourth power of energy. By dimensional analysis, this leads to Sargent's rule of fifth-power dependence on $m_\mu$: $$\Gamma = \frac{G_F^2 m_\mu^5}{192\pi^3},$$ where $x = 2E_e/(m_\mu c^2)$ is the fraction of the maximum energy transmitted to the electron. The decay distributions of the electron in muon decays have been parameterised using the so-called Michel parameters. The values of these four parameters are predicted unambiguously in the Standard Model of particle physics, thus muon decays represent a good test of the spacetime structure of the weak interaction. No deviation from the Standard Model predictions has yet been found. For the decay of the muon, the expected decay distribution for the Standard Model values of the Michel parameters is $$\frac{d^2\Gamma}{dx\,d\cos\theta} \propto x^2\left[(3-2x) + P_\mu\cos\theta\,(2x-1)\right],$$ where $\theta$ is the angle between the muon's polarization vector and the decay-electron momentum vector, and $P_\mu$ is the fraction of muons that are forward-polarized. Integrating this expression over electron energy gives the angular distribution of the daughter electrons: $$\frac{d\Gamma}{d\cos\theta} \propto 1 + \frac{1}{3}P_\mu\cos\theta.$$ The electron energy distribution integrated over the polar angle (valid for $0 < x < 1$) is $$\frac{d\Gamma}{dx} \propto (3-2x)\,x^2.$$ Because the direction the electron is emitted in (a polar vector) is preferentially aligned opposite the muon spin (an axial vector), the decay is an example of non-conservation of parity by the weak interaction. This is essentially the same experimental signature as used by the original demonstration. More generally in the Standard Model, all charged leptons decay via the weak interaction and likewise violate parity symmetry.
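As a numerical check on the fifth-power formula above, the width and lifetime can be computed in natural units (ħ = c = 1). The Python sketch below neglects the small electron-mass phase-space factor and radiative corrections, so it reproduces the measured 2.197 μs lifetime only approximately:

    import math

    G_F = 1.1663787e-5      # Fermi coupling constant, GeV^-2
    M_MU = 0.1056584        # muon mass, GeV
    HBAR_GEV_S = 6.582e-25  # hbar in GeV*s, converts a width to a lifetime

    # Sargent's rule: Gamma = G_F^2 * m_mu^5 / (192 * pi^3)
    # (electron-mass and radiative corrections neglected)
    gamma_width = G_F**2 * M_MU**5 / (192 * math.pi**3)
    lifetime = HBAR_GEV_S / gamma_width

    print(f"width    = {gamma_width:.3e} GeV")
    print(f"lifetime = {lifetime * 1e6:.3f} microseconds")  # ~2.19, vs 2.197 measured

The leading-order estimate lands within about half a percent of the measured value, which is why this formula is the standard way of extracting $G_F$ from the muon lifetime.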
Muonic atoms The muon was the first elementary particle discovered that does not appear in ordinary atoms. Negative muon atoms Negative muons can form muonic atoms (previously called mu-mesic atoms) by replacing an electron in ordinary atoms. Muonic hydrogen atoms are much smaller than typical hydrogen atoms because the much larger mass of the muon gives it a much more localized ground-state wavefunction than is observed for the electron. In multi-electron atoms, when only one of the electrons is replaced by a muon, the size of the atom continues to be determined by the other electrons, and the atomic size is nearly unchanged. Nonetheless, in such cases, the orbital of the muon continues to be smaller and far closer to the nucleus than the atomic orbitals of the electrons. Spectroscopic measurements in muonic hydrogen have been used to produce a precise estimate of the proton radius. The results of these measurements diverged from the then-accepted value, giving rise to the so-called proton radius puzzle. The puzzle was later resolved when new, improved measurements of the proton radius in ordinary (electronic) hydrogen became available. Muonic helium is created by substituting a muon for one of the electrons in helium-4. The muon orbits much closer to the nucleus, so muonic helium can be regarded as an isotope of helium whose nucleus consists of two neutrons, two protons and a muon, with a single electron outside. Chemically, muonic helium, possessing an unpaired valence electron, can bond with other atoms, and behaves more like a hydrogen atom than an inert helium atom. Muonic heavy hydrogen atoms with a negative muon may undergo nuclear fusion in the process of muon-catalyzed fusion, after which the muon may leave the new atom to induce fusion in another hydrogen molecule. This process continues until the negative muon is captured by a helium nucleus, where it remains until it decays. Negative muons bound to conventional atoms can be captured (muon capture) through the weak force by protons in nuclei, in a sort of electron-capture-like process. When this happens, nuclear transmutation results: the proton becomes a neutron and a muon neutrino is emitted. Positive muon atoms A positive muon, when stopped in ordinary matter, cannot be captured by a proton since the two positive charges can only repel. The positive muon is also not attracted to the nucleus of atoms. Instead, it binds a random electron and with this electron forms an exotic atom known as muonium (Mu). In this atom, the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with about one-ninth of the mass of the proton. Because the mass of the electron is much smaller than the mass of both the proton and the muon, the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen. Therefore, this bound muon–electron pair can be treated to a first approximation as a short-lived "atom" that behaves chemically like the isotopes of hydrogen (protium, deuterium and tritium). Both positive and negative muons can be part of a short-lived pi-mu atom consisting of a muon and an oppositely charged pion. These atoms were observed in the 1970s in experiments at Brookhaven National Laboratory and Fermilab.
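The size claims above follow from the Bohr-radius scaling a ∝ 1/μ, where μ is the reduced mass of the orbiting particle and the nucleus. A quick Python sketch with approximate particle masses:

    # Bohr radius scales as 1/(reduced mass). Masses in MeV/c^2 (approximate).
    M_E, M_MU, M_P = 0.511, 105.66, 938.27

    def reduced(m1: float, m2: float) -> float:
        return m1 * m2 / (m1 + m2)

    mu_h = reduced(M_E, M_P)          # electron around a proton (ordinary hydrogen)
    mu_muonic_h = reduced(M_MU, M_P)  # muon around a proton (muonic hydrogen)
    mu_muonium = reduced(M_E, M_MU)   # electron around a positive muon (muonium)

    # Radii relative to ordinary hydrogen (a ~ 1/mu):
    print(f"muonic hydrogen: {mu_h / mu_muonic_h:.4f} of the hydrogen radius")  # ~1/186
    print(f"muonium:         {mu_h / mu_muonium:.4f} of the hydrogen radius")   # ~1.004

Muonic hydrogen comes out roughly 186 times smaller than ordinary hydrogen, while muonium is within about 0.4% of the ordinary hydrogen radius, consistent with its hydrogen-like chemistry.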
Anomalous magnetic dipole moment The anomalous magnetic dipole moment is the difference between the experimentally observed value of the magnetic dipole moment and the theoretical value predicted by the Dirac equation. The measurement and prediction of this value is very important in the precision tests of QED. The E821 experiment at Brookhaven and the Muon g−2 experiment at Fermilab studied the precession of the muon spin in a constant external magnetic field as the muons circulated in a confining storage ring; the Muon g−2 collaboration reported its first result in 2021. The prediction for the value of the muon anomalous magnetic moment includes three parts: aμ(SM) = aμ(QED) + aμ(EW) + aμ(hadronic). The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED. Muon g−2, a new experiment at Fermilab using the E821 magnet, improved the precision of this measurement. In 2020 an international team of 170 physicists calculated the most accurate prediction for the theoretical value of the muon's anomalous magnetic moment. Muon g−2 Muon g−2 is a particle physics experiment at Fermilab to measure the anomalous magnetic dipole moment of a muon to a precision of 0.14 ppm, which is a sensitive test of the Standard Model. It might also provide evidence of the existence of entirely new particles. In 2021, the Muon g−2 experiment presented its first results, yielding a new experimental average that increased the difference between experiment and theory to 4.2 standard deviations. Electric dipole moment The current experimental limit on the muon electric dipole moment, |dμ| < 1.9 × 10⁻¹⁹ e·cm, set by the E821 experiment at Brookhaven, is orders of magnitude above the Standard Model prediction. The observation of a non-zero muon electric dipole moment would provide an additional source of CP violation. An improvement in sensitivity by two orders of magnitude over the Brookhaven limit is expected from the experiments at Fermilab. Muon radiography and tomography Since muons are much more deeply penetrating than X-rays or gamma rays, muon imaging can be used with much thicker material or, with cosmic ray sources, larger objects. One example is commercial muon tomography used to image entire cargo containers to detect shielded nuclear material, as well as explosives or other contraband. The technique of muon transmission radiography based on cosmic ray sources was first used in the 1950s to measure the depth of the overburden of a tunnel in Australia and in the 1960s to search for possible hidden chambers in the Pyramid of Chephren in Giza. In 2017, the discovery of a large void (with a length of 30 metres minimum) in the Great Pyramid of Giza by observation of cosmic-ray muons was reported. In 2003, scientists at Los Alamos National Laboratory developed a new imaging technique: muon scattering tomography. With muon scattering tomography, both incoming and outgoing trajectories for each particle are reconstructed, such as with sealed aluminum drift tubes. Since the development of this technique, several companies have started to use it. In August 2014, Decision Sciences International Corporation announced it had been awarded a contract by Toshiba for use of its muon tracking detectors in reclaiming the Fukushima nuclear complex.
The Fukushima Daiichi Tracker was proposed to make a few months of muon measurements to show the distribution of the reactor cores. In December 2014, TEPCO reported that it would be using two different muon imaging techniques at Fukushima: the "muon scanning method" on Unit 1 (the most badly damaged, where the fuel may have left the reactor vessel) and the "muon scattering method" on Unit 2. The International Research Institute for Nuclear Decommissioning (IRID) in Japan and the High Energy Accelerator Research Organization (KEK) call the method they developed for Unit 1 the "muon permeation method": 1,200 optical fibers for wavelength conversion light up when muons come into contact with them. After a month of data collection, it was hoped that the measurements would reveal the location and amount of fuel debris still inside the reactor. The measurements began in February 2015.
Mammoth
A mammoth is any species of the extinct elephantid genus Mammuthus. They lived from the late Miocene epoch (from around 6.2 million years ago) into the Holocene until about 4,000 years ago, with mammoth species at various times inhabiting Africa, Asia, Europe, and North America. Mammoths are distinguished from living elephants by their (typically large) spirally twisted tusks and, in at least some later species, the development of numerous adaptations to living in cold environments, including a thick layer of fur. Mammoths and Asian elephants are more closely related to each other than they are to African elephants. The oldest mammoth representative, Mammuthus subplanifrons, appeared around 6 million years ago during the late Miocene in what is now southern and eastern Africa. Later in the Pliocene, by about three million years ago, mammoths dispersed into Eurasia, eventually covering most of Eurasia before migrating into North America around 1.5–1.3 million years ago, becoming ancestral to the Columbian mammoth (M. columbi). The woolly mammoth (M. primigenius) evolved about 700,000–400,000 years ago in Siberia, with some surviving on Russia's Wrangel Island in the Arctic Ocean until as recently as 4,000 years ago, still extant during the existence of the earliest civilisations in ancient Egypt and Mesopotamia. Etymology and early observations According to The American Heritage Dictionary, the word "mammoth" likely originates from *mān-oŋt, a word in the Mansi languages of western Siberia meaning "earth horn", in reference to mammoth tusks. Mammoths appear in the folklore of the indigenous people of Siberia, who were impressed by the great size of their remains. In the mythology of the Evenk people, mammoths were responsible for the creation of the world, digging up the land from the ocean floor with their tusks. The Selkup believed that mammoths lived underground and guarded the underworld, while the Nenets and the Mansi (the latter of whom, along with the Khanty, conceived of mammoths as giant birds) believed that mammoths were responsible for the creation of mountains and lakes, while the Yakuts regarded mammoths as water spirits. The word mammoth was first used in Europe during the early 17th century, when referring to maimanto tusks discovered in Siberia, as recorded in the 1618 edition of the Dictionariolum Russico-Anglicum. The earliest scientific research paper on mammoths was by Vasily Tatishchev in 1725. John Bell, who was on the Ob River in 1722, said that mammoth tusks were well known in the area. They were called "mammon's horn" and were often found in washed-out river banks. Bell bought one and presented it to Hans Sloane, who pronounced it an elephant's tooth. In the American colonies around 1725, enslaved Africans digging in the vicinity of the Stono River in South Carolina unearthed molar teeth recognised in modern times to belong to Columbian mammoths, with the remains subsequently examined by the British naturalist Mark Catesby, who visited the site and later published an account of his visit in 1743. While the slave owners were puzzled by the objects and suggested that they originated from the great flood described in the Bible, Catesby noted that the slaves unanimously agreed that the objects were the teeth of elephants similar to those from their African homeland, with which Catesby concurred, marking the first technical identification of any fossil animal in North America.
In 1796, French biologist Georges Cuvier was the first to identify woolly mammoth remains not as modern elephants transported to the Arctic, but as an entirely new species. He argued this species had gone extinct and no longer existed, a concept that was not widely accepted at the time. Following Cuvier's identification, German naturalist Johann Friedrich Blumenbach gave the woolly mammoth its scientific name, Elephas primigenius, in 1799, placing it in Elephas, the genus which today contains the Asian elephant (Elephas maximus). Originally the African elephants, as well as the American mastodon (described in 1792), were also placed in Elephas. Cuvier coined the synonym Elephas mammonteus for the woolly mammoth a few months later, but E. primigenius became the widely used name for the species, including by Cuvier. The genus name Mammuthus was coined by British anatomist Joshua Brookes in 1828, as part of a survey of his museum collection. Thomas Jefferson, who famously had a keen interest in paleontology, is partially responsible for transforming the word mammoth from a noun describing the prehistoric elephant to an adjective describing anything of surprisingly large size. The first recorded use of the word as an adjective was in a description of a large wheel of cheese (the "Cheshire Mammoth Cheese") given to Jefferson in 1802. Evolution The earliest known proboscideans, the clade that contains the elephants, arose about 55 million years ago on the landmass of Afro-Arabia. The closest relatives of the Proboscidea are the sirenians and the hyraxes. The family Elephantidae, which includes the living elephants and the mammoths, arose in Africa in the late Miocene. Among many now extinct clades, the mastodon is only a distant relative of the mammoths, and part of the separate Mammutidae family, which diverged 25 million years before the mammoths evolved. Following the publication of the woolly mammoth's mitochondrial genome sequence in 1997, it has since become widely accepted that mammoths and Asian elephants share a closer relationship to each other than either does to African elephants. Cladograms place the genus Mammuthus among other proboscideans on the basis of hyoid characteristics and genetics. It is possible to reconstruct the evolutionary history of the genus through morphological studies. Mammoth species can be identified from the number of enamel ridges/lamellae on their molars; the primitive species had few ridges, and the number increased gradually as new species evolved and replaced the former ones. At the same time, the crowns of the teeth became longer, and the skulls became higher from top to bottom and shorter from the back to the front over time to accommodate this. The earliest mammoths, assigned to the species Mammuthus subplanifrons, are known from southern and eastern Africa, with the earliest records dating to the Late Miocene, around 6.2–5.3 million years ago. By the Late Pliocene, mammoths had become confined to the northern portions of the African continent, with remains from this time assigned to Mammuthus africanavus. During the Late Pliocene, by 3.2 million years ago, mammoths dispersed into Eurasia via the Sinai Peninsula. The earliest mammoths in Eurasia are assigned to the species Mammuthus rumanus. The youngest remains of mammoths in Africa are from Aïn Boucherit, Algeria, dating to the Early Pleistocene, around 2.3–2 million years ago (with a possible later record from Aïn Hanech, Algeria, dating to 1.95–1.78 million years ago).
Mammuthus rumanus is thought to be the ancestor of Mammuthus meridionalis, which first appeared at the beginning of the Pleistocene, around 2.6 million years ago. Mammuthus meridionalis subsequently gave rise to Mammuthus trogontherii (the steppe mammoth) in eastern Asia around 1.7 million years ago. Around 1.5–1.3 million years ago, M. trogontherii crossed the Bering Land Bridge into North America, becoming ancestral to Mammuthus columbi (the Columbian mammoth). At the end of the Early Pleistocene, Mammuthus trogontherii migrated into Europe, replacing M. meridionalis around 1–0.8 million years ago. Mammuthus primigenius (the woolly mammoth) had evolved from M. trogontherii in Siberia by around 600,000–500,000 years ago, replacing M. trogontherii in Europe by around 200,000 years ago, and migrated into North America during the Late Pleistocene. A number of dwarf mammoth species, with small body sizes, evolved on islands as a result of insular dwarfism. These include Mammuthus lamarmorai on Sardinia (late Middle–Late Pleistocene), Mammuthus exilis on the Channel Islands of California (Late Pleistocene), and Mammuthus creticus on Crete (Early Pleistocene). Description Like living elephants, mammoths typically had large body sizes. The largest known species, like Mammuthus meridionalis and Mammuthus trogontherii (the steppe mammoth), were considerably larger than modern elephants, with mature adult males having an average height of approximately at the shoulder and weights of , while exceptionally large males may have reached at the shoulder and in weight. However, woolly mammoths were considerably smaller, only about as large as modern African bush elephants, with males around high at the shoulder and in weight on average, with the largest recorded individuals being around tall and in weight. The insular dwarf mammoth species were considerably smaller, with the smallest species, M. creticus, estimated to have a shoulder height of only around and a weight of about , making it one of the smallest elephantids known. The number of lamellae (ridge-like structures) on the molars, particularly on the third molars, substantially increased over the course of mammoth evolution. The earliest Eurasian species M. rumanus had around 8–10 lamellae on the third molars, while Late Pleistocene woolly mammoths had 20–28 lamellae on the third molars. These changes also corresponded with reduced enamel thickness and increasing tooth height (hypsodonty). These changes are thought to be adaptations to increasing abrasion resulting from the shift in the diet of mammoths from a browsing-based diet in M. rumanus towards a grazing diet in later species. Both sexes bore tusks. A first, small set appeared at about the age of six months, and these were replaced at about 18 months by the permanent set. Growth of the permanent set was at a rate of about per year. The tusks display a strong spiral twisting. Mammoth tusks are among the largest known among proboscideans, with some specimens over in length and likely in weight, and some historical reports suggest that tusks of Columbian mammoths could reach lengths of around , substantially surpassing the largest known modern elephant tusks. The heads of mammoths were prominently domed. The first several thoracic vertebrae of mammoths typically had long neural spines. The back was typically sloping, with the body being wider than that of African elephants. The tails of mammoths were relatively short compared to those of living elephants. While early mammoth species like M.
meridionalis were probably relatively hairless, similar to modern elephants, M. primigenius and likely M. trogontherii had a substantial coat of fur, among other physiological adaptations for living in cold environments. Genetic sequencing of M. trogontherii-like mammoths from Siberia, over 1 million years old, suggests that they had already developed many of the genetic changes found in woolly mammoths responsible for tolerance of cold conditions. Scientists discovered and studied the remains of a mammoth calf, and found that fat greatly influenced its form, and enabled it to store large amounts of nutrients necessary for survival in temperatures as low as . The fat also allowed the mammoths to increase their muscle mass, allowing them to fight against enemies and live longer. Woolly mammoths evolved a suite of adaptations for arctic life, including morphological traits such as small ears and tails to minimize heat loss, a thick layer of subcutaneous fat, and numerous sebaceous glands for insulation, as well as a large hump-like deposit of brown fat behind the neck that may have functioned as a heat source and fat reservoir during winter. Behaviour and palaeoecology Based on studies of their close relatives, the modern elephants, mammoths probably had a gestation period of 22 months, resulting in a single calf being born. Their social structure was probably the same as that of living elephants, with females and juveniles living in herds headed by a matriarch, whilst bulls lived solitary lives or formed loose groups after sexual maturity; analysis of testosterone levels in tusks indicates that, like modern elephants, adult males experienced periods of musth, a state of heightened aggression. The earliest mammoth species like M. subplanifrons and M. rumanus ranged from mixed feeders (both browsing and grazing) to browsers. Over the course of mammoth evolution in Eurasia, their diet shifted towards mixed feeding-grazing in M. trogontherii, culminating in the woolly mammoth, which was largely a grazer, with stomach contents of woolly mammoths suggesting that they largely fed on grass and forbs. M. columbi is thought to have been a mixed feeder. Like living elephants, mammoth adults may have been largely invulnerable to non-human predation, though evidence has been found for the hunting of mammoth calves by predators such as the scimitar-toothed cat (Homotherium). Relationship with early humans Evidence that humans interacted with mammoths extends back to around 1.8 million years ago, with a number of bones of Mammuthus meridionalis from the Dmanisi site in Georgia having marks suggested to be the result of butchery by archaic humans, likely as a result of scavenging. During the Last Glacial Period, modern humans hunted woolly mammoths, used their remains to create art and tools, and depicted them in works of art. Remains of Columbian mammoths at a number of sites suggest that they were hunted by Paleoindians, the first humans to inhabit the Americas. A possible bone engraving of a Columbian mammoth made by Paleoindians is known from Vero Beach, Florida. Extinction Following the end of the Last Glacial Maximum, the range of the woolly mammoth began to contract, disappearing from most of Europe by 14,000 years ago. By the Younger Dryas (around 12,900–11,700 years Before Present), woolly mammoths were confined to the northernmost regions of Siberia.
This contraction is suggested to have been caused by the warming-induced expansion of unfavourable wet tundra and forest environments at the expense of the preferred dry, open mammoth steppe, with the possible additional pressure of human hunting. The last woolly mammoths in mainland Siberia became extinct around 10,000 years ago, during the early Holocene. The final extinction of mainland woolly mammoths may have been driven by human hunting. Relict populations survived on Saint Paul Island in the Bering Strait until around 5,600 years ago, with their extinction likely due to the degradation of freshwater sources, and on Wrangel Island off the coast of Northeast Siberia until around 4,000 years ago. The last reliable records of the Columbian mammoth date to around 12,500 years ago. Columbian mammoths became extinct as part of the end-Pleistocene extinction event, in which most large mammals across the Americas became extinct approximately simultaneously at the end of the Late Pleistocene. Hunting of Columbian mammoths by Paleoindians may have been a contributory factor in their extinction. The timing of the extinction of the dwarf Sardinian mammoth Mammuthus lamarmorai is difficult to constrain precisely, though the youngest specimen likely dates to sometime around 57,000–29,000 years ago. The youngest records of the pygmy mammoth (Mammuthus exilis) date to around 13,000 years ago, coinciding with the reduction in area of the Californian Channel Islands as a result of rising sea level, the arrival of the earliest known humans in the Channel Islands, and climatic change resulting in the decline of the previously dominant conifer forest ecosystems and the expansion of scrub and grassland.
MIPS architecture
MIPS (Microprocessor without Interlocked Pipelined Stages) is a family of reduced instruction set computer (RISC) instruction set architectures (ISA) developed by MIPS Computer Systems, now MIPS Technologies, based in the United States. There are multiple versions of MIPS, including MIPS I, II, III, IV, and V, as well as five releases of MIPS32/64 (for 32- and 64-bit implementations, respectively). The early MIPS architectures were 32-bit; 64-bit versions were developed later. As of April 2017, the current version of MIPS is MIPS32/64 Release 6. MIPS32/64 primarily differs from MIPS I–V by defining the privileged kernel mode System Control Coprocessor in addition to the user mode architecture. The MIPS architecture has several optional extensions: MIPS-3D, a simple set of floating-point SIMD instructions dedicated to 3D computer graphics; MDMX (MaDMaX), a more extensive integer SIMD instruction set using 64-bit floating-point registers; MIPS16e, which adds compression to the instruction stream to reduce the memory that programs require; and MIPS MT, which adds multithreading capability. Computer architecture courses in universities and technical schools often study the MIPS architecture. The architecture greatly influenced later RISC architectures such as Alpha. In March 2021, MIPS announced that the development of the MIPS architecture had ended, as the company was making the transition to RISC-V. History The first version of the MIPS architecture was designed by MIPS Computer Systems for its R2000 microprocessor, the first MIPS implementation. Both MIPS and the R2000 were introduced together in 1985. When MIPS II was introduced, MIPS was renamed MIPS I to distinguish it from the new version. MIPS Computer Systems' R6000 microprocessor (1989) was the first MIPS II implementation. Designed for servers, the R6000 was fabricated and sold by Bipolar Integrated Technology, but was a commercial failure. During the mid-1990s, many new 32-bit MIPS processors for embedded systems were MIPS II implementations, because the introduction of the 64-bit MIPS III architecture in 1991 left MIPS II as the newest 32-bit MIPS architecture until MIPS32 was introduced in 1999. MIPS Computer Systems' R4000 microprocessor (1991) was the first MIPS III implementation. It was designed for use in personal, workstation, and server computers. MIPS Computer Systems aggressively promoted the MIPS architecture and R4000, establishing the Advanced Computing Environment (ACE) consortium to advance its Advanced RISC Computing (ARC) standard, which aimed to establish MIPS as the dominant personal computing platform. ARC found little success in personal computers, but the R4000 (and the R4400 derivative) were widely used in workstation and server computers, especially by its largest user, Silicon Graphics. Other uses of the R4000 included high-end embedded systems and supercomputers. MIPS III was eventually implemented by a number of embedded microprocessors. Quantum Effect Design's R4600 (1993) and its derivatives were widely used in high-end embedded systems and low-end workstations and servers. MIPS Technologies' R4200 (1994) was designed for embedded systems, laptops, and personal computers. A derivative, the R4300i, fabricated by NEC Electronics, was used in the Nintendo 64 game console. The Nintendo 64 and the PlayStation were among the highest volume users of MIPS architecture processors in the mid-1990s. The first MIPS IV implementation was the MIPS Technologies R8000 microprocessor chipset (1994).
The design of the R8000 began at Silicon Graphics, Inc., and it was only used in high-end workstations and servers for scientific and technical applications where high performance on large floating-point workloads was important. Later implementations were the MIPS Technologies R10000 (1996) and the Quantum Effect Devices R5000 (1996) and RM7000 (1998). The R10000, fabricated and sold by NEC Electronics and Toshiba, and its derivatives were used by NEC, Pyramid Technology, Silicon Graphics, and Tandem Computers (among others) in workstations, servers, and supercomputers. The R5000 and RM7000 found use in high-end embedded systems, personal computers, and low-end workstations and servers. A derivative of the R5000 from Toshiba, the R5900, was used in Sony Computer Entertainment's Emotion Engine, which powered its PlayStation 2 game console. Announced on October 21, 1996, at the Microprocessor Forum 1996 alongside the MIPS Digital Media Extensions (MDMX) extension, MIPS V was designed to improve the performance of 3D graphics transformations. In the mid-1990s, a major use of non-embedded MIPS microprocessors was graphics workstations from Silicon Graphics. MIPS V was completed by the integer-only MDMX extension to provide a complete system for improving the performance of 3D graphics applications. MIPS V implementations were never introduced. On May 12, 1997, Silicon Graphics announced the H1 ("Beast") and H2 ("Capitan") microprocessors. The former was to have been the first MIPS V implementation, and was due to be introduced in the first half of 1999. The H1 and H2 projects were later combined and eventually canceled in 1998. While there have not been any MIPS V implementations, MIPS64 Release 1 (1999) was based on MIPS V and retains all of its features as an optional Coprocessor 1 (FPU) feature called Paired-Single. When MIPS Technologies was spun out of Silicon Graphics in 1998, it refocused on the embedded market. Through MIPS V, each successive version was a strict superset of the previous version, but this property was found to be a problem, and the architecture definition was changed to define a 32-bit and a 64-bit architecture: MIPS32 and MIPS64. Both were introduced in 1999. MIPS32 is based on MIPS II with some additional features from MIPS III, MIPS IV, and MIPS V; MIPS64 is based on MIPS V. NEC, Toshiba and SiByte (later acquired by Broadcom) each obtained licenses for MIPS64 as soon as it was announced. Philips, LSI Logic, IDT, Raza Microelectronics, Inc., Cavium, Loongson Technology and Ingenic Semiconductor have since joined them. MIPS32/MIPS64 Release 5 was announced on December 6, 2012. According to the Product Marketing Director at MIPS, Release 4 was skipped because the number four is perceived as unlucky in many Asian cultures. In December 2018, Wave Computing, the new owner of the MIPS architecture, announced that the MIPS ISA would be open-sourced in a program dubbed the MIPS Open initiative. The program was intended to open up access to the most recent versions of both the 32-bit and 64-bit designs, making them available without any licensing or royalty fees, as well as granting participants licenses to existing MIPS patents. In March 2019, one version of the architecture was made available under a royalty-free license, but later that year the program was shut down again. In March 2021, Wave Computing announced that the development of the MIPS architecture had ceased.
The company has joined the RISC-V foundation, and future processor designs will be based on the RISC-V architecture. In spite of this, some licensees such as Loongson continue with new extensions of MIPS-compatible ISAs on their own. In January 2024, Loongson won a case over rights to use the MIPS architecture. Design MIPS is a modular architecture supporting up to four coprocessors (CP0/1/2/3). In MIPS terminology, CP0 is the System Control Coprocessor (an essential part of the processor that is implementation-defined in MIPS I–V), CP1 is an optional floating-point unit (FPU) and CP2/3 are optional implementation-defined coprocessors (MIPS III removed CP3 and reused its opcodes for other purposes). For example, in the PlayStation video game console, CP2 is the Geometry Transformation Engine (GTE), which accelerates the processing of geometry in 3D computer graphics. Versions MIPS I MIPS is a load/store architecture (also known as a register-register architecture); except for the load/store instructions used to access memory, all instructions operate on the registers. Registers MIPS I has thirty-two 32-bit general-purpose registers (GPR). Register $0 is hardwired to zero, and writes to it are discarded. Register $31 is the link register. For integer multiplication and division instructions, which run asynchronously from other instructions, a pair of 32-bit registers, HI and LO, are provided. There is a small set of instructions for copying data between the general-purpose registers and the HI/LO registers. The program counter has 32 bits. The two low-order bits always contain zero since MIPS I instructions are 32 bits long and are aligned to their natural word boundaries. Instruction formats Instructions are divided into three types: R (register), I (immediate), and J (jump). Every instruction starts with a 6-bit opcode. In addition to the opcode, R-type instructions specify three registers, a shift amount field, and a function field; I-type instructions specify two registers and a 16-bit immediate value; J-type instructions follow the opcode with a 26-bit jump target. The three formats used for the core instruction set are: R-type: opcode (6 bits), rs (5), rt (5), rd (5), shift amount (5), function (6); I-type: opcode (6 bits), rs (5), rt (5), immediate (16); J-type: opcode (6 bits), jump target (26). CPU instructions MIPS I has instructions that load and store 8-bit bytes, 16-bit halfwords, and 32-bit words. Only one addressing mode is supported: base + displacement. Since MIPS I is a 32-bit architecture, loading quantities smaller than 32 bits requires the datum to be either sign-extended or zero-extended to 32 bits. The load instructions suffixed by "unsigned" perform zero extension; otherwise sign extension is performed. Load instructions source the base from the contents of a GPR (rs) and write the result to another GPR (rt). Store instructions source the base from the contents of a GPR (rs) and the store data from another GPR (rt). All load and store instructions compute the memory address by summing the base with the sign-extended 16-bit immediate. MIPS I requires all memory accesses to be aligned to their natural word boundaries; otherwise an exception is signaled. To support efficient unaligned memory accesses, there are load/store word instructions suffixed by "left" or "right". All load instructions are followed by a load delay slot. The instruction in the load delay slot cannot use the data loaded by the load instruction. The load delay slot can be filled with an instruction that is not dependent on the load; a nop is substituted if such an instruction cannot be found.
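A small Python sketch of the load/store addressing arithmetic described above: decoding an I-type word into its fields and forming the effective address by adding the sign-extended 16-bit immediate to the base register. The instruction word here (an lw with a −8 displacement off $sp) and the register contents are made up for illustration:

    def sign_extend16(value: int) -> int:
        """Sign-extend a 16-bit field (models 32-bit MIPS behavior)."""
        return value - 0x10000 if value & 0x8000 else value

    def decode_itype(word: int):
        """Split a 32-bit instruction word into I-type fields: op, rs, rt, imm."""
        op = (word >> 26) & 0x3F
        rs = (word >> 21) & 0x1F
        rt = (word >> 16) & 0x1F
        imm = word & 0xFFFF
        return op, rs, rt, imm

    # lw $t0, -8($sp): opcode 0x23 (LW), rs = 29 ($sp), rt = 8 ($t0), imm = -8
    word = (0x23 << 26) | (29 << 21) | (8 << 16) | 0xFFF8
    op, rs, rt, imm = decode_itype(word)

    base = 0x7FFF_EFF0                            # assumed contents of $sp
    address = (base + sign_extend16(imm)) & 0xFFFF_FFFF
    print(hex(address))                           # 0x7fffefe8 = base - 8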
MIPS I has instructions to perform addition and subtraction. These instructions source their operands from two GPRs (rs and rt), and write the result to a third GPR (rd). Alternatively, addition can source one of the operands from a 16-bit immediate (which is sign-extended to 32 bits). The instructions for addition and subtraction have two variants: by default, an exception is signaled if the result overflows; instructions with the "unsigned" suffix do not signal an exception. The overflow check interprets the result as a 32-bit two's complement integer. MIPS I has instructions to perform bitwise logical AND, OR, XOR, and NOR. These instructions source their operands from two GPRs and write the result to a third GPR. The AND, OR, and XOR instructions can alternatively source one of the operands from a 16-bit immediate (which is zero-extended to 32 bits). The Set on relation instructions write one or zero to the destination register if the specified relation is true or false. These instructions source their operands from two GPRs or one GPR and a 16-bit immediate (which is sign-extended to 32 bits), and write the result to a third GPR. By default, the operands are interpreted as signed integers. The variants of these instructions that are suffixed with "unsigned" interpret the operands as unsigned integers (even those that source an operand from the sign-extended 16-bit immediate). The Load Upper Immediate instruction copies the 16-bit immediate into the high-order 16 bits of a GPR. It is used in conjunction with the Or Immediate instruction to load a 32-bit immediate into a register. MIPS I has instructions to perform left and right logical shifts and right arithmetic shifts. The operand is obtained from a GPR (rt), and the result is written to another GPR (rd). The shift distance is obtained from either a GPR (rs) or a 5-bit "shift amount" (the "sa" field). MIPS I has instructions for signed and unsigned integer multiplication and division. These instructions source their operands from two GPRs and write their results to a pair of 32-bit registers called HI and LO, since they may execute separately from (and concurrently with) the other CPU instructions. For multiplication, the high- and low-order halves of the 64-bit product are written to HI and LO (respectively). For division, the quotient is written to LO and the remainder to HI. To access the results, a pair of instructions (Move from HI and Move from LO) is provided to copy the contents of HI or LO to a GPR. These instructions are interlocked: reads of HI and LO do not proceed past an unfinished arithmetic instruction that will write to HI and LO. Another pair of instructions (Move to HI or Move to LO) copies the contents of a GPR to HI and LO. These instructions are used to restore HI and LO to their original state after exception handling. Instructions that read HI or LO must be separated by two instructions that do not write to HI or LO. All MIPS I control flow instructions are followed by a branch delay slot. Unless the branch delay slot is filled by an instruction performing useful work, a nop is substituted. MIPS I branch instructions compare the contents of a GPR (rs) against zero or another GPR (rt) as signed integers and branch if the specified condition is true. Control is transferred to the address computed by shifting the 16-bit offset left by two bits, sign-extending the 18-bit result, and adding the 32-bit sign-extended result to the address of the instruction in the branch delay slot (that is, the branch instruction's address plus 4).
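The branch-target arithmetic just described, as a runnable Python sketch (the branch address and offset field are illustrative):

    def sign_extend18(value: int) -> int:
        """Sign-extend an 18-bit quantity (16-bit offset field shifted left 2)."""
        return value - 0x40000 if value & 0x20000 else value

    def branch_target(pc: int, offset16: int) -> int:
        """Effective target of a MIPS I conditional branch.

        offset16 is the raw 16-bit offset field; the base is the address of
        the instruction in the branch delay slot (pc + 4)."""
        return (pc + 4 + sign_extend18((offset16 << 2) & 0x3FFFF)) & 0xFFFF_FFFF

    # A branch at 0x0040_0100 with offset field 0xFFFD (-3 instructions)
    print(hex(branch_target(0x0040_0100, 0xFFFD)))  # 0x4000f8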
Absolute jumps ("Jump" and "Jump and Link") compute the address to which control is transferred by shifting the 26-bit instr_index left by two bits and concatenating the 28-bit result with the four high-order bits of the address of the instruction in the branch delay slot. Register-indirect jumps transfer control to the instruction at the address sourced from a GPR (rs). The address sourced from the GPR must be word-aligned, else an exception is signaled after the instruction in the branch delay slot is executed. Branch and jump instructions that link (except for "Jump and Link Register") save the return address to GPR 31. The "Jump and Link Register" instruction permits the return address to be saved to any writable GPR. MIPS I has two instructions for software to signal an exception: System Call and Breakpoint. System Call is used by user mode software to make kernel calls; and Breakpoint is used to transfer control to a debugger via the kernel's exception handler. Both instructions have a 20-bit Code field that can contain operating environment-specific information for the exception handler. MIPS has 32 floating-point registers. Two registers are paired for double precision numbers. Odd numbered registers cannot be used for arithmetic or branching, just as part of a double precision register pair, resulting in 16 usable registers for most instructions (moves/copies and loads/stores were not affected). Single precision is denoted by the .s suffix, while double precision is denoted by the .d suffix. MIPS II MIPS II removed the load delay slot and added several sets of instructions. For shared-memory multiprocessing, the Synchronize Shared Memory, Load Linked Word, and Store Conditional Word instructions were added. A set of Trap-on-Condition instructions were added. These instructions caused an exception if the evaluated condition is true. All existing branch instructions were given branch-likely versions that executed the instruction in the branch delay slot only if the branch is taken. These instructions improve performance in certain cases by allowing useful instructions to fill the branch delay slot. Doubleword load and store instructions for COP1–3 were added. Consistent with other memory access instructions, these loads and stores required the doubleword to be naturally aligned. The instruction set for the floating point coprocessor also had several instructions added to it. An IEEE 754-compliant floating-point square root instruction was added. It supported both single- and double-precision operands. A set of instructions that converted single- and double-precision floating-point numbers to 32-bit words were added. These complemented the existing conversion instructions by allowing the IEEE rounding mode to be specified by the instruction instead of the Floating Point Control and Status Register. MIPS III MIPS III is a backwards-compatible extension of MIPS II that added support for 64-bit memory addressing and integer operations. The 64-bit data type is called a doubleword, and MIPS III extended the general-purpose registers, HI/LO registers, and program counter to 64 bits to support it. New instructions were added to load and store doublewords, to perform integer addition, subtraction, multiplication, division, and shift operations on them, and to move doubleword between the GPRs and HI/LO registers. For shared-memory multiprocessing, the Load Linked Double Word, and Store Conditional Double Word instructions were added. 
Existing instructions originally defined to operate on 32-bit words were redefined, where necessary, to sign-extend the 32-bit results to permit words and doublewords to be treated identically by most instructions. Among those instructions redefined was Load Word, which in MIPS III sign-extends words to 64 bits. To complement Load Word, a version that zero-extends was added.
The R instruction format's inability to specify the full shift distance for 64-bit shifts (its 5-bit shift amount field is too narrow to specify the shift distance for doublewords) required MIPS III to provide three 64-bit versions of each MIPS I shift instruction. The first version is a 64-bit version of the original shift instructions, used to specify constant shift distances of 0–31 bits. The second version is similar to the first, but adds 32 to the shift amount field's value so that constant shift distances of 32–63 bits can be specified. The third version obtains the shift distance from the six low-order bits of a GPR.
MIPS III added a supervisor privilege level in between the existing kernel and user privilege levels. This feature only affected the implementation-defined System Control Processor (Coprocessor 0).
MIPS III removed the Coprocessor 3 (CP3) support instructions, and reused its opcodes for the new doubleword instructions. The remaining coprocessors gained instructions to move doublewords between coprocessor registers and the GPRs. The floating general registers (FGRs) were extended to 64 bits and the requirement for instructions to use even-numbered registers only was removed. This is incompatible with earlier versions of the architecture; a bit in the floating-point control/status register is used to operate the MIPS III floating-point unit (FPU) in a MIPS I- and II-compatible mode. The floating-point control registers were not extended for compatibility. The only new floating-point instructions added were those to copy doublewords between the CPU and FPU, and to convert single- and double-precision floating-point numbers into doubleword integers and vice versa.
MIPS IV
MIPS IV is the fourth version of the architecture. It is a superset of MIPS III and is compatible with all existing versions of MIPS. MIPS IV was designed mainly to improve floating-point (FP) performance. To improve access to operands, an indexed addressing mode (base + index, both sourced from GPRs) for FP loads and stores was added, as were prefetch instructions for performing memory prefetching and specifying cache hints (these supported both the base + offset and base + index addressing modes).
MIPS IV added several features to improve instruction-level parallelism. To alleviate the bottleneck caused by a single condition bit, seven condition code bits were added to the floating-point control and status register, bringing the total to eight. FP comparison and branch instructions were redefined so they could specify which condition bit was written or read (respectively), and the required delay slot between an FP comparison that writes a condition bit and an FP branch that reads it was removed. Support for partial predication was added in the form of conditional move instructions for both GPRs and FPRs, and an implementation could choose between having precise or imprecise exceptions for IEEE 754 traps.
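As an illustration of partial predication, a branchless maximum of two values can be written with the MIPS IV conditional moves (the registers are illustrative; move is the standard assembler pseudo-instruction):

    slt   $t0, $a0, $a1   # t0 = 1 if a0 < a1, else 0
    move  $v0, $a0        # result = a0
    movn  $v0, $a1, $t0   # Move on Not zero: if t0 != 0, result = a1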
MIPS IV added several new FP arithmetic instructions for both single- and double-precision FPNs: fused multiply-add or subtract, reciprocal, and reciprocal square-root. The FP fused multiply-add or subtract instructions perform either one or two roundings (it is implementation-defined), to exceed or meet IEEE 754 accuracy requirements (respectively). The FP reciprocal and reciprocal square-root instructions do not comply with IEEE 754 accuracy requirements, and produce results that differ from the required accuracy by one or two units of last place (it is implementation-defined). These instructions serve applications where instruction latency is more important than accuracy.
MIPS V
MIPS V added a new data type, the Paired Single (PS), which consists of two single-precision (32-bit) floating-point numbers stored in the existing 64-bit floating-point registers. Variants of existing floating-point instructions for arithmetic, compare, and conditional move were added to operate on this data type in a SIMD fashion. New instructions were added for loading, rearranging, and converting PS data. It was the first instruction set to exploit floating-point SIMD with existing resources.
MIPS32/MIPS64
The first release of MIPS32, based on MIPS II, added conditional moves, prefetch instructions, and other features from the R4000 and R5000 families of 64-bit processors. The first release of MIPS64 adds a MIPS32 mode to run 32-bit code. The MUL and MADD (multiply-add) instructions, previously available in some implementations, were added to the MIPS32 and MIPS64 specifications, as were cache control instructions. For cache control, both the SYNC and SYNCI instructions were defined.
MIPS32/MIPS64 Release 6 in 2014 added the following:
- a new family of branches with no delay slot (see the sketch after this list): unconditional branches (BC) and branch-and-link (BALC) with a 26-bit offset, conditional branches on zero/non-zero with a 21-bit offset, a full set of signed and unsigned conditional branches that compare two registers (e.g. BGTUC) or a register against zero (e.g. BGTZC), and a full set of branch-and-link instructions that compare a register against zero (e.g. BGTZALC);
- indexed jump instructions with no delay slot, designed to support large absolute addresses;
- instructions to load a 16-bit immediate at bit position 16, 32, or 48, making it easy to generate large constants;
- PC-relative load instructions, as well as address generation with large (PC-relative) offsets;
- bit-reversal and byte-alignment instructions (previously available only with the DSP extension);
- multiply and divide instructions redefined so that they use a single register for their result;
- instructions generating truth values now generate all zeroes or all ones instead of just clearing or setting bit 0, and instructions using a truth value now interpret only all-zeroes as false instead of just looking at bit 0.
Release 6 removed infrequently used instructions:
- some conditional moves;
- branch-likely instructions (deprecated in previous releases);
- integer overflow trapping instructions with a 16-bit immediate;
- integer accumulator instructions (together with the HI/LO registers, moved to the DSP Application-Specific Extension);
- unaligned load instructions (LWL and LWR), requiring that most ordinary loads and stores support misaligned access, possibly via trapping, and with the addition of a new instruction (BALIGN).
Release 6 also reorganized the instruction encoding, freeing space for future expansions.
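The practical effect of the delay-slot-free branches can be seen in a short sketch (the label and register are illustrative):

    # before Release 6: the delay slot must be filled, here with a NOP
    bgtz  $a0, positive
    nop                   # branch delay slot

    # Release 6 compact branch: no delay slot to fill
    bgtzc $a0, positive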
microMIPS
The microMIPS32/64 architectures are supersets of the MIPS32 and MIPS64 architectures (respectively) designed to replace the MIPS16e ASE. A disadvantage of MIPS16e is that it requires a mode switch before any of its 16-bit instructions can be processed. microMIPS adds versions of the most frequently used 32-bit instructions that are encoded as 16-bit instructions. This allows programs to intermix 16- and 32-bit instructions without having to switch modes. microMIPS was introduced alongside MIPS32/64 Release 3, and each subsequent release of MIPS32/64 has a corresponding microMIPS32/64 version. A processor may implement microMIPS32/64 or both microMIPS32/64 and its corresponding MIPS32/64 subset. Starting with MIPS32/64 Release 6, support for MIPS16e ended, and microMIPS is the only form of code compression in MIPS.
Application-specific extensions
The base MIPS32 and MIPS64 architectures can be supplemented with a number of optional architectural extensions, which are collectively referred to as application-specific extensions (ASEs). These ASEs provide features that improve the efficiency and performance of certain workloads, such as digital signal processing.
MIPS MCU
Enhancements for microcontroller applications. The MCU ASE has been developed to extend the interrupt controller support, reduce interrupt latency, and enhance the I/O peripheral control functions typically required in microcontroller system designs:
- separate priority and vector generation;
- support for up to 256 interrupts in EIC (External Interrupt Controller) mode and eight hardware interrupt pins;
- a 16-bit vector offset address;
- pre-fetching of the interrupt exception vector;
- Automated Interrupt Prologue – adds hardware to save and update system status before the interrupt handling routine;
- Automated Interrupt Epilogue – restores the system state previously stored in the stack for returning from the interrupt;
- Interrupt Chaining – supports the servicing of pending interrupts without the need to exit the initial interrupt routine, saving the cycles required to store and restore multiple active interrupts;
- speculative pre-fetching of the interrupt vector address, reducing the number of interrupt service cycles by overlapping memory accesses with pipeline flushes and exception prioritization;
- atomic bit set/clear instructions, which enable bits within an I/O register that are normally used to monitor or control external peripheral functions to be modified without interruption, ensuring the action is performed securely.
MIPS16
MIPS16 is an Application-Specific Extension for MIPS I through V designed by LSI Logic and MIPS Technologies, announced on October 21, 1996, alongside its first implementation, the LSI Logic TinyRISC processor. MIPS16 was subsequently licensed by NEC Electronics, Philips Semiconductors, and Toshiba (among others), and implemented as an extension to the MIPS I, II, and III architectures. MIPS16 decreases the size of an application by up to 40% by using 16-bit instructions instead of 32-bit instructions; it also improves power efficiency and the instruction cache hit rate, and is equivalent in performance to its base architecture. It is supported by hardware and software development tools from MIPS Technologies and other providers. MIPS16e is an improved version of MIPS16 first supported by MIPS32 and MIPS64 Release 1. MIPS16e2 is an improved version of MIPS16 that is supported by MIPS32 and MIPS64 (up to Release 5). Release 6 replaced it with microMIPS.
MIPS Digital Signal Processing (DSP)
The DSP ASE is an optional extension to the MIPS32/MIPS64 Release 2 and newer instruction sets which can be used to accelerate a large range of "media" computations, particularly audio and video.
The DSP module comprises a set of instructions and state in the integer pipeline and requires minimal additional logic to implement in MIPS processor cores. Revision 2 of the ASE was introduced in the second half of 2006. This revision adds extra instructions to the original ASE, but is otherwise backwards-compatible with it. Unlike the bulk of the MIPS architecture, it is a fairly irregular set of operations, many chosen for a particular relevance to some key algorithm. Its main novel features (versus the original MIPS32) are:
- saturating arithmetic (when a calculation overflows, deliver the representable number closest to the non-overflowed answer);
- fixed-point arithmetic on signed 32- and 16-bit fixed-point fractions with a range of −1 to +1 (these are widely called "Q31" and "Q15");
- additions to the existing integer multiplication and multiply-accumulate instructions, which deliver results into a double-size accumulator (called "hi/lo" and 64 bits on MIPS32 CPUs); the DSP ASE adds three more accumulators, and some different flavours of multiply-accumulate;
- SIMD instructions operating on 4 × unsigned bytes or 2 × 16-bit values packed into a 32-bit register (the 64-bit variant of the DSP ASE supports larger vectors, too); the SIMD operations are basic arithmetic, shifts, and some multiply-accumulate type operations.
MIPS SIMD architecture (MSA)
Instruction set extensions designed to accelerate multimedia:
- 32 vector registers of 16 × 8-bit, 8 × 16-bit, 4 × 32-bit, and 2 × 64-bit vector elements
- efficient vector parallel arithmetic operations on integer, fixed-point, and floating-point data
- operations on absolute value operands
- rounding and saturation options available
- full precision multiply and multiply-add
- conversions between integer, floating-point, and fixed-point data
- complete set of vector-level compare and branch instructions with no condition flag
- vector (1D) and array (2D) shuffle operations
- typed load and store instructions for endian-independent operation
- compliance with the IEEE Standard for Floating-Point Arithmetic 754-2008
- element-precise floating-point exception signaling
- pre-defined scalable extensions for chips with more gates/transistors
MSA accelerates compute-intensive applications in conjunction with generic compiler support. It is a software-programmable solution for consumer electronics applications or functions not covered by dedicated hardware, such as emerging data mining, feature extraction, image and video processing, and human-computer interaction applications, as well as high-performance scientific computing.
MIPS virtualization
Hardware-supported virtualization technology.
MIPS multi-threading
Each multi-threaded MIPS core can support up to two VPEs (Virtual Processing Elements) which share a single pipeline as well as other hardware resources. However, since each VPE includes a complete copy of the processor state as seen by the software system, each VPE appears as a complete standalone processor to an SMP Linux operating system. For more fine-grained thread processing applications, each VPE is capable of supporting up to nine TCs allocated across two VPEs. The TCs share a common execution unit but each has its own program counter and core register files so that each can handle a thread from the software. The MIPS MT architecture also allows the allocation of processor cycles to threads, and sets the relative thread priorities with an optional Quality of Service (QoS) manager block. This enables two prioritization mechanisms that determine the flow of information across the bus.
The first mechanism allows the user to prioritize one thread over another. The second mechanism is used to allocate a specified ratio of the cycles to specific threads over time. The combined use of both mechanisms allows effective allocation of bandwidth to the set of threads, and better control of latencies. In real-time systems, system-level determinism is critical, and the QoS block facilitates improvement of the predictability of a system. Hardware designers of advanced systems may replace the standard QoS block provided by MIPS Technologies with one that is specifically tuned for their application.
SmartMIPS
SmartMIPS is an Application-Specific Extension (ASE) designed by Gemplus International and MIPS Technologies to improve performance and reduce memory consumption for smart card software. It is supported by MIPS32 only, since smart cards do not require the capabilities of MIPS64 processors. Few smart cards use SmartMIPS.
MIPS Digital Media eXtension (MDMX)
Multimedia application accelerations that were common in the 1990s on RISC and CISC systems.
MIPS-3D
Additional instructions for improving the performance of 3D graphics applications.
Calling conventions
MIPS has had several calling conventions, especially on the 32-bit platform.
The O32 ABI is the most commonly used ABI, owing to its status as the original System V ABI for MIPS. It is strictly stack-based, with only four registers ($a0–$a3) available to pass arguments. Space on the stack is reserved in case the callee needs to save its arguments, but the registers are not stored there by the caller. The return value is stored in register $v0; a second return value may be stored in $v1. The ABI took shape in 1990 and was last updated in 1994. This perceived slowness, along with an antique floating-point model with only 16 registers, has encouraged the proliferation of many other calling conventions. It is only defined for 32-bit MIPS, but GCC has created a 64-bit variation called O64.
For 64-bit, the N64 ABI by Silicon Graphics is most commonly used. The most important improvement is that eight registers are available for argument passing; it also increases the number of floating-point registers to 32. There is also an ILP32 version called N32, which uses 32-bit pointers for smaller code, analogous to the x32 ABI. Both run under the 64-bit mode of the CPU. The N32 and N64 ABIs pass the first eight arguments to a function in the registers $a0–$a7; subsequent arguments are passed on the stack. The return value (or a pointer to it) is stored in register $v0; a second return value may be stored in $v1. In both the N32 and N64 ABIs all registers are considered to be 64 bits wide.
A few attempts have been made to replace O32 with a 32-bit ABI that resembles N32 more. A 1995 conference came up with MIPS EABI, for which the 32-bit version was quite similar. EABI inspired MIPS Technologies to propose a more radical "NUBI" ABI that would additionally reuse argument registers for the return value. MIPS EABI is supported by GCC but not LLVM, and neither supports NUBI.
For all of O32 and N32/N64, the return address is stored in the $ra register. This is set automatically by the JAL (jump and link) or JALR (jump and link register) instructions. The function prologue of a (non-leaf) MIPS subroutine pushes the return address (in $ra) to the stack. On both O32 and N32/N64 the stack grows downwards, but the N32/N64 ABIs require 64-bit alignment for all stack entries.
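A minimal sketch of an O32 non-leaf prologue and epilogue (the frame size, offsets, and the helper routine are illustrative; a real compiler also reserves the 16-byte argument save area that O32 callees expect):

    func:
      addiu $sp, $sp, -24    # allocate a stack frame
      sw    $ra, 20($sp)     # save the return address ($ra is clobbered by jal)
      jal   helper           # call another routine
      nop                    # branch delay slot
      lw    $ra, 20($sp)     # restore the return address
      jr    $ra              # return to the caller
      addiu $sp, $sp, 24     # deallocate the frame in the jump's delay slot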
The frame pointer ($fp) is optional and in practice rarely used, except when the stack allocation in a function is determined at runtime, for example, by calling alloca(). For N32 and N64, the return address is typically stored 8 bytes before the stack pointer, although this may be optional. For the N32 and N64 ABIs, a function must preserve the $s0–$s7 registers, the global pointer ($gp), the stack pointer ($sp), and the frame pointer ($fp). The O32 ABI is the same, except that the calling function is required to save the $gp register instead of the called function.
For multi-threaded code, the thread local storage pointer is typically stored in special hardware register $29 and is accessed by using the rdhwr (read hardware register) instruction. At least one vendor is known to store this information in the $k0 register, which is normally reserved for kernel use, but this is not standard.
The $k0 and $k1 registers ($26–$27) are reserved for kernel use and should not be used by applications, since these registers can be changed at any time by the kernel due to interrupts, context switches, or other events.
Registers that are preserved across a call are registers that (by convention) will not be changed by a system call or procedure (function) call. For example, the $s-registers must be saved to the stack by a procedure that needs to use them, and $sp and $fp are adjusted by constants and then restored after the procedure is done with them (and the memory they point to). By contrast, $ra is changed automatically by any normal function call (ones that use jal), and the $t-registers must be saved by the program before any procedure call (if the program needs the values inside them after the call).
The userspace calling convention of position-independent code on Linux additionally requires that when a function is called, the $t9 register must contain the address of that function. This convention dates back to the System V ABI supplement for MIPS.
Uses
MIPS processors are used in embedded systems such as residential gateways and routers. Originally, MIPS was designed for general-purpose computing. During the 1980s and 1990s, MIPS processors for personal, workstation, and server computers were used by many companies, such as Digital Equipment Corporation, MIPS Computer Systems, NEC, Pyramid Technology, SiCortex, Siemens Nixdorf, Silicon Graphics, and Tandem Computers. Historically, video game consoles such as the Nintendo 64, Sony PlayStation, PlayStation 2, and PlayStation Portable used MIPS processors. MIPS processors were also popular in supercomputers during the 1990s, but all such systems have since dropped off the TOP500 list. These uses were complemented by embedded applications at first, but during the 1990s, MIPS became a major presence in the embedded processor market, and by the 2000s, most MIPS processors were for these applications. In the mid- to late 1990s, it was estimated that one in three RISC microprocessors produced was a MIPS processor. By the late 2010s, MIPS machines were still commonly used in embedded markets, including automotive, wireless routers, LTE modems (mainly via MediaTek), and microcontrollers (for example, the Microchip Technology PIC32M). They have mostly faded out of the personal, server, and application space.
Simulators
Open Virtual Platforms (OVP) includes OVPsim, a simulator that is freely available for non-commercial use, a library of models of processors, peripherals, and platforms, and APIs which enable users to develop their own models.
The models in the library are open source, written in C, and include the MIPS 4K, 24K, 34K, 74K, 1004K, 1074K, M14K, microAptiv, interAptiv, and proAptiv 32-bit cores and the MIPS 64-bit 5K range of cores. These models are created and maintained by Imperas and, in partnership with MIPS Technologies, have been tested and assigned the MIPS-Verified mark. Sample MIPS-based platforms include both bare-metal environments and platforms for booting unmodified Linux binary images. These platform emulators are available as source or binaries and are fast, free for non-commercial use, and easy to use. OVPsim is developed and maintained by Imperas; it is very fast (hundreds of millions of instructions per second) and built to handle multicore homogeneous and heterogeneous architectures and systems.
There is a freely available MIPS32 simulator (earlier versions simulated only the R2000/R3000) called SPIM for use in education. EduMIPS64 is a GPL graphical cross-platform MIPS64 CPU simulator, written in Java/Swing. It supports a wide subset of the MIPS64 ISA and allows the user to graphically see what happens in the pipeline when an assembly program is run by the CPU. MARS is another GUI-based MIPS emulator designed for use in education, specifically for use with Patterson and Hennessy's Computer Organization and Design. WebMIPS is a browser-based MIPS simulator with a visual representation of a generic, pipelined processor. This simulator is quite useful for register tracking during step-by-step execution. QtMips provides a simple 5-stage pipeline visualization as well as a cache principle visualization for basic computer architecture courses. It is available both as a web application and as a downloadable program for Windows, Linux, and macOS.
More advanced free emulators are available from the GXemul (formerly known as the mips64emul project) and QEMU projects. These emulate the various MIPS III and IV microprocessors in addition to entire computer systems which use them. Commercial simulators are available especially for the embedded use of MIPS processors, for example Wind River Simics (MIPS 4Kc and 5Kc, PMC RM9000, QED RM7000, Broadcom/Netlogic ec4400, Cavium Octeon I), Imperas (all MIPS32 and MIPS64 cores), VaST Systems (R3000, R4000), and CoWare (the MIPS4KE, MIPS24K, MIPS25Kf, and MIPS34K).
The Creator simulator is portable and allows the user to learn various assembly languages of different processors (Creator has examples with an implementation of MIPS32 and RISC-V instructions). WepSIM is a browser-based simulator in which a subset of MIPS instructions is micro-programmed. This simulator is very useful for learning how a CPU works (microprogramming, MIPS routines, interrupts, system calls, etc.).
Manatee
Manatees (family Trichechidae, genus Trichechus) are large, fully aquatic, mostly herbivorous marine mammals sometimes known as sea cows. There are three accepted living species of Trichechidae, representing three of the four living species in the order Sirenia: the Amazonian manatee (Trichechus inunguis), the West Indian manatee (Trichechus manatus), and the West African manatee (Trichechus senegalensis). They measure up to about 4.0 m (13 ft) long, weigh as much as 590 kg (1,300 lb), and have paddle-like tails. Manatees are herbivores and eat over 60 different freshwater and saltwater plants.
Manatees inhabit the shallow, marshy coastal areas and rivers of the Caribbean Sea, the Gulf of Mexico, the Amazon basin, and West Africa.
The main causes of death for manatees are human-related issues, such as habitat destruction and human objects. Their slow-moving, curious nature has led to violent collisions with propeller-driven boats and ships; some manatees have been found with over 50 scars on them from propeller blades. Natural causes of death include adverse temperatures, predation by crocodiles on young, and disease.
Etymology
The etymology of the name is unclear, with connections having been made to the Latin manus ("hand") and to the term manaty ("breast") from the Carib language of native South Americans. The Carib term may refer to the mammary glands of the manatee, which are located on their chests, under their armpits. The term sea cow is a reference to the species' slow, peaceful, herbivorous nature, reminiscent of that of bovines.
Taxonomy
Manatees are three of the four living species in the order Sirenia. The fourth is the Eastern Hemisphere's dugong. The Sirenia are thought to have evolved from four-legged land mammals more than 60 million years ago, with their closest living relatives being the Proboscidea (elephants) and Hyracoidea (hyraxes).
Description
Manatees typically weigh 400–550 kg (880–1,210 lb) and average about 2.8–3.0 m (9–10 ft) in length, with some individuals growing considerably larger; females tend to be larger and heavier than males. At birth, baby manatees weigh about 30 kg (66 lb) each. The female manatee has two teats, one under each flipper, a characteristic that was used to make early links between the manatee and elephants.
The lids of manatees' small, widely spaced eyes close in a circular manner. The manatee has a large, flexible, prehensile upper lip, used to gather food and eat and for social interaction and communication. Manatees have shorter snouts than their fellow sirenians, the dugongs.
Manatee adults have no incisor or canine teeth, just a set of cheek teeth, which are not clearly differentiated into molars and premolars. These teeth are repeatedly replaced throughout life, with new teeth growing at the rear as older teeth fall out from farther forward in the mouth, somewhat as elephants' teeth do. At any time, a manatee typically has no more than six teeth in each jaw of its mouth. The manatee's tail is paddle-shaped, and is the clearest visible difference between manatees and dugongs; a dugong tail is fluked, similar in shape to that of a whale.
The manatee is unusual among mammals in having just six cervical vertebrae, a number that may be due to mutations in the homeotic genes. All other mammals have seven cervical vertebrae, other than the two-toed and three-toed sloths. Like the horse, the manatee has a simple stomach, but a large cecum, in which it can digest tough plant matter. Generally, the intestines are about 45 meters long, unusually long for an animal of the manatee's size.
Evolution
Fossil remains of manatee ancestors, also known as sirenians, date back to the Early Eocene.
It is thought that they reached the isolated South American continent, where they became known as the Trichechidae. In the Late Miocene, trichechids were likely restricted to South American coastal rivers, where they fed on many freshwater plants; dugongs inhabited the West Atlantic and Caribbean waters and fed on seagrass meadows instead. As the sea grasses began to grow, manatees adapted to the changing environment by growing supernumerary molars. Glaciation lowered sea levels and increased erosion and silt runoff, which increased the tooth wear of the bottom-feeding manatees.
Behavior
Apart from mothers with their young, or males following a receptive female, manatees are generally solitary animals. Manatees spend approximately 50% of the day sleeping submerged, surfacing for air regularly at intervals of less than 20 minutes. The remainder of the time is mostly spent grazing in shallow waters at depths of about 1–2 m (3–7 ft). The Florida subspecies (T. m. latirostris) has been known to live up to 60 years.
Locomotion
Generally, manatees swim at about 5–8 km/h (3–5 mph). However, they have been known to swim at up to 30 km/h (19 mph) in short bursts.
Intelligence and learning
Manatees are capable of understanding discrimination tasks and show signs of complex associative learning. They also have good long-term memory. They demonstrate discrimination and task-learning abilities similar to dolphins and pinnipeds in acoustic and visual studies. Social interactions between manatees are highly complex and intricate, which may indicate higher intelligence than previously thought, although they remain poorly understood by science.
Reproduction
Manatees typically breed once every two years; generally only a single calf is born. Gestation lasts about 12 months, and weaning the calf takes a further 12 to 18 months, although females may have more than one estrous cycle per year.
Communication
Manatees emit a wide range of sounds used in communication, especially between cows and their calves. Their ears are large internally, but the external openings are small and located about four inches behind each eye. Adults communicate to maintain contact and during sexual and play behaviors. Taste and smell, in addition to sight, sound, and touch, may also be forms of communication.
Diet
Manatees are herbivores and eat over 60 different freshwater plants (e.g., floating hyacinth, pickerel weed, alligator weed, water lettuce, hydrilla, water celery, musk grass, mangrove leaves) and saltwater plants (e.g., sea grasses, shoal grass, manatee grass, turtle grass, widgeon grass, sea clover, and marine algae). Using its divided upper lip, an adult manatee will commonly eat up to 10–15% of its body weight (about 50 kg) per day. Consuming such an amount requires the manatee to graze for up to seven hours a day. To cope with the high levels of cellulose in their plant-based diet, manatees utilize hindgut fermentation to help with the digestion process. Manatees have been known to eat small numbers of fish from nets.
Feeding behavior
Manatees use their flippers to "walk" along the bottom while they dig for plants and roots in the substrate. When plants are detected, the flippers are used to scoop the vegetation toward the manatee's lips. The manatee has prehensile lips; the upper lip pad is split into left and right sides, which can move independently. The lips use seven muscles to manipulate and tear at plants. Manatees use their lips and front flippers to move the plants into the mouth.
The manatee does not have front teeth; however, behind the lips, on the roof of the mouth, there are dense, ridged pads. These horny ridges, and the manatee's lower jaw, tear through ingested plant material.
Dentition
Manatees have four rows of teeth. There are 6 to 8 high-crowned, open-rooted molars located along each side of the upper and lower jaw, giving a total of 24 to 32 flat, rough-textured teeth. Eating gritty vegetation abrades the teeth, particularly the enamel crown, and research indicates that the enamel structure in manatee molars is weak. To compensate for this, manatee teeth are continually replaced. When anterior molars wear down, they are shed. Posterior molars erupt at the back of the row and slowly move forward to replace the shed teeth, like enamel crowns on a conveyor belt, similarly to elephants. This process continues throughout the manatee's lifetime. The rate at which the teeth migrate forward depends on how quickly the anterior teeth abrade; some studies indicate that the rate is about 1 cm/month, although other studies indicate 0.1 cm/month.
Ecology
Range and habitat
Manatees inhabit the shallow, marshy coastal areas and rivers of the Caribbean Sea and the Gulf of Mexico (T. manatus, West Indian manatee), the Amazon basin (T. inunguis, Amazonian manatee), and West Africa (T. senegalensis, West African manatee). West Indian manatees prefer warmer temperatures and are known to congregate in shallow waters. They frequently migrate through brackish water estuaries to freshwater springs. They cannot survive below 15 °C (60 °F). Their natural source of warmth during winter is warm, spring-fed rivers.
West Indian
The coast of the state of Georgia is usually the northernmost range of the West Indian manatee, because its low metabolic rate does not protect it in cold water. Prolonged exposure to water below 20 °C (68 °F) can cause "cold stress syndrome" and death. West Indian manatees can move freely between fresh water and salt water; however, studies suggest that they are susceptible to dehydration if fresh water is not available for an extended period of time.
Manatees can travel hundreds of miles annually and have been seen as far north as Cape Cod; in 1995 and again in 2006, one was seen in New York City and in Rhode Island's Narragansett Bay. A manatee was spotted in the Wolf River harbor near the Mississippi River in downtown Memphis in 2006, and was later found dead downriver in McKellar Lake. Another manatee was found dead on a New Jersey beach in February 2020, considered especially unusual given the time of year; at the time of the manatee's discovery, the water temperature in the area was below 6.5 °C (43.7 °F).
The West Indian manatee migrates into Florida rivers such as the Crystal, the Homosassa, and the Chassahowitzka, whose headsprings stay at 22 °C (72 °F) all year. Between November and March each year, about 600 West Indian manatees gather in the rivers of Citrus County, Florida, such as those in the Crystal River National Wildlife Refuge. In winter, manatees often gather near the warm-water outflows of power plants along the Florida coast instead of migrating south as they once did. Some conservationists are concerned that these manatees have become too reliant on these artificially warmed areas.
Accurate population estimates of the West Indian manatee in Florida are difficult. Such estimates have been called scientifically weak because they vary widely from year to year, with most areas showing decreases, and little strong evidence of increases except in two areas.
Manatee counts are highly variable without an accurate way to estimate numbers. In Florida in 1996, a winter survey found 2,639 manatees; in 1997, a January survey found 2,229, and a February survey found 1,706. A statewide synoptic survey in January 2010 found 5,067 manatees living in Florida, the highest number recorded to that time. As of January 2016, the USFWS estimated the range-wide West Indian manatee population to be at least 13,000; as of January 2018, at least 6,100 were estimated to be in Florida. Population viability studies conducted in 1997 found that decreasing adult survival and eventual extinction were probable future outcomes for Florida manatees unless they received more protection. The U.S. Fish and Wildlife Service proposed downgrading the manatee's status from endangered to threatened in January 2016, after more than 40 years on the endangered list.
There is a small population of the subspecies Antillean manatee (T. m. manatus) in Mexico's Caribbean coastal area. The best estimate for this population is 200–250 individuals. In 2022, a new manatee habitat was discovered by Klaus Thymann within the cenotes of the Sian Ka'an Biosphere Reserve on the Yucatán Peninsula. The explorer and his team documented the discovery with a 12-minute film that is available on the interactive streaming platform WaterBear. The discovery was picked up by New Scientist in 2024, which featured it in a 10-minute short film.
Amazonian
The freshwater Amazonian manatee (T. inunguis) inhabits the Central Amazon Basin in Brazil, eastern Peru, and southeastern Colombia, but not Ecuador. It is the only exclusively freshwater manatee, and is also the smallest. Since it is unable to reduce peripheral heat loss, it is found primarily in tropical waters.
West African
West African manatees are found in coastal marine and estuarine habitats, and in freshwater river systems along the west coast of Africa from the Senegal River south to the Cuanza River in Angola. They live as far upriver on the Niger River as Koulikoro in Mali, far from the coast.
Predation
Compared with the threat posed by humans, predation does not present a significant threat to manatees. When threatened, the manatee's response is to dive as deeply as it can, suggesting that threats have most frequently come from land dwellers such as humans rather than from other water-dwelling creatures such as caimans or sharks.
Relation to humans
Threats
The main causes of death for manatees are human-related issues, such as habitat destruction and human objects. Natural causes of death include adverse temperatures, predation by crocodiles on young, and disease.
Ship strikes
Their slow-moving, curious nature, coupled with dense coastal development, has led to many violent collisions with propeller-driven boats and ships, leading frequently to maiming, disfigurement, and even death. As a result, a large proportion of manatees exhibit spiral propeller-cut scars on their backs, usually caused by larger vessels that do not have skegs in front of the propellers as smaller outboard and inboard-outboard recreational boats do. Individual manatees are now even identified by humans based on their scar patterns. Many manatees have been cut in two by large vessels such as ships and tugboats, even in the narrow channels of the highly populated lower St. Johns River. Some are concerned that the current situation is inhumane, with upwards of 50 scars and disfigurements from vessel strikes on a single manatee. Often, the lacerations lead to infections, which can prove fatal.
Internal injuries, stemming from being trapped between hulls and docks and from direct impacts, have also been fatal. Recent testing shows that manatees may be able to hear speed boats and other watercraft approaching, owing to the frequencies such boats produce. However, a manatee may not be able to hear approaching boats when it is occupied with day-to-day activities or distractions. The manatee has a tested frequency range of 8 to 32 kilohertz; manatees hear at higher frequencies than would be expected for such large marine mammals. Many large boats emit very low frequencies, which confuse manatees and explain their lack of awareness around boats. The Lloyd's mirror effect results in low-frequency propeller sounds not being discernible near the surface, where most accidents occur. Research indicates that when a boat emits a higher frequency, manatees rapidly swim away from the danger.
In 2003, a population model was released by the United States Geological Survey that predicted an extremely grave situation confronting the manatee in both the Southwest and Atlantic regions, where the vast majority of manatees are found. One quarter of annual manatee deaths in Florida are caused by boat collisions with manatees. In 2009, of the 429 Florida manatees recorded dead, 97 were killed by commercial and recreational vessels, which broke the earlier record number of 95 set in 2002.
Red tide
Another cause of manatee deaths is red tide, a term used for the proliferation, or "blooms", of the microscopic marine alga Karenia brevis. This dinoflagellate produces brevetoxins that can have toxic effects on the central nervous system of animals. In 1996, a red tide was responsible for 151 manatee deaths in Florida. The bloom was present from early March to the end of April and killed approximately 15% of the known population of manatees along South Florida's western coast. Other blooms in 1982 and 2005 resulted in 37 and 44 deaths respectively, and a red tide killed 123 manatees between November 2022 and June 2023.
Starvation
In 2021, a massive die-off of seagrass along the Atlantic coast of Florida left manatees without enough food to eat. As a result of this ecological disaster, Florida's manatees began dying at an alarming rate, largely from starvation. In early 2022, the U.S. Fish and Wildlife Service began a feeding program to address the situation, distributing 3,000 pounds (1,361 kg) of lettuce per day to save the malnourished animals.
Additional threats
Manatees can also be crushed and isolated in water control structures (navigation locks, floodgates, etc.) and are occasionally killed by entanglement in fishing gear, such as crab pot float lines, box traps, and shark nets. While humans are allowed to swim with manatees in one area of Florida, there have been numerous charges of people harassing and disturbing the manatees. According to the United States Fish and Wildlife Service, approximately 99 manatee deaths each year are related to human activities. In January 2016, there were 43 manatee deaths in Florida alone.
Conservation
All three species of manatee are listed by the World Conservation Union as vulnerable to extinction. However, the U.S. Fish and Wildlife Service (FWS) no longer considers the West Indian manatee to be "endangered", having downgraded its status to "threatened" as of March 2017. The agency cites improvements in habitat conditions, population growth, and reductions of threats as the reasoning for the change.
The reclassification was met with controversy, with Florida congressman Vern Buchanan and groups such as the Save the Manatee Club and the Center for Biological Diversity expressing concerns that the change would have a detrimental effect on conservation efforts. The new classification does not affect current federal protections. West Indian manatees were originally classified as endangered in the 1967 class of endangered species. Manatee deaths in the state of Florida nearly doubled in 2021, from 637 (in 2020) to 1,100. Although this number decreased to 800 in 2022, it is likely that the current rate of development in Florida, climate change, and decreasing water quality, habitat range, and genetic diversity among this population may lead to reconsideration of the West Indian manatee as an endangered species.
The manatee population in the United States reached a low in the 1970s, during which only a few hundred individuals lived in the nation. As of February 2016, 6,250 manatees were reported swimming in Florida's springs. It is illegal under federal and Florida law to injure or harm a manatee.
Many conservation programs have been created to help manatees. Save the Manatee Club is a non-profit group and membership organization that works to protect manatees and their aquatic ecosystems. Founded by Bob Graham, former Florida governor, and singer/songwriter Jimmy Buffett, it is today's leading manatee conservation club. The MV Freedom Star and MV Liberty Star, ships used by NASA to tow Space Shuttle Solid Rocket Boosters back to Kennedy Space Center, were propelled only by water jets to protect the endangered manatee population that inhabits regions of the Banana River, where the ships were based.
Brazil outlawed hunting in 1973 in an effort to preserve the species, though deaths by boat strikes are still common. Although countries are protecting Amazonian manatees in the locations where they are endangered, as of 1994 there were no enforced laws, and manatees were still being captured throughout their range.
Captivity
There are a number of manatee rehabilitation centers in the United States. These include three government-run critical care facilities in Florida, at Lowry Park Zoo, Miami Seaquarium, and SeaWorld Orlando. After initial treatment at these facilities, the manatees are transferred to rehabilitation facilities before release. These include the Cincinnati Zoo and Botanical Garden, Columbus Zoo and Aquarium, Epcot's The Seas, the South Florida Museum, and Homosassa Springs Wildlife State Park. The Columbus Zoo was a founding member of the Manatee Rehabilitation Partnership in 2001. Since 1999, the zoo's Manatee Bay facility has helped rehabilitate 20 manatees. The Cincinnati Zoo has rehabilitated and released more than a dozen manatees since 1999.
Manatees can also be viewed in a number of European zoos, such as the Tierpark Berlin and the Nuremberg Zoo in Germany, the ZooParc de Beauval in France, the Aquarium of Genoa in Italy, and the Royal Burgers' Zoo in Arnhem, the Netherlands, where manatees have parented offspring. The River Safari at Singapore features seven of them.
The oldest manatee in captivity was Snooty, at the South Florida Museum's Parker Manatee Aquarium in Bradenton, Florida. Born at the Miami Aquarium and Tackle Company on July 21, 1948, Snooty was one of the first recorded captive manatee births. Raised entirely in captivity, Snooty was never to be released into the wild.
As such, he was the only manatee at the aquarium, and one of only a few captive manatees in the United States that were allowed to interact with human handlers. That made him uniquely suitable for manatee research and education. Snooty died suddenly two days after his 69th birthday, on July 23, 2017, when he was found in an underwater area used only to access plumbing for the exhibit's life support system. The South Florida Museum's initial press release stated, "Early indications are that an access panel door that is normally bolted shut had somehow been knocked loose and that Snooty was able to swim in."
Guyana
Since the 19th century, Georgetown, Guyana, has kept West Indian manatees in its botanical garden, and later, its national park. In the 1910s and again in the 1950s, sugar estates in Guyana used manatees to keep their irrigation canals weed-free. Between the 1950s and 1970s, the Georgetown water treatment plant used manatees in its storage canals for the same purpose.
Culture
The manatee has been linked to folklore on mermaids. In West African folklore, manatees were considered sacred and thought to have once been human; killing one was taboo and required penance.
In the novel Moby-Dick, Herman Melville distinguishes manatees ("Lamatins") from small whales, stating, "I am aware that down to the present time, the fish styled Lamatins and Dugongs (Pig-fish and Sow-fish of the Coffins of Nantucket) are included by many naturalists among the whales. But as these pig-fish are a noisy, contemptible set, mostly lurking in the mouths of rivers, and feeding on wet hay, and especially as they do not spout, I deny their credentials as whales; and have presented them with their passports to quit the Kingdom of Cetology."
A manatee called Wardell appears in the Animal Crossing: New Horizons video game. He is part of a paid downloadable content expansion, managing and selling furniture to the player.
In Rudyard Kipling's The White Seal (one of the stories in The Jungle Book), Sea Cow, who the story says has only six cervical vertebrae, is a manatee.
Marsupial
Marsupials are a diverse group of mammals belonging to the infraclass Marsupialia. They are natively found in Australasia, Wallacea, and the Americas. One of the defining features of marsupials is their unique reproductive strategy, in which the young are born in a relatively undeveloped state and then nurtured within a pouch on their mother's abdomen.
Living marsupials encompass a wide range of species, including kangaroos, koalas, opossums, possums, Tasmanian devils, wombats, wallabies, and bandicoots, among others. Marsupials constitute a clade stemming from the last common ancestor of extant Metatheria, which encompasses all mammals more closely related to marsupials than to placentals. This evolutionary split between placentals and marsupials occurred at least 125 million years ago, possibly dating back over 160 million years to the Middle Jurassic-Early Cretaceous period.
Presently, close to 70% of the 334 extant species of marsupials are concentrated on the Australian continent, including mainland Australia, Tasmania, New Guinea, and nearby islands. The remaining 30% are distributed across the Americas, primarily in South America, with thirteen species in Central America and a single species, the Virginia opossum, inhabiting North America north of Mexico. Marsupials range in size from a few grams in the long-tailed planigale to several tonnes in the extinct Diprotodon.
The word marsupial comes from marsupium, the technical term for the abdominal pouch. It, in turn, is borrowed from Latin and ultimately from the Ancient Greek mársippos, meaning "pouch".
Anatomy
Marsupials have the typical characteristics of mammals, e.g., mammary glands, three middle ear bones, ears that usually have tragi (varying in hearing thresholds), and true hair. There are, however, striking differences, as well as a number of anatomical features that separate them from eutherians. Most female marsupials have a front pouch, which contains multiple teats for the sustenance of their young.
Marsupials also have other common structural features. Ossified patellae are absent in most modern marsupials (though a small number of exceptions are reported) and epipubic bones are present. Marsupials (and monotremes) also lack a gross communication (corpus callosum) between the right and left brain hemispheres.
Skull and teeth
Marsupials exhibit distinct cranial features compared to placentals. Generally, their skulls are relatively small and compact. Notably, they possess frontal openings, known as the foramen lacrimale, situated at the front of the orbit. Marsupials also have enlarged cheekbones that extend further to the rear, and their lower jaw's angular extension (processus angularis) is bent inward toward the center. The hard palate of marsupials contains more openings than that of placentals.
Teeth in marsupials also differ significantly from those in placentals. For instance, most Australian marsupials outside the order Diprotodontia have a varying number of incisors between their upper and lower jaws. Early marsupials had a dental formula of 5.1.3.4/4.1.3.4 per quadrant, consisting of five (maxillary) or four (mandibular) incisors, one canine, three premolars, and four molars, totaling 50 teeth. While some taxa, like the opossum, retain this original tooth count, others have reduced numbers. For instance, members of the family Macropodidae, including kangaroos and wallabies, have a dental formula of 3/1 – (0 or 1)/0 – 2/2 – 4/4. Many marsupials typically have between 40 and 50 teeth, notably more than most placentals.
Notably, in marsupials, the second set of teeth only grows in at the site of the third premolar and posteriorly; all teeth anterior to this erupt initially as permanent teeth.
Torso
A few general characteristics describe the marsupial skeleton. In addition to unique details in the construction of the ankle, epipubic bones (ossa epubica) project forward from the pubic bone of the pelvis. Since these are present in males and in pouchless species, it is believed that they originally had nothing to do with reproduction, but served in the muscular approach to the movement of the hind limbs. This could be explained as an original feature of mammals, since epipubic bones are also found in monotremes.
Marsupial reproductive organs differ from those of placentals: the reproductive tract is doubled. The females have two uteri and two vaginas, and before birth, a birth canal forms between them, the median vagina. In most species, males have a split or double penis lying in front of the scrotum, which is not homologous to the scrotum of placentals.
A pouch is present in most, but not all, species. Many marsupials have a permanent pouch, whereas in others the pouch develops during gestation, as with the shrew opossum, where the young are hidden only by skin folds or in the fur of the mother. The arrangement of the pouch is variable to allow the offspring to receive maximum protection. Kangaroos, which move upright, have a pouch opening at the front, while many others that walk or climb on all fours have the opening at the back. Usually, only females have a pouch, but the male water opossum has a pouch that is used to accommodate his genitalia while swimming or running.
General and convergences
Marsupials have adapted to many habitats, reflected in the wide variety of their builds. The largest living marsupial, the red kangaroo, grows up to 1.8 m (5 ft 11 in) in height and 90 kg (200 lb) in weight, but extinct genera, such as Diprotodon, were significantly larger and heavier. The smallest members of this group are the marsupial mice, which often reach only 5 cm (2 in) in body length.
Some species resemble placentals and are examples of convergent evolution. This convergence is evident in both brain evolution and behaviour. The extinct thylacine strongly resembled the placental wolf, hence one of its nicknames, "Tasmanian wolf". The ability to glide evolved independently in both marsupials (as with sugar gliders) and some placentals (as with flying squirrels). Other groups, such as the kangaroo, do not have clear placental counterparts, though they share similarities in lifestyle and ecological niches with ruminants.
Body temperature
Marsupials, along with monotremes (platypuses and echidnas), typically have lower body temperatures than similarly sized placentals (eutherians). Some species will bask to conserve energy.
Reproductive system
Marsupials' reproductive systems differ markedly from those of placentals. During embryonic development, a choriovitelline placenta forms in all marsupials. In bandicoots, an additional chorioallantoic placenta forms, although it lacks the chorionic villi found in eutherian placentas. The evolution of reproduction in marsupials, and speculation about the ancestral state of mammalian reproduction, have been subjects of discussion since the end of the 19th century. Both sexes possess a cloaca, although in most species it is modified by connecting to a urogenital sac and having a separate anal region.
The bladder of marsupials functions as a site to concentrate urine and empties into the common urogenital sinus in both females and males.
Male reproductive system
Most male marsupials, except for macropods and marsupial moles, have a bifurcated penis, separated into two columns, so that the penis has two ends corresponding to the females' two vaginas. The penis is used only during copulation and is separate from the urinary tract. It curves forward when erect, and when not erect, it is retracted into the body in an S-shaped curve. Neither marsupials nor monotremes possess a baculum. The shape of the glans penis varies among marsupial species. The male thylacine had a pouch that acted as a protective sheath, covering his external reproductive organs while he ran through thick brush.
The shape of the urethral grooves of the males' genitalia is used to distinguish between Monodelphis brevicaudata, Monodelphis domestica, and Monodelphis americana. The grooves form two separate channels that form the ventral and dorsal folds of the erectile tissue. Several species of dasyurid marsupials can also be distinguished by their penis morphology.
The only accessory sex glands marsupials possess are the prostate and bulbourethral glands; male marsupials have one to three pairs of bulbourethral glands. There are no ampullae of the vas deferens, seminal vesicles, or coagulating glands. The prostate is proportionally larger in marsupials than in placentals. During the breeding season, the male tammar wallaby's prostate and bulbourethral glands enlarge; however, there does not appear to be any seasonal difference in the weight of the testes.
Female reproductive system
Female marsupials have two lateral vaginas, which lead to separate uteri, but both open externally through the same orifice. A third canal, the median vagina, is used for birth. This canal can be transitory or permanent. Some marsupial species are able to store sperm in the oviduct after mating.
Marsupials give birth at a very early stage of development; after birth, newborn marsupials crawl up the bodies of their mothers and attach themselves to a teat, which is located on the underside of the mother, either inside a pouch called the marsupium or open to the environment. Mothers often lick their fur to leave a trail of scent for the newborn to follow, increasing its chances of making it into the marsupium. There the newborn remains for a number of weeks, attached to the teat. The offspring are eventually able to leave the marsupium for short periods, returning to it for warmth, protection, and nourishment.
Early development
Prenatal development differs between marsupials and placentals. Key aspects of the first stages of placental embryo development, such as the inner cell mass and the process of compaction, are not found in marsupials. The cleavage stages of marsupial development are very variable between groups, and aspects of marsupial early development are not yet fully understood.
An infant marsupial is known as a joey. Marsupials have a very short gestation period, usually between 12.5 and 33 days, but as low as 10.7 days in the case of the stripe-faced dunnart and as long as 38 days for the long-nosed potoroo. The joey is born in an essentially fetal state, equivalent to an 8–12-week human fetus: blind, furless, and small in comparison to placental newborns, with sizes ranging from 4 g to over 800 g. A newborn marsupial can be arranged into one of three grades of developmental complexity.
Those that are least developed at birth are found in dasyurids, intermediate ones in didelphids and peramelids, and the most developed in macropods. Despite its lack of development, the newborn crawls across its mother's fur to make its way into the pouch, which acts like an external womb, where it latches onto a teat for food. It will not re-emerge for several months, during which time it is fully reliant on its mother's milk for essential nutrients, growth factors and immunological defence. Genes that are expressed in the eutherian placenta and are important for the later stages of fetal development are instead expressed in the mammary glands of female marsupials during lactation. After this period, the joey begins to spend increasing lengths of time out of the pouch, feeding and learning survival skills. However, it returns to the pouch to sleep, and if danger threatens, it will seek refuge in its mother's pouch. An early birth removes a developing marsupial from its mother's body much sooner than in placentals; thus marsupials have not developed a complex placenta to protect the embryo from its mother's immune system. Though early birth puts the tiny newborn marsupial at greater environmental risk, it significantly reduces the dangers associated with long pregnancies, as there is no need to carry a large fetus to full term in bad seasons. Marsupials are extremely altricial animals, needing to be intensely cared for immediately following birth (cf. precocial). Newborn marsupials lack histologically mature immune tissues and are highly reliant on their mother's immune system, as well as her milk, for immunological protection. Because newborn marsupials must climb up to their mother's teats, their front limbs and facial structures are much more developed than the rest of their bodies at the time of birth. This requirement has been argued to have resulted in the limited range of locomotor adaptations in marsupials compared to placentals. Marsupials must develop grasping forepaws during their early youth, making the evolutionary transition from these limbs into hooves, wings, or flippers, as some groups of placentals have done, more difficult. However, several marsupials do possess atypical forelimb morphologies, such as the hooved forelimbs of the pig-footed bandicoot, suggesting that the range of forelimb specialization is not as limited as assumed. Joeys stay in the pouch for up to a year in some species, or until the next joey is born. A marsupial joey is unable to regulate its body temperature and relies upon an external heat source. Until the joey is well-furred and old enough to leave the pouch, a pouch temperature of must be constantly maintained. Joeys are born with "oral shields", which consist of soft tissue that reduces the mouth opening to a round hole just large enough to accept the mother's teat. Once inside the mouth, a bulbous swelling on the end of the teat attaches it to the offspring until it has grown large enough to let go. In species without pouches or with rudimentary pouches, these shields are more developed than in forms with well-developed pouches, implying a greater role in keeping the young attached to the mother's teat. Geography In Australasia, marsupials are found in Australia, Tasmania and New Guinea; throughout the Maluku Islands, Timor and Sulawesi to the west of New Guinea, and in the Bismarck Archipelago (including the Admiralty Islands) and Solomon Islands to the east of New Guinea.
In the Americas, marsupials are found throughout South America, excluding the central/southern Andes and parts of Patagonia; and through Central America and south-central Mexico, with a single species (the Virginia opossum Didelphis virginiana) widespread in the eastern United States and along the Pacific coast. Interaction with Europeans The first American marsupial (and marsupial in general) that a European encountered was the common opossum. Vicente Yáñez Pinzón, who had commanded the Niña on Christopher Columbus' first voyage, collected a female opossum with young in her pouch off the South American coast during a later voyage at the close of the fifteenth century. He presented them to the Spanish monarchs, though by then the young were lost and the female had died. The animal was noted for its strange pouch or "second belly", and how the offspring reached the pouch was a mystery. It was the Portuguese, on the other hand, who first described Australasian marsupials. António Galvão, a Portuguese administrator in Ternate (1536–1540), wrote a detailed account of the northern common cuscus (Phalanger orientalis). From the start of the 17th century, more accounts of marsupials arrived. For instance, a 1606 record of an animal, killed on the southern coast of New Guinea, described it as "in the shape of a dog, smaller than a greyhound", with a snakelike "bare scaly tail" and hanging testicles. The meat tasted like venison, and the stomach contained ginger leaves. This description appears to closely resemble the dusky pademelon (Thylogale brunii), in which case this would be the earliest European record of a member of the kangaroo family (Macropodidae). Taxonomy Marsupials are taxonomically identified as members of the mammalian infraclass Marsupialia, first described as a family under the order Pollicata by German zoologist Johann Karl Wilhelm Illiger in his 1811 work Prodromus Systematis Mammalium et Avium. However, James Rennie, author of The Natural History of Monkeys, Opossums and Lemurs (1838), pointed out that the placement of five different groups of mammals – monkeys, lemurs, tarsiers, aye-ayes and marsupials (with the exception of kangaroos, which were placed under the order Salientia) – under a single order (Pollicata) did not appear to have a strong justification. In 1816, French zoologist Georges Cuvier classified all marsupials under the order Marsupialia. In 1997, researcher J. A. W. Kirsch and others accorded infraclass rank to Marsupialia.
Classification With seven living orders in total, Marsupialia is further divided as follows († – extinct):
Superorder Ameridelphia (American marsupials)
 Order Didelphimorphia (93 species) – see list of didelphimorphs
  Family Didelphidae: opossums
 Order Paucituberculata (seven species)
  Family Caenolestidae: shrew opossums
Superorder Australidelphia (Australian marsupials)
 Order Microbiotheria (one extant species)
  Family Microbiotheriidae: monitos del monte
 Order †Yalkaparidontia (incertae sedis)
 Grandorder Agreodontia
  Order Dasyuromorphia (73 species) – see list of dasyuromorphs
   Family †Thylacinidae: thylacine
   Family Dasyuridae: antechinuses, quolls, dunnarts, Tasmanian devil, and relatives
   Family Myrmecobiidae: numbat
  Order Notoryctemorphia (two species)
   Family Notoryctidae: marsupial moles
  Order Peramelemorphia (27 species)
   Family Thylacomyidae: bilbies
   Family †Chaeropodidae: pig-footed bandicoots
   Family Peramelidae: bandicoots and allies
  Order Diprotodontia (136 species) – see list of diprotodonts
   Suborder Vombatiformes
    Family Vombatidae: wombats
    Family Phascolarctidae: koalas
    Family Diprotodontidae
    Family Palorchestidae: marsupial tapirs
    Family Thylacoleonidae: marsupial lions
   Suborder Phalangerida
    Infraorder Phalangeriformes – see list of phalangeriformes
     Family Acrobatidae: feathertail glider and feather-tailed possum
     Family Burramyidae: pygmy possums
     Family †Ektopodontidae: sprite possums
     Family Petauridae: striped possum, Leadbeater's possum, yellow-bellied glider, sugar glider, mahogany glider, squirrel glider
     Family Phalangeridae: brushtail possums and cuscuses
     Family Pseudocheiridae: ringtailed possums and relatives
     Family Tarsipedidae: honey possum
    Infraorder Macropodiformes – see list of macropodiformes
     Family Macropodidae: kangaroos, wallabies, and relatives
     Family Potoroidae: potoroos, rat kangaroos, bettongs
     Family Hypsiprymnodontidae: musky rat-kangaroo
     Family Balbaridae: basal quadrupedal kangaroos
Evolutionary history Marsupialia comprises over 300 extant species, and several attempts have been made to interpret the phylogenetic relationships among the different marsupial orders accurately. Studies differ on whether Didelphimorphia or Paucituberculata is the sister group to all other marsupials. Though the order Microbiotheria (which has only one species, the monito del monte) is found in South America, morphological similarities suggest it is closely related to Australian marsupials. Molecular analyses in 2010 and 2011 identified Microbiotheria as the sister group to all Australian marsupials. However, the relations among the four Australidelphid orders are not as well understood. DNA evidence supports a South American origin for marsupials, with Australian marsupials arising from a single Gondwanan migration of marsupials from South America, across Antarctica, to Australia. There are many small arboreal species in each group. The term "opossum" is used to refer to American species (though "possum" is a common abbreviation), while similar Australian species are properly called "possums". The relationships among the three extant divisions of mammals (monotremes, marsupials, and placentals) were long a matter of debate among taxonomists. Most morphological evidence – comparing traits such as the number and arrangement of teeth and the structure of the reproductive and waste elimination systems – as well as most genetic and molecular evidence favors a closer evolutionary relationship between marsupials and placentals than either has with the monotremes.
The ancestors of marsupials, part of a larger group called metatherians, probably split from those of placentals (eutherians) during the mid-Jurassic period, though no fossil evidence of metatherians themselves is known from this time. From DNA and protein analyses, the time of divergence of the two lineages has been estimated to be around 100 to 120 mya. Fossil metatherians are distinguished from eutherians by the form of their teeth; metatherians possess four pairs of molar teeth in each jaw, whereas eutherian mammals (including true placentals) never have more than three pairs. Using this criterion, the earliest known metatherian was thought to be Sinodelphys szalayi, which lived in China around 125 mya. However, Sinodelphys was later reinterpreted as an early member of Eutheria. The oldest unequivocally known metatherians are now 110-million-year-old fossils from western North America. Metatherians were widespread in North America and Asia during the Late Cretaceous, but suffered a severe decline during the end-Cretaceous extinction event. Cladogram from Wilson et al. (2016) In 2022, a study provided strong evidence that the earliest known marsupial was Deltatheridium, known from specimens from the Campanian age of the Late Cretaceous in Mongolia. This study placed both Deltatheridium and Pucadelphys as sister taxa to the modern large American opossums. Marsupials spread to South America from North America during the Paleocene, possibly via the Aves Ridge. Northern Hemisphere metatherians, which were of low morphological and species diversity compared to contemporary placental mammals, eventually became extinct during the Miocene epoch. In South America, the opossums evolved and developed a strong presence, and the Paleogene also saw the evolution of shrew opossums (Paucituberculata) alongside non-marsupial metatherian predators such as the borhyaenids and the saber-toothed Thylacosmilus. South American niches for mammalian carnivores were dominated by these marsupial and sparassodont metatherians, which seem to have competitively excluded South American placentals from evolving carnivory. While placental predators were absent, the metatherians did have to contend with avian (terror bird) and terrestrial crocodylomorph competition. Marsupials were excluded in turn from large herbivore niches in South America by the presence of native placental ungulates (now extinct) and xenarthrans (whose largest forms are also extinct). South America and Antarctica remained connected until 35 mya, as shown by the unique fossils found there. North and South America were disconnected until about three million years ago, when the Isthmus of Panama formed. This led to the Great American Interchange. Sparassodonts disappeared for unclear reasons – again, this has classically been attributed to competition from carnivoran placentals, but the last sparassodonts co-existed with a few small carnivorans like procyonids and canines, and disappeared long before the arrival of macropredatory forms like felines – while didelphimorphs (opossums) invaded Central America, with the Virginia opossum reaching as far north as Canada. Marsupials reached Australia via Antarctica during the Early Eocene, around 50 mya, shortly after Australia had split off. This suggests a single dispersal event of just one species, most likely a relative of South America's monito del monte (a microbiothere, the only New World australidelphian).
This progenitor may have rafted across the widening, but still narrow, gap between Australia and Antarctica. The journey must not have been easy; South American ungulate and xenarthran remains have been found in Antarctica, but these groups did not reach Australia. In Australia, marsupials radiated into the wide variety seen today, including not only omnivorous and carnivorous forms such as were present in South America, but also large herbivores. Modern marsupials appear to have reached the islands of New Guinea and Sulawesi relatively recently via Australia. A 2010 analysis of retroposon insertion sites in the nuclear DNA of a variety of marsupials confirmed that all living marsupials have South American ancestors. The branching sequence of marsupial orders indicated by the study puts Didelphimorphia in the most basal position, followed by Paucituberculata, then Microbiotheria, and ending with the radiation of Australian marsupials. This indicates that Australidelphia arose in South America, and reached Australia after Microbiotheria split off. In Australia, terrestrial placentals disappeared early in the Cenozoic (their most recent known fossils being 55-million-year-old teeth resembling those of condylarths) for reasons that are not clear, allowing marsupials to dominate the Australian ecosystem. Extant native Australian terrestrial placentals (such as hopping mice) are relatively recent immigrants, arriving via island hopping from Southeast Asia. Genetic analysis suggests a divergence date between the marsupials and the placentals at . The ancestral number of chromosomes has been estimated to be 2n = 14. A recent hypothesis suggests that South American microbiotheres resulted from a back-dispersal from eastern Gondwana. This interpretation is based on new cranial and post-cranial marsupial fossils of Djarthia murgonensis from the early Eocene Tingamarra Local Fauna in Australia that indicate this species is the most plesiomorphic ancestor, the oldest unequivocal australidelphian, and may be the ancestral morphotype of the Australian marsupial radiation. In 2023, imaging of a partial skeleton found in Australia by paleontologists from Flinders University led to the identification of Ambulator keanei, the first long-distance walker in Australia.
Biology and health sciences
Marsupials
null
20232
https://en.wikipedia.org/wiki/Messenger%20RNA
Messenger RNA
In molecular biology, messenger ribonucleic acid (mRNA) is a single-stranded molecule of RNA that corresponds to the genetic sequence of a gene, and is read by a ribosome in the process of synthesizing a protein. mRNA is created during the process of transcription, where an enzyme (RNA polymerase) converts the gene into primary transcript mRNA (also known as pre-mRNA). This pre-mRNA usually still contains introns, regions that will not go on to code for the final amino acid sequence. These are removed in the process of RNA splicing, leaving only exons, regions that will encode the protein. This exon sequence constitutes mature mRNA. Mature mRNA is then read by the ribosome, and the ribosome creates the protein utilizing amino acids carried by transfer RNA (tRNA). This process is known as translation. All of these processes form part of the central dogma of molecular biology, which describes the flow of genetic information in a biological system. As in DNA, genetic information in mRNA is contained in the sequence of nucleotides, which are arranged into codons consisting of three ribonucleotides each. Each codon codes for a specific amino acid, except the stop codons, which terminate protein synthesis. The translation of codons into amino acids requires two other types of RNA: transfer RNA, which recognizes the codon and provides the corresponding amino acid, and ribosomal RNA (rRNA), the central component of the ribosome's protein-manufacturing machinery. The concept of mRNA was developed by Sydney Brenner and Francis Crick in 1960 during a conversation with François Jacob. In 1961, mRNA was identified and described independently by one team consisting of Brenner, Jacob, and Matthew Meselson, and another team led by James Watson. While analyzing the data in preparation for publication, Jacob and Jacques Monod coined the name "messenger RNA". Synthesis The brief existence of an mRNA molecule begins with transcription, and ultimately ends in degradation. During its life, an mRNA molecule may also be processed, edited, and transported prior to translation. Eukaryotic mRNA molecules often require extensive processing and transport, while prokaryotic mRNA molecules do not. A molecule of eukaryotic mRNA and the proteins surrounding it are together called a messenger RNP. Transcription Transcription is when RNA is copied from DNA. During transcription, RNA polymerase makes a copy of a gene from the DNA to mRNA as needed. This process differs slightly in eukaryotes and prokaryotes. One notable difference is that eukaryotic RNA polymerase associates with mRNA-processing enzymes during transcription, so that processing can proceed quickly after the start of transcription. The new mRNA strand is synthesized on the template (antisense) strand of the DNA; it therefore remains single stranded and is identical in sequence to the coding (sense) strand, except that uracil takes the place of thymine. The short-lived, unprocessed or partially processed product is termed precursor mRNA, or pre-mRNA; once completely processed, it is termed mature mRNA. Uracil substitution for thymine mRNA uses uracil (U) where DNA uses thymine (T). During transcription, uracil, rather than thymine, is paired as the complementary base to adenine (A). Thus, when using a template strand of DNA to build RNA, thymine is replaced with uracil.
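As a minimal illustration of this base-pairing rule, here is a short Python sketch; the function name and example sequence are illustrative only, not from any standard library:

import textwrap

def transcribe(template_dna: str) -> str:
    """Build an mRNA string from a DNA template strand."""
    # Adenine in the template pairs with uracil in the RNA, not thymine.
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairing[base] for base in template_dna)

# A template strand read 3'-TACGGT-5' yields the mRNA 5'-AUGCCA-3'.
print(transcribe("TACGGT"))  # AUGCCA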
This substitution allows the mRNA to carry the appropriate genetic information from DNA to the ribosome for translation. In terms of natural history, uracil came first and thymine later; evidence suggests that RNA preceded DNA in evolution. The RNA World hypothesis proposes that life began with RNA molecules, before the emergence of DNA genomes and coded proteins. In DNA, the evolutionary substitution of thymine for uracil may have increased DNA stability and improved the efficiency of DNA replication. Eukaryotic pre-mRNA processing Processing of mRNA differs greatly among eukaryotes, bacteria, and archaea. Non-eukaryotic mRNA is, in essence, mature upon transcription and requires no processing, except in rare cases. Eukaryotic pre-mRNA, however, requires several processing steps before its transport to the cytoplasm and its translation by the ribosome. Splicing A key step in the extensive processing of eukaryotic pre-mRNA that leads to mature mRNA is RNA splicing, a mechanism by which introns or outrons (non-coding regions) are removed and exons (coding regions) are joined. 5' cap addition A 5' cap (also termed an RNA cap, an RNA 7-methylguanosine cap, or an RNA m7G cap) is a modified guanine nucleotide that has been added to the "front" or 5' end of a eukaryotic messenger RNA shortly after the start of transcription. The 5' cap consists of a terminal 7-methylguanosine residue that is linked through a 5'-5'-triphosphate bond to the first transcribed nucleotide. Its presence is critical for recognition by the ribosome and protection from RNases. Cap addition is coupled to transcription, and occurs co-transcriptionally, such that each influences the other. Shortly after the start of transcription, the 5' end of the mRNA being synthesized is bound by a cap-synthesizing complex associated with RNA polymerase. This enzymatic complex catalyzes the chemical reactions that are required for mRNA capping. Synthesis proceeds as a multi-step biochemical reaction. Editing In some instances, an mRNA will be edited, changing the nucleotide composition of that mRNA. An example in humans is the apolipoprotein B mRNA, which is edited in some tissues, but not others. The editing creates an early stop codon, which, upon translation, produces a shorter protein. Another well-defined example is A-to-I (adenosine to inosine) editing, which is carried out by double-strand-specific adenosine deaminase acting on RNA (ADAR) enzymes. This can occur in both the open reading frame and untranslated regions, altering the structural properties of the mRNA. Although essential for development, the exact role of this editing is not fully understood. Polyadenylation Polyadenylation is the covalent linkage of a polyadenylyl moiety to a messenger RNA molecule. In eukaryotic organisms most messenger RNA (mRNA) molecules are polyadenylated at the 3' end, but recent studies have shown that short stretches of uridine (oligouridylation) are also common. The poly(A) tail and the protein bound to it aid in protecting mRNA from degradation by exonucleases. Polyadenylation is also important for transcription termination, export of the mRNA from the nucleus, and translation. mRNA can also be polyadenylated in prokaryotic organisms, where poly(A) tails act to facilitate, rather than impede, exonucleolytic degradation. Polyadenylation occurs during and/or immediately after transcription of DNA into RNA. After transcription has been terminated, the mRNA chain is cleaved through the action of an endonuclease complex associated with RNA polymerase.
After the mRNA has been cleaved, around 250 adenosine residues are added to the free 3' end at the cleavage site. This reaction is catalyzed by polyadenylate polymerase. Just as in alternative splicing, there can be more than one polyadenylation variant of an mRNA. Polyadenylation site mutations also occur. The primary RNA transcript of a gene is cleaved at the poly-A addition site, and 100–200 A's are added to the 3' end of the RNA. If this site is altered, an abnormally long and unstable mRNA construct will be formed. Transport Another difference between eukaryotes and prokaryotes is mRNA transport. Because eukaryotic transcription and translation are compartmentally separated, eukaryotic mRNAs must be exported from the nucleus to the cytoplasm—a process that may be regulated by different signaling pathways. Mature mRNAs are recognized by their processed modifications and then exported through the nuclear pore by binding to the cap-binding proteins CBP20 and CBP80, as well as the transcription/export complex (TREX). Multiple mRNA export pathways have been identified in eukaryotes. In spatially complex cells, some mRNAs are transported to particular subcellular destinations. In mature neurons, certain mRNAs are transported from the soma to dendrites. One site of mRNA translation is at polyribosomes selectively localized beneath synapses. The mRNA for Arc/Arg3.1 is induced by synaptic activity and localizes selectively near active synapses based on signals generated by NMDA receptors. Other mRNAs also move into dendrites in response to external stimuli, such as β-actin mRNA. For export from the nucleus, actin mRNA associates with ZBP1 and later with the 40S ribosomal subunit. The complex is bound by a motor protein and is transported to the target location (neurite extension) along the cytoskeleton. Eventually ZBP1 is phosphorylated by Src in order for translation to be initiated. In developing neurons, mRNAs are also transported into growing axons and especially growth cones. Many mRNAs are marked with so-called "zip codes", which target their transport to a specific location. mRNAs can also transfer between mammalian cells through structures called tunneling nanotubes. Translation Because prokaryotic mRNA does not need to be processed or transported, translation by the ribosome can begin immediately after the end of transcription. Therefore, it can be said that prokaryotic translation is coupled to transcription and occurs co-transcriptionally. Eukaryotic mRNA that has been processed and transported to the cytoplasm (i.e., mature mRNA) can then be translated by the ribosome. Translation may occur at ribosomes free-floating in the cytoplasm, or directed to the endoplasmic reticulum by the signal recognition particle. Therefore, unlike in prokaryotes, eukaryotic translation is not directly coupled to transcription. It is even possible in some contexts that reduced mRNA levels are accompanied by increased protein levels, as has been observed for mRNA/protein levels of EEF1A1 in breast cancer. Structure Coding regions Coding regions are composed of codons, which are decoded and translated into proteins by the ribosome – in eukaryotes usually into a single protein, and in prokaryotes usually into several. Coding regions begin with the start codon and end with a stop codon. In general, the start codon is an AUG triplet and the stop codon is UAG ("amber"), UAA ("ochre"), or UGA ("opal"). The coding regions tend to be stabilised by internal base pairs; this impedes degradation.
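To make this start/stop decoding concrete, a minimal Python sketch follows; the codon table is deliberately truncated to a few entries, and the function name and example sequence are illustrative only:

def translate(mrna: str) -> str:
    """Decode an mRNA coding region into a one-letter amino acid string."""
    # Truncated codon table; a complete one maps all 64 codons.
    table = {"AUG": "M", "UUU": "F", "GCU": "A", "UAG": "*", "UAA": "*", "UGA": "*"}
    start = mrna.find("AUG")  # translation begins at the start codon
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = table.get(mrna[i:i + 3], "?")  # "?" marks codons missing from this toy table
        if amino_acid == "*":  # UAG, UAA, and UGA terminate synthesis
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("GGAUGUUUGCUUAGAA"))  # MFA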
In addition to being protein-coding, portions of coding regions may serve as regulatory sequences in the pre-mRNA as exonic splicing enhancers or exonic splicing silencers. Untranslated regions Untranslated regions (UTRs) are sections of the mRNA before the start codon and after the stop codon that are not translated, termed the five prime untranslated region (5' UTR) and three prime untranslated region (3' UTR), respectively. These regions are transcribed with the coding region and thus are exonic as they are present in the mature mRNA. Several roles in gene expression have been attributed to the untranslated regions, including mRNA stability, mRNA localization, and translational efficiency. The ability of a UTR to perform these functions depends on the sequence of the UTR and can differ between mRNAs. Genetic variants in the 3' UTR have also been implicated in disease susceptibility because of the change in RNA structure and protein translation. The stability of mRNAs may be controlled by the 5' UTR and/or 3' UTR due to varying affinity for RNA degrading enzymes called ribonucleases and for ancillary proteins that can promote or inhibit RNA degradation.
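Tying together the processing steps described earlier (splicing, 5' capping, polyadenylation), here is a deliberately crude Python sketch of how a mature eukaryotic mRNA is assembled from a pre-mRNA; the exon coordinates and function name are hypothetical, and the default tail length of 250 simply echoes the figure quoted in the polyadenylation section:

def mature_mrna(pre_mrna: str, exons: list[tuple[int, int]], tail_len: int = 250) -> str:
    """Toy model of eukaryotic pre-mRNA processing."""
    spliced = "".join(pre_mrna[s:e] for s, e in exons)  # splicing: keep exons, drop the intron between them
    return "m7G-" + spliced + "A" * tail_len            # prepend the 5' cap, append the poly(A) tail

# Two exons flank an intron at positions 6-12; a short tail keeps the output readable.
print(mature_mrna("AUGGCUGUACGUAUUUGA", [(0, 6), (12, 18)], tail_len=5))
# m7G-AUGGCUAUUUGAAAAAA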
Biology and health sciences
Nucleic acids
Biology
20254
https://en.wikipedia.org/wiki/Miranda%20%28moon%29
Miranda (moon)
Miranda, also designated Uranus V, is the smallest and innermost of Uranus's five round satellites. It was discovered by Gerard Kuiper on 16 February 1948 at McDonald Observatory in Texas, and named after Miranda from William Shakespeare's play The Tempest. Like the other large moons of Uranus, Miranda orbits close to its planet's equatorial plane. Because Uranus orbits the Sun on its side, Miranda's orbit is nearly perpendicular to the ecliptic and shares Uranus's extreme seasonal cycle. At just in diameter, Miranda is one of the smallest closely observed objects in the Solar System that might be in hydrostatic equilibrium (spherical under its own gravity), and its total surface area is roughly equal to that of the U.S. state of Texas. The only close-up images of Miranda are from the Voyager 2 probe, which made observations of Miranda during its Uranus flyby in January 1986. During the flyby, Miranda's southern hemisphere pointed towards the Sun, so only that part was studied. Miranda probably formed from an accretion disc that surrounded the planet shortly after its formation and, like other large moons, it is likely differentiated, with an inner core of rock surrounded by a mantle of ice. Miranda has one of the most extreme and varied topographies of any object in the Solar System, including Verona Rupes, a roughly scarp that may be the highest cliff in the Solar System, and chevron-shaped tectonic features called coronae. The origin and evolution of this varied geology, the most diverse of any Uranian satellite, are still not fully understood, and multiple hypotheses exist regarding Miranda's evolution. Discovery and name Miranda was discovered on 16 February 1948 by planetary astronomer Gerard Kuiper using the McDonald Observatory's Otto Struve Telescope. Its motion around Uranus was confirmed on 1 March 1948. It was the first satellite of Uranus discovered in nearly 100 years. Kuiper elected to name the object "Miranda" after the character in Shakespeare's The Tempest, because the four previously discovered moons of Uranus, Ariel, Umbriel, Titania, and Oberon, had all been named after characters of Shakespeare or Alexander Pope. However, the previous moons had been named specifically after fairies, whereas Miranda was a human. Subsequently discovered satellites of Uranus were named after characters from Shakespeare and Pope, whether fairies or not. The moon is also designated Uranus V. Orbit Of Uranus's five round satellites, Miranda orbits closest to it, at roughly 129,000 km from the surface; about a quarter again as far as its most distant ring. Among the round moons of the major planets, it has the smallest orbit. Its orbital period is 34 hours and, like that of the Moon, is synchronous with its rotation period, which means it always shows the same face to Uranus, a condition known as tidal locking. Miranda's orbital inclination (4.34°) is unusually high for a body so close to its planet – roughly ten times that of the other major Uranian satellites, and 73 times that of Oberon. The reason for this is still uncertain; there are no mean-motion resonances between the moons that could explain it, leading to the hypothesis that the moons occasionally pass through secondary resonances, which at some point in the past led to Miranda being locked for a time into a 3:1 resonance with Umbriel, before chaotic behaviour induced by the secondary resonances moved it out again.
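As a rough consistency check on the 34-hour period quoted above, Kepler's third law can be applied in a few lines of Python; the gravitational parameter of Uranus and Miranda's semi-major axis (measured from the planet's centre) are assumed values taken from commonly published figures, not from this article:

import math

GM_URANUS = 5.794e15  # m^3/s^2, gravitational parameter of Uranus (assumed value)
A_MIRANDA = 1.299e8   # m, semi-major axis of Miranda's orbit (assumed value)

# Kepler's third law for a small satellite: T = 2*pi*sqrt(a^3 / GM)
period_s = 2 * math.pi * math.sqrt(A_MIRANDA**3 / GM_URANUS)
print(f"{period_s / 3600:.1f} h")  # 33.9 h, close to the stated 34-hour period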
In the Uranian system, due to the planet's lesser degree of oblateness and the larger relative size of its satellites, escape from a mean-motion resonance is much easier than for satellites of Jupiter or Saturn. Observation and exploration Miranda's apparent magnitude is +16.6, making it invisible to many amateur telescopes. Virtually all known information regarding its geology and geography was obtained during the flyby of Uranus made by Voyager 2 on 25 January 1986. The closest approach of Voyager 2 to Miranda was —significantly less than the distances to all other Uranian moons. Of all the Uranian satellites, Miranda had the most visible surface. The discovery team had expected Miranda to resemble Mimas, and found themselves at a loss to explain the moon's unique geography in the 24-hour window before releasing the images to the press. In 2017, as part of its Planetary Science Decadal Survey, NASA evaluated the possibility of an orbiter to return to Uranus some time in the 2020s. Uranus was the preferred destination over Neptune due to favourable planetary alignments meaning shorter flight times. Composition and internal structure At 1.15 g/cm3, Miranda is the least dense of Uranus's round satellites. That density suggests a composition of more than 60% water ice. Miranda's surface may be mostly water ice, though it is far rockier than its corresponding satellites in the Saturn system, indicating that heat from radioactive decay may have led to internal differentiation, allowing silicate rock and organic compounds to settle in its interior. Miranda is too small for any internal heat to have been retained over the age of the Solar System. Miranda is the least spherical of Uranus's satellites, with an equatorial diameter 3% wider than its polar diameter. Only water has been detected so far on Miranda's surface, though it has been speculated that methane, ammonia, carbon monoxide or nitrogen may also exist at 3% concentrations. These bulk properties are similar to Saturn's moon Mimas, though Mimas is smaller, less dense, and more oblate. A study published in 2024 suggests that Miranda might have had a liquid ocean about 100 km thick beneath the surface within the last 100–500 million years. Some studies argue that Miranda may still possess a subsurface ocean. Precisely how a body as small as Miranda could have enough internal energy to produce the myriad geological features seen on its surface has not been established with certainty, though the currently favoured hypothesis is that it was driven by tidal heating during a past time when it was in 3:1 orbital resonance with Umbriel. The resonance would have increased Miranda's orbital eccentricity to 0.1, and generated tidal friction due to the varying tidal forces from Uranus. As Miranda approached Uranus, tidal force increased; as it retreated, tidal force decreased, causing flexing that would have warmed Miranda's interior by 20 K, enough to trigger melting. The period of tidal flexing could have lasted for up to 100 million years. Also, if clathrate existed within Miranda, as has been hypothesised for the satellites of Uranus, it may have acted as an insulator, since it has a lower conductivity than water, increasing Miranda's temperature still further. Miranda may have also once been in a 5:3 orbital resonance with Ariel, which would have also contributed to its internal heating. However, the maximum heating attributable to the resonance with Umbriel was likely about three times greater. Geography Miranda has a unique surface.
Among the geological structures that cover it are fractures, faults, valleys, craters, ridges, gorges, depressions, cliffs, and terraces. This moon is a mosaic of highly varied zones. Some areas are older and darker. As such, they bear numerous impact craters, as is expected of a small inert body. Other regions are made of rectangular or ovoid strips. They feature complex sets of parallel ridges and rupes (fault scarps) as well as numerous outcrops of bright and dark materials, suggesting an exotic composition. This moon is most likely composed only of water ice on the surface, as well as silicate rocks and other more or less buried organic compounds. Regiones The regiones identified on the images taken by the Voyager 2 probe are named "Mantua Regio", "Ephesus Regio", "Sicilia Regio", and "Dunsinane Regio". They designate major regions of Miranda where hilly terrain and plains follow one another, more or less dominated by ancient impact craters. Normal faults also mark these ancient regions. Some escarpments are as old as the formation of the regions, while others are much more recent and appear to have formed after the coronae. These faults are accompanied by grabens characteristic of ancient tectonic activity. The surface of these regions is fairly uniformly dark. However, the cliffs bordering certain impact craters reveal, at depth, the presence of much brighter material. Coronae Miranda is one of the very few objects in the Solar System to have coronae (also called crowns). The three known coronae observed on Miranda are named Inverness Corona near the south pole, Arden Corona at the apex of the moon's orbital motion, and Elsinore Corona at the antapex. The highest albedo contrasts on Miranda's surface occur within the Inverness and Arden coronae. Inverness Corona Inverness Corona is a trapezoidal region of approximately on a side which lies near the south pole. This region is characterized by a central geological structure in the shape of a bright chevron, a surface with a relatively high albedo, and a series of gorges which extend northwards from a point near the pole. At a latitude of around −55°, north-south oriented gorges tend to intersect with others, which follow an east-west direction. The outer boundary of Inverness, as well as its internal patterns of ridges and bands of contrasting albedos, forms numerous salient angles. It is bounded on three sides (south, east and north) by a complex system of faults. The nature of the western boundary is less clear, but it may also be tectonic. Within the corona, the surface is dominated by parallel gorges spaced a few kilometers apart. The low number of impact craters indicates that Inverness is the youngest of the three coronae observed on the surface of Miranda. Arden Corona Arden Corona, in the leading hemisphere of Miranda, extends over approximately from east to west. The other dimension, however, remains unknown because the terrain extended beyond the terminator (on the hemisphere plunged into night) when Voyager 2 photographed it. The outer margin of this corona forms parallel dark bands which surround, in gentle curves, a more clearly rectangular core at least wide. The overall effect has been described as an ovoid of lines. The interior and belt of Arden show very different morphologies. The interior topography appears regular and soft. It is also characterized by a mottled pattern resulting from large patches of relatively bright material scattered over a generally dark surface.
The stratigraphic relationship between the light and dark marks could not be determined from the images provided by Voyager 2. The area at the margin of Arden is characterized by concentric albedo bands which extend from the western end of the corona, where they intersect cratered terrain (near 40° longitude), to the eastern side, where they extend beyond the terminator into the northern hemisphere (near 110° longitude). The contrasting albedo bands are composed of outer fault scarp faces. This succession of escarpments gradually pushes the land into a deep hollow along the border between Arden and the cratered terrain of Mantua Regio. Arden was formed during a geological episode which preceded the formation of Inverness but was contemporary with the formation of Elsinore. Elsinore Corona Elsinore Corona is the third corona, observed in the trailing hemisphere of Miranda, along the terminator. It is broadly similar to Arden in size and internal structure. They both have an outer belt about wide, which wraps around an inner core. The topography of the core of Elsinore consists of a complex set of intersecting troughs and bumps, truncated by the outer belt, which is marked by roughly concentric linear ridges. The troughs also include small segments of rolling, cratered terrain. Elsinore also presents segments of furrows, called sulci, comparable to those observed on Ganymede. Rupes Miranda also features enormous escarpments that can be traced across the moon. Some of them are older than the coronae, others younger. The most spectacular fault system begins at a deep valley visible at the terminator. This network of faults begins on the northwest side of Inverness, where it forms a deep gorge on the outer edge of the ovoid which surrounds the corona. This geological formation is named "Argier Rupes". The most impressive fault then extends to the terminator, running from the top of the central "chevron" of Inverness. Near the terminator, a gigantic bright cliff, named Verona Rupes, forms complex grabens. The fault is approximately wide; the graben at the bright edge is 10 to deep, and the height of the sheer cliff is 5 to . Although it could not be observed by the Voyager 2 probe on the face immersed in the polar night of Miranda, it is probable that this geological structure extends beyond the terminator into the northern hemisphere. Impact craters During the close flyby of Voyager 2 in January 1986, only the craters on the southern hemisphere of Miranda could be observed. They generally had diameters of over , representing the limit of resolution of the digital images transmitted by the probe during its flight. These craters have very varied morphologies. Some have well-defined borders and are sometimes surrounded by ejecta deposits characteristic of impact craters. Others are very degraded and sometimes barely recognizable, as their topography has been altered. The age of a crater does not give an indication of the date of formation of the terrain it marks. Rather, this date depends on the number of craters present on a site, regardless of their age. The more impact craters a terrain has, the older it is. Scientists use these as "planetary chronometers"; they count observed craters to date the formation of the terrain of inert natural satellites devoid of atmospheres, such as Callisto. No multiple-ring crater, nor any complex crater with a central peak, has been observed on Miranda.
Simple craters, that is to say those whose cavity is bowl-shaped, and transitional craters (with a flat bottom) are the norm, and their diameter is not correlated with their shape. Thus, simple craters of more than are observed, while elsewhere transitional craters of have been identified. Ejecta deposits are rare, and are never associated with craters larger than in diameter. The ejecta that sometimes surround craters with a diameter less than appear systematically brighter than the material surrounding them. On the other hand, ejecta associated with craters of size between and are generally darker than what surrounds them (the albedo of the ejecta is lower than that of the matter surrounding them). Finally, some ejecta deposits, associated with craters of all diameters, have an albedo comparable to that of the material on which they rest. In regiones In some regiones, and particularly in those of the visible part of the anti-Uranian hemisphere (which permanently faces away from the planet), craters are very frequent. They are sometimes packed against each other with very little space between them. Elsewhere, craters are less frequent and are separated by large, weakly undulating surfaces. The rims of many craters are surrounded by bright material, while streaks of dark material are observed on the walls which surround the floors of the craters. In Mantua Regio, between the craters Truncilo and Fransesco, there is a gigantic circular geological structure of in diameter which could be a very significantly degraded impact basin. These findings suggest that these regions contain a bright material at shallow depth, while a layer of dark material (or a material which darkens upon contact with the external environment) is present at greater depth. In coronae Craters are statistically up to ten times less numerous in the coronae than in the anti-Uranian regions, which indicates that these formations are younger. The density of impact craters could be established for different areas of Inverness, making it possible to estimate the age of each. These measurements suggest that the entire geological formation arose within a relatively short span of time. However, other observations indicate that the youngest zone within this corona is the one which separates the "chevron" from Argier Rupes. The density of impact craters in the core and in the belt of Arden is statistically similar. The two distinct parts of this formation must therefore have been part of a common geological episode. Nevertheless, the superposition of craters on bands of the central core of Arden indicates that its formation preceded that of the scarps which surround it. The data from the impact craters can be interpreted as follows: the interior and marginal zones of the corona, including most of the albedo bands, were formed during the same period of time. Their formation was followed by later tectonic developments which produced the high-relief fault scarps observed along the edge of the corona near longitude 110°. The density of impact craters seems the same in the structure surrounding Elsinore as in its central core. The two zones of this corona seem to have formed during the same geological period, but other geological elements suggest that the perimeter of Elsinore is younger than its core. Other observations The number of craters should be higher in the hemisphere at the apex of the orbital movement than at the antapex. However, it is the anti-Uranian hemisphere which is densest in craters.
This situation could be explained by a past event that reoriented Miranda's axis of rotation by 90° relative to its current orientation. In that case, the paleoapex hemisphere of the moon would have become the current anti-Uranian hemisphere. However, since the count of impact craters is limited to the southern hemisphere, the only one illuminated during the passage of the Voyager 2 probe, it is possible that Miranda experienced a more complex reorientation and that its paleoapex is located somewhere in the northern hemisphere, which has not yet been photographed. Origin and formation Several scenarios have been proposed to explain Miranda's formation and geological evolution. One of them postulates that it resulted from the accretion of a disk of gas and dust called a "subnebula". This subnebula either existed around Uranus for some period of time after its formation, or was created following a cosmic impact, which would have given Uranus's axis of rotation its great obliquity. However, this relatively small moon has areas that are surprisingly young compared to the geological time scale. It seems that the most recent geological formations date back only a few hundred million years. However, thermal models applicable to moons the size of Miranda predict rapid cooling and the absence of geological evolution following its accretion from the subnebula. Geological activity over such a long period cannot be explained by the heat resulting from the initial accretion, nor by the heat generated by the decay of radioactive materials incorporated during formation. Miranda has the youngest surface among the satellites of the Uranian system, which indicates that its geography has undergone the most extensive evolution. This geography is best explained by a complex geological history involving a still-unknown combination of astronomical phenomena, among them tidal forces, orbital resonances, partial differentiation, and convection. The geological patchwork could be partly the result of a catastrophic collision with an impactor. This event may have completely dislocated Miranda. The different pieces would then have re-assembled and gradually reorganized into the spherical form that the Voyager 2 probe photographed. Some scientists even speak of several cycles of collision and re-accretion of the moon. This hypothesis fell out of favor in 2011, superseded by hypotheses involving Uranian tidal forces: these would have pulled and sheared the material beneath Inverness and Arden to create the fault scarps, and the stretching and distortion caused by Uranus's gravity alone could have provided the heat source necessary to power these upwellings. The oldest known regions on the surface of Miranda are cratered plains such as Sicilia Regio and Ephesus Regio. The formation of these terrains followed the accretion of the moon and its subsequent cooling. The floors of the oldest craters are thus partially covered with material from the depths of the moon – a process referred to as endogenous resurfacing – which was a surprising observation. The geological youth of Miranda demonstrates that a heat source later took over from the initial heat provided by the accretion of the moon.
The most satisfactory explanation for the origin of the heat that animated the moon is the same one that explains the volcanism of Io: an orbital resonance – since vanished in Miranda's case – combined with the strong tidal forces generated by Uranus. After this first geological epoch, Miranda experienced a period of cooling which generated an overall extension of its core and produced fractures and cracks in its mantle at the surface, in the form of grabens. It is indeed possible that Miranda, Ariel, and Umbriel participated in several important resonances involving the pairs Miranda/Ariel, Ariel/Umbriel, and Miranda/Umbriel. Unlike those observed on Jupiter's moon Io, these orbital resonance phenomena between Miranda and Ariel could not lead to a stable capture of the small moon. Instead of being captured, Miranda's orbital resonance with Ariel and Umbriel may have led to an increase in its eccentricity and orbital inclination. By successively escaping several orbital resonances, Miranda alternated between phases of heating and cooling. Not all of Miranda's known grabens, therefore, were formed during this second geological episode. A third major geological epoch came with the orbital reorientation of Miranda and the formation of the Elsinore and Arden coronae. A singular volcanic event, consisting of flows of solid materials, could then have taken place within the forming coronae. Another proposed explanation for the formation of these two coronae is a diapir which formed in the heart of the moon; on this occasion Miranda would have at least partially differentiated. Considering the size and position of these coronae, it is possible that their formation contributed to changing the moment of inertia of the moon. This could have caused a 90° reorientation of Miranda. Doubt remains as to whether the two formations existed at the same time. It is possible that at this time the moon was distorted to the point that its asphericity and eccentricity temporarily caused it to undergo a chaotic rotational movement, such as that observed on Hyperion. If Miranda's orbital reorientation occurred before the two coronae formed on the surface, then Elsinore would be older than Arden. Chaotic movement phenomena generated by the entry into 3:1 resonance between the orbit of Miranda and that of Umbriel could have contributed to an increase of more than 3° in Miranda's orbital inclination. A final geological episode was the formation of Inverness, which seems to have induced surface stresses that gave rise to additional grabens, including Verona Rupes and Argier Rupes. Following this new cooling of Miranda, its total volume could have increased by 4%. It is probable that these different geological episodes followed one another without interruption. Ultimately, Miranda's geological history may have spanned a period of more than 3 billion years. It would have started 3.5 billion years ago with the appearance of heavily cratered regions and ended a few hundred million years ago, with the formation of the coronae. The phenomena of orbital resonances – mainly that associated with Umbriel but also, to a lesser extent, that with Ariel – would have had a significant impact on the orbital eccentricity of Miranda, and would also have contributed to the internal heating and geological activity of the moon. Together, these phenomena would have induced convection movements in its interior and allowed the start of planetary differentiation.
At the same time, these phenomena would have only slightly disturbed the orbits of the other moons involved, which are more massive than Miranda. However, Miranda's surface may appear too tortured to be the sole product of orbital resonance phenomena. After Miranda escaped from this resonance with Umbriel, through a mechanism that likely moved the moon into its current, abnormally high orbital tilt, its eccentricity would have been reduced. The tidal forces would then have damped the eccentricity and lowered the temperature at the heart of the moon. This would have allowed the moon to regain a spherical shape, without erasing impressive geological features such as Verona Rupes. Since this eccentricity was the source of the tidal forces, its reduction deactivated the heat source that fueled Miranda's ancient geological activity, leaving it a cold and inert moon.
Physical sciences
Solar System
Astronomy
20258
https://en.wikipedia.org/wiki/McIntosh%20%28apple%29
McIntosh (apple)
The McIntosh, McIntosh Red, or colloquially the Mac, is an apple cultivar, the national apple of Canada. The fruit has red and green skin, a tart flavour, and tender white flesh, which ripens in late September. It is considered an all-purpose apple, suitable both for cooking and eating raw. In the 20th century, the McIntosh was the most popular cultivar in Eastern Canada and New England and was widely sold in the UK. However, after holding 40% of the Canadian market share from the 1960s through to 1996, its market share declined to 28% in 2014 and is expected to continue to fall, in part due to production cost and in part due to consumers favoring sweeter, crisper, and less tart apple varieties. John McIntosh discovered the original McIntosh sapling on his Dundela farm in Upper Canada in 1811. He and his wife cultivated it, and the family started grafting the tree and selling the fruit in 1835. In 1870, it entered commercial production, and became common in northeastern North America after 1900. While still important in production, the fruit's popularity fell in the early 21st century in the face of competition from varieties such as the Gala. According to the US Apple Association website, it is one of the fifteen most popular apple cultivars in the United States. Jef Raskin, an employee at Apple Computer, named the Macintosh computer line—later abbreviated to "Mac" in 1999—after the cultivar. Description The McIntosh, or McIntosh Red (nicknamed the "Mac"), is the most popular apple cultivar in eastern Canada and the northeastern United States. It also sells well in Eastern Europe. A spreading tree that is moderately vigorous, the McIntosh bears annually or in alternate years. The tree is hardy to at least USDA Hardiness zone 4a, or . 50% or more of its flowers die at or below. The McIntosh apple is a small to medium-sized round fruit with a short stem. It has a red and green skin that is thick, tender, and easy to peel. Its white flesh is sometimes tinged with green or pink and is juicy, tender, and firm, soon becoming soft. The flesh is easily bruised. The fruit is considered "all-purpose", suitable both for eating raw and for cooking. It is used primarily for dessert, and requires less time to cook than most cultivars. It is usually blended when used for juice. The fruit grows best in cool areas where nights are cold and autumn days are clear; otherwise, it suffers from poor colour and soft flesh, and tends to fall from the tree before harvest. It stores for two to three months in air, but is prone to scald, flesh softening, chilling sensitivity, and coprinus rot. It can become mealy when stored at temperatures below . The fruit is optimally stored in a controlled atmosphere in which temperatures are between , and air content is 1.5–4.5% oxygen and 1–5% carbon dioxide; under such conditions, the McIntosh will keep for five to eight months. Cultivation The McIntosh is most commonly cultivated in Canada, the United States, and Eastern Europe. It is one of the top five apple cultivars used in cloning, and research indicates the McIntosh combines well for winter hardiness. If unsprayed, the McIntosh succumbs easily to apple scab, which may lead to entire crops being unmarketable. It has generally low susceptibility to fire blight, powdery mildew, cedar-apple rust, quince rust, and hawthorn rust. It is susceptible to fungal diseases such as Nectria canker, brown rot, and black rot, and to race 1 of apple rust (but resists race 2).
Furthermore, it is moderately resistant to Pezicula bark rot and Alternaria leaf blotch, and resists brown leaf spots well. The McIntosh is one of the most common cultivars used in apple breeding; a 1996 study found that the McIntosh was a parent in 101 of 439 cultivars selected, more than any other founding clone. It was used in over half of the Canadian cultivars selected, and was used extensively in the United States and Eastern Europe as well; rarely was it used elsewhere. Offspring of the McIntosh include the Macoun (a Jersey Black hybrid), the Spartan (a Newtown Pippin hybrid), the Cortland, the Empire, the Jonamac, the Jersey Mac, the Lobo, the Melba, the Summered, the Tydeman's Red, and possibly the Paula Red. History Apple trees were introduced to Canada at the Habitation at Port-Royal as early as 1606 by French settlers. Following its introduction, apple cultivation spread inland. The McIntosh's discoverer, John McIntosh (1777 – ), left his native Mohawk Valley home in New York State in 1796 to follow his love, Dolly Irwin, who had been taken to Upper Canada by her Loyalist parents. She had died by the time he found her, but he settled as a farmer in Upper Canada. He married Hannah Doran in 1801, and they farmed along the Saint Lawrence River until 1811, when McIntosh exchanged his land with his brother-in-law Edward Doran for a plot in Dundela. While clearing the overgrown plot, McIntosh discovered some apple seedlings on his farm. Since the crabapple was the only apple native to North America before European settlement, the seedlings must have had European origins. The Snow Apple (or Fameuse) had been popular in Lower Canada before that time, and the seedlings may have sprouted from its discarded fruit. The Fall St Lawrence and the Alexander have also been proposed as parents, but the parentage remains unknown. He transplanted the seedlings next to his house. One of the seedlings bore particularly good fruit. The McIntosh grandchildren dubbed the fruit it produced "Granny's apple", as they often saw their grandmother taking care of the tree in the orchard. McIntosh was selling seedlings from the tree by 1820, but they did not produce fruit of the quality of the original. John McIntosh's son Allan (1815–1899) learned grafting about 1835; with this cloning, the McIntoshes could maintain the distinctive properties of the fruit of the original tree. Allan and his brother Sandy (1825–1906), nicknamed "Sandy the Grafter", increased production and promotion of the cultivar. The earliest sales were in 1835, and in 1836 the cultivar was renamed the "McIntosh Red"; it entered commercial production in 1870. The apple became popular after 1900, when the first sprays for apple scab were developed. A house fire damaged the original McIntosh tree in 1894; it last produced fruit in 1908, and died and fell over in 1910. Horticulturist William Tyrrell Macoun of the Central Experimental Farm in Ottawa is credited with popularizing the McIntosh in Canada. He stated the McIntosh needed "no words of praise", that it was "one of the finest appearing and best dessert apples grown". The Macoun, a hybrid of the McIntosh and Jersey Black grown by the Agricultural Experiment Station in Geneva, NY, was named for him in 1923. In the northeastern United States, the McIntosh replaced many Baldwins that were killed in a severe winter in 1933–34. In the late 1940s, Canadian ambassador to the United Nations Andrew McNaughton told the Soviet Minister for Foreign Affairs Andrei Gromyko that the McIntosh Red was Canada's best apple.
The McIntosh made up 40% of the Canadian apple market by the 1960s, and at least thirty varieties of McIntosh hybrid were known by 1970. Its popularity later waned in the face of competition from imports; in the first decade of the 21st century, the Gala accounted for 33% of the apple market in Ontario to the McIntosh's 12%, and the Northern Spy had become the preferred apple for pies. Production remained important in Ontario, however, with large quantities of McIntoshes still being produced in 2010. The original tree discovered by John McIntosh bore fruit for more than ninety years, and died in 1910. Before the last known first-generation McIntosh graft died in 2011, horticulturalists from the Upper Canada Village heritage park saved cuttings from it for producing clones. Descendant cultivars Cultural significance The McIntosh has been designated the national apple of Canada. A popular subscription funded a plaque placed near the original McIntosh tree in 1912. The Ontario Archaeological and Historic Sites Board replaced the plaque with a more descriptive one in 1962, and the Historic Sites and Monuments Board of Canada put up another in a park nearby in 2001, beside a painted mural commemorating the fruit. Apple Inc. employee Jef Raskin named the Macintosh line of personal computers after the McIntosh. He deliberately misspelled the name to avoid conflict with the hi-fi equipment manufacturer McIntosh Laboratory. Apple's attempt in 1982 to trademark the name Macintosh was nevertheless denied due to the phonetic similarity between Apple's product and the name of the hi-fi manufacturer. Apple licensed the rights to the name in 1983, and bought the trademark in 1986. In 1995, the Royal Canadian Mint commissioned Toronto artist Roger Hill to design a commemorative silver dollar for release in 1996. Mint engraver Sheldon Beveridge engraved the image of a group of three McIntoshes and a McIntosh blossom, which adorn one side along with a ribbon naming the variety. An inscription on the edge reads "1796 Canada Dollar 1996". The proof version, issued sheathed in a silver cardboard sleeve in a black leatherette case, sold 133,779 pieces, and the uncirculated version, in a plastic capsule and silver sleeve, sold 58,834.
Biology and health sciences
Pomes
Plants
20261
https://en.wikipedia.org/wiki/Machete
Machete
A machete is a broad blade used either as an agricultural implement similar to an axe, or in combat like a long-bladed knife. The blade is typically long and thin. In the Spanish language, the word is possibly a diminutive form of the word macho, which was used to refer to sledgehammers. Alternatively, its origin may be machaera, the name given by the Greeks and Romans to the falcata. It is the origin of the English-language equivalent term matchet, though this is rarely used. In much of the English-speaking Caribbean, such as Jamaica, Barbados, Guyana, Grenada, and Trinidad and Tobago, the term cutlass is used for these agricultural tools. Uses Agriculture In various tropical and subtropical countries, the machete is frequently used to cut through rainforest undergrowth and for agricultural purposes (e.g. cutting sugar cane). Besides this, in Latin America a common use is for such household tasks as cutting large foodstuffs into pieces—much as a cleaver is used—or to perform crude cutting tasks, such as making simple wooden handles for other tools. It is common to see people using machetes for other jobs, such as splitting open coconuts, yard work, removing small branches and plants, chopping animals' food, and clearing bushes. Machetes are often considered tools and used by adults. However, many hunter–gatherer societies and cultures surviving through subsistence agriculture begin teaching babies to use sharp tools, including machetes, before their first birthdays. Warfare People in uprisings sometimes use these weapons. For example, the Boricua Popular Army are unofficially called macheteros after the machete-wielding laborers of Puerto Rico's sugar cane fields of the past. Many of the killings in the 1994 Rwandan genocide were performed with machetes, and they were the primary weapon used by the Interahamwe militias there. Machetes were also a distinctive tool and weapon of the Haitian Tonton Macoute. In 1762, the British captured Havana in a lengthy siege during the Seven Years' War. Volunteer militiamen led by Pepe Antonio, a Guanabacoa councilman, were issued with machetes during the unsuccessful defense of the city. The machete was also the most iconic weapon during the independence wars in Cuba, although it saw limited battlefield use. Carlos Manuel de Céspedes, owner of the sugar refinery La Demajagua near Manzanillo, freed his slaves on 10 October 1868. He proceeded to lead them, armed with machetes, in revolt against the Spanish government. The first cavalry charge using machetes as the primary weapon was carried out on 4 November 1868 by Máximo Gómez, a sergeant born in the Dominican Republic, who later became the general in chief of the Cuban Army. The machete is a common side arm and tool for many ethnic groups in West Africa. Machetes in this role are referenced in Chinua Achebe's Things Fall Apart. Some countries have a name for the blow of a machete; the Spanish machetazo is sometimes used in English. In the British Virgin Islands, Grenada, Jamaica, Saint Kitts and Nevis, Barbados, Saint Lucia, and Trinidad and Tobago, the word planass means to hit someone with the flat of the blade of a machete or cutlass. To strike with the sharpened edge is to "chop". Throughout the English-speaking islands of the Caribbean, the term 'cutlass' refers to a laborer's cutting tool. The Brazilian Army's Instruction Center on Jungle Warfare developed a machete-style knife with a long blade and a very pronounced clip point. 
This machete is issued with a Bowie knife and a sharpening stone in the scabbard; together these are called a "jungle kit" (Conjunto de Selva in Portuguese). It is manufactured by Indústria de Material Bélico do Brasil (IMBEL). The machete was used as a weapon during the Mau Mau rebellion, in the Rwandan Genocide, and in South Africa, particularly in the 1980s and early 1990s when the former province of Natal was wracked by conflict between the African National Congress and the Zulu-nationalist Inkatha Freedom Party. Manufacture The quality of a machete depends on the materials used and the shape of the blade. In the past, the most famous manufacturer of machetes in Latin America and the Spanish-speaking Caribbean was the Collins Company of Collinsville, Connecticut. The company was founded as Collins & Company in 1826 by Samuel W. Collins to make axes. Its first machetes were sold in 1845 and became so famous that a machete was often simply called a "Collins". In the English-speaking Caribbean, Robert Mole & Sons of Birmingham, England, was long considered the manufacturer of agricultural cutlasses of the best quality. Some Robert Mole blades survive as souvenirs of travellers to Trinidad, Jamaica, and, less commonly, St. Lucia. Colombia is the largest exporter of machetes worldwide. Cultural influence The flag of Angola features a machete, along with a cog-wheel. The southern Brazilian state of Rio Grande do Sul has a dance called the dança dos facões (machetes' dance) in which the dancers, who are usually men, bang their machetes against various surfaces while dancing, simulating a battle. Maculelê, an Afro-Brazilian dance and martial art, can also be performed with facões. This practice began in the city of Santo Amaro, Bahia, in the northeastern part of the country. In the Philippines, the bolo is used in training in eskrima, the indigenous martial art of the Philippines. In the Jalisco region of Mexico, Los Machetes is a popular folk dance. This dance tells the story of cutting down sugar cane during the harvest. Los Machetes was created by Mexican farm workers who spent a great amount of time perfecting the use of the machete for harvesting. Traditionally, real machetes are used while performing this dance. Similar tools The panga or tapanga is a variant used in East and Southern Africa. This name may be of Swahili etymology (not to be confused with the panga fish). The panga blade broadens on the backside. The upper inclined portion of the blade may be sharpened. Other similar tools include the parang and the golok (from Malaysia and Indonesia); however, these tend to have shorter, thicker blades with a primary grind, and are more effective on woody vegetation. The tsakat is a similar tool used in Armenia for clearing land of vegetation. Other similar tools include the billhook, the dusack, the golok, the kopis, the kukri, the seax, and the Sorocaban knife.
Technology
Agricultural tools
null
20264
https://en.wikipedia.org/wiki/Mushroom
Mushroom
A mushroom or toadstool is the fleshy, spore-bearing fruiting body of a fungus, typically produced above ground, on soil, or on its food source. Toadstool generally denotes one poisonous to humans. The standard for the name "mushroom" is the cultivated white button mushroom, Agaricus bisporus; hence, the word "mushroom" is most often applied to those fungi (Basidiomycota, Agaricomycetes) that have a stem (stipe), a cap (pileus), and gills (lamellae, sing. lamella) on the underside of the cap. "Mushroom" also describes a variety of other gilled fungi, with or without stems; therefore the term is used to describe the fleshy fruiting bodies of some Ascomycota. The gills produce microscopic spores which help the fungus spread across the ground or its occupant surface. Forms deviating from the standard morphology usually have more specific names, such as "bolete", "puffball", "stinkhorn", and "morel", and gilled mushrooms themselves are often called "agarics" in reference to their similarity to Agaricus or their order Agaricales. By extension, the term "mushroom" can also refer to either the entire fungus when in culture, the thallus (called mycelium) of species forming the fruiting bodies called mushrooms, or the species itself. Etymology The terms "mushroom" and "toadstool" go back centuries and were never precisely defined, nor was there consensus on application. During the 15th and 16th centuries, the terms mushrom, mushrum, muscheron, mousheroms, mussheron, or musserouns were used. The term "mushroom" and its variations may have been derived from the French word mousseron in reference to moss (mousse). Delineation between edible and poisonous fungi is not clear-cut, so a "mushroom" may be edible, poisonous, or unpalatable. The word toadstool appeared first in 14th century England as a reference for a "stool" for toads, possibly implying an inedible poisonous fungus. Identification Identifying what is and is not a mushroom requires a basic understanding of their macroscopic structure. Most are basidiomycetes and gilled. Their spores, called basidiospores, are produced on the gills and fall in a fine rain of powder from under the caps as a result. At the microscopic level, the basidiospores are shot off basidia and then fall between the gills in the dead air space. As a result, for most mushrooms, if the cap is cut off and placed gill-side-down overnight, a powdery impression reflecting the shape of the gills (or pores, or spines, etc.) is formed (when the fruit body is sporulating). The color of the powdery print, called a spore print, is useful in both classifying and identifying mushrooms. Spore print colors include white (most common), brown, black, purple-brown, pink, yellow, and creamy, but almost never blue, green, or red. While modern identification of mushrooms is quickly becoming molecular, the standard methods for identification are still used by most and have developed into a fine art harking back to medieval times and the Victorian era, combined with microscopic examination. The presence of juices upon breaking, bruising-reactions, odors, tastes, shades of color, habitat, habit, and season are all considered by both amateur and professional mycologists. Tasting and smelling mushrooms carries its own hazards because of poisons and allergens. Chemical tests are also used for some genera. In general, identification to genus can often be accomplished in the field using a local field guide. Identification to species, however, requires more effort. 
A mushroom develops from a button stage into a mature structure, and only the latter can provide certain characteristics needed for the identification of the species. However, over-mature specimens lose features and cease producing spores. Many novices have mistaken humid water marks on paper for white spore prints, or discolored paper from oozing liquids on lamella edges for colored spore prints. Classification Typical mushrooms are the fruit bodies of members of the order Agaricales, whose type genus is Agaricus and type species is the field mushroom, Agaricus campestris. However, in modern molecularly defined classifications, not all members of the order Agaricales produce mushroom fruit bodies, and many other gilled fungi, collectively called mushrooms, occur in other orders of the class Agaricomycetes. For example, chanterelles are in the Cantharellales, false chanterelles such as Gomphus are in the Gomphales, milk-cap mushrooms (Lactarius, Lactifluus) and russulas (Russula), as well as Lentinellus, are in the Russulales, while the tough, leathery genera Lentinus and Panus are among the Polyporales, but Neolentinus is in the Gloeophyllales, and the little pin-mushroom genus Rickenella, along with similar genera, is in the Hymenochaetales. Within the main body of mushrooms, in the Agaricales, are common fungi like the common fairy-ring mushroom, shiitake, enoki, oyster mushrooms, fly agarics and other Amanitas, magic mushrooms like species of Psilocybe, paddy straw mushrooms, shaggy manes, etc. An atypical mushroom is the lobster mushroom, which is a fruitbody of a Russula or Lactarius mushroom that has been deformed by the parasitic fungus Hypomyces lactifluorum. This gives the affected mushroom an unusual shape and red color that resembles that of a boiled lobster. Other mushrooms are not gilled, so the term "mushroom" is loosely used, and giving a full account of their classifications is difficult. Some have pores underneath (and are usually called boletes), others have spines, such as the hedgehog mushroom and other tooth fungi, and so on. "Mushroom" has been used for polypores, puffballs, jelly fungi, coral fungi, bracket fungi, stinkhorns, and cup fungi. Thus, the term is more one of common application to macroscopic fungal fruiting bodies than one having precise taxonomic meaning. Approximately 14,000 species of mushrooms have been described. Morphology A mushroom develops from a nodule, or pinhead, less than two millimeters in diameter, called a primordium, which is typically found on or near the surface of the substrate. It is formed within the mycelium, the mass of threadlike hyphae that make up the fungus. The primordium enlarges into a roundish structure of interwoven hyphae roughly resembling an egg, called a "button". The button has a cottony roll of mycelium, the universal veil, that surrounds the developing fruit body. As the egg expands, the universal veil ruptures and may remain as a cup, or volva, at the base of the stalk, or as warts or volval patches on the cap. Many mushrooms lack a universal veil; therefore, they have neither a volva nor volval patches. Often, a second layer of tissue, the partial veil, covers the bladelike gills that bear spores. As the cap expands, the veil breaks, and remnants of the partial veil may remain as a ring, or annulus, around the middle of the stalk or as fragments hanging from the margin of the cap. 
The ring may be skirt-like as in some species of Amanita, collar-like as in many species of Lepiota, or merely the faint remnants of a cortina (a partial veil composed of filaments resembling a spiderweb), which is typical of the genus Cortinarius. Mushrooms lacking partial veils do not form an annulus. The stalk (also called the stipe, or stem) may be central and support the cap in the middle, or it may be off-center or lateral, as in species of Pleurotus and Panus. In other mushrooms, a stalk may be absent, as in the polypores that form shelf-like brackets. Puffballs lack a stalk, but may have a supporting base. Other mushrooms, including truffles, jellies, earthstars, and bird's nests, usually do not have stalks, and a specialized mycological vocabulary exists to describe their parts. The way the gills attach to the top of the stalk is an important feature of mushroom morphology. Mushrooms in the genera Agaricus, Amanita, Lepiota and Pluteus, among others, have free gills that do not extend to the top of the stalk. Others have decurrent gills that extend down the stalk, as in the genera Omphalotus and Pleurotus. There are a great number of variations between the extremes of free and decurrent, collectively called attached gills. Finer distinctions are often made to distinguish the types of attached gills: adnate gills, which adjoin squarely to the stalk; notched gills, which are notched where they join the top of the stalk; adnexed gills, which curve upward to meet the stalk, and so on. These distinctions between attached gills are sometimes difficult to interpret, since gill attachment may change as the mushroom matures, or with different environmental conditions. Microscopic features A hymenium is a layer of microscopic spore-bearing cells that covers the surface of gills. In the nongilled mushrooms, the hymenium lines the inner surfaces of the tubes of boletes and polypores, or covers the teeth of spine fungi and the branches of corals. In the Ascomycota, spores develop within microscopic elongated, sac-like cells called asci, which typically contain eight spores in each ascus. The Discomycetes, which contain the cup, sponge, brain, and some club-like fungi, develop an exposed layer of asci, as on the inner surfaces of cup fungi or within the pits of morels. The Pyrenomycetes, tiny dark-colored fungi that live on a wide range of substrates including soil, dung, leaf litter, and decaying wood, as well as other fungi, produce minute, flask-shaped structures called perithecia, within which the asci develop. In the basidiomycetes, usually four spores develop on the tips of thin projections called sterigmata, which extend from club-shaped cells called basidia. The fertile portion of the Gasteromycetes, called a gleba, may become powdery as in the puffballs or slimy as in the stinkhorns. Interspersed among the asci are threadlike sterile cells called paraphyses. Similar structures called cystidia often occur within the hymenium of the Basidiomycota. Many types of cystidia exist, and assessing their presence, shape, and size is often used to verify the identification of a mushroom. The most important microscopic feature for identification of mushrooms is the spores. Their color, shape, size, attachment, ornamentation, and reaction to chemical tests often can be the crux of an identification. A spore often has a protrusion at one end, called an apiculus, which is the point of attachment to the basidium; at the opposite end may be an apical germ pore, from which the hypha emerges when the spore germinates. 
Growth Many species of mushrooms seemingly appear overnight, growing or expanding rapidly. This phenomenon is the source of several common expressions in the English language including "to mushroom" or "mushrooming" (expanding rapidly in size or scope) and "to pop up like a mushroom" (to appear unexpectedly and quickly). In reality, all species of mushrooms take several days to form primordial mushroom fruit bodies, though they do expand rapidly by the absorption of fluids. The cultivated mushroom, as well as the common field mushroom, initially forms a minute fruiting body, referred to as the pin stage because of its small size. Slightly expanded, they are called buttons, once again because of the relative size and shape. Once such stages are formed, the mushroom can rapidly pull in water from its mycelium and expand, mainly by inflating preformed cells that took several days to form in the primordia. Similarly, there are other mushrooms, like Parasola plicatilis (formerly Coprinus plicatilis), that grow rapidly overnight and may disappear by late afternoon on a hot day after rainfall. The primordia form at ground level in lawns, in humid spaces under the thatch; after heavy rainfall or in dewy conditions they balloon to full size in a few hours, release spores, and then collapse. Not all mushrooms expand overnight; some grow very slowly and add tissue to their fruiting bodies by growing from the edges of the colony or by inserting hyphae. For example, Pleurotus nebrodensis grows slowly, and because of this combined with human collection, it is now critically endangered. Though mushroom fruiting bodies are short-lived, the underlying mycelium can itself be long-lived and massive. A colony of Armillaria solidipes (formerly known as Armillaria ostoyae) in Malheur National Forest in the United States is estimated to be 2,400 years old, possibly older, and spans a vast area. Most of the fungus is underground and in decaying wood or dying tree roots in the form of white mycelia combined with black shoelace-like rhizomorphs that bridge colonized separated woody substrates. Nutrition Raw brown mushrooms are 92% water, 4% carbohydrates, 2% protein and less than 1% fat. In a reference amount, raw mushrooms provide 22 calories and are a rich source (20% or more of the Daily Value, DV) of B vitamins, such as riboflavin, niacin and pantothenic acid, selenium (37% DV) and copper (25% DV), and a moderate source (10–19% DV) of phosphorus, zinc and potassium (table). They have minimal or no vitamin C and sodium content. Vitamin D The vitamin D content of a mushroom depends on postharvest handling, in particular the unintended exposure to sunlight. The US Department of Agriculture provided evidence that UV-exposed mushrooms contain substantial amounts of vitamin D. When exposed to ultraviolet (UV) light, even after harvesting, ergosterol in mushrooms is converted to vitamin D2, a process now used intentionally to supply fresh vitamin D mushrooms for the functional food grocery market. In a comprehensive safety assessment of producing vitamin D in fresh mushrooms, researchers showed that artificial UV light technologies were as effective for vitamin D production as natural sunlight, and that UV light has a long record of safe use for production of vitamin D in food. Human use Edible mushrooms Mushrooms are used extensively in cooking, in many cuisines (notably Chinese, Korean, European, and Japanese). Humans have valued them as food since antiquity. 
Most mushrooms sold in supermarkets have been commercially grown on mushroom farms. The most common of these, Agaricus bisporus, is considered safe for most people to eat because it is grown in controlled, sterilized environments. Several varieties of A. bisporus are grown commercially, including whites, crimini, and portobello. Other cultivated species available at many grocers include Hericium erinaceus, shiitake, maitake (hen-of-the-woods), Pleurotus, and enoki. In recent years, increasing affluence in developing countries has led to a considerable growth in interest in mushroom cultivation, which is now seen as a potentially important economic activity for small farmers. China is a major edible mushroom producer; the country produces about half of all cultivated mushrooms, and its 1.4 billion people consume a substantial quantity of mushrooms per person per year. In 2014, Poland was the world's largest mushroom exporter. Separating edible from poisonous species requires meticulous attention to detail; there is no single trait by which all toxic mushrooms can be identified, nor one by which all edible mushrooms can be identified. People who collect mushrooms for consumption are known as mycophagists, and the act of collecting them for such is known as mushroom hunting, or simply "mushrooming". Even edible mushrooms may produce allergic reactions in susceptible individuals, from a mild asthmatic response to severe anaphylactic shock. Even the cultivated A. bisporus contains small amounts of hydrazines, the most abundant of which is agaritine (a mycotoxin and carcinogen). However, the hydrazines are destroyed by moderate heat when cooking. A number of species of mushrooms are poisonous; although some resemble certain edible species, consuming them could be fatal. Eating mushrooms gathered in the wild is risky and should only be undertaken by individuals knowledgeable in mushroom identification. Common best practice is for wild mushroom pickers to focus on collecting a small number of visually distinctive, edible mushroom species that cannot be easily confused with poisonous varieties. Common mushroom hunting advice is that if a mushroom cannot be positively identified, it should be considered poisonous and not eaten. Toxic mushrooms Many mushroom species produce secondary metabolites that can be toxic, mind-altering, antibiotic, antiviral, or bioluminescent. Although there are only a small number of deadly species, several others can cause particularly severe and unpleasant symptoms. Toxicity likely plays a role in protecting the function of the basidiocarp: the mycelium has expended considerable energy and protoplasmic material to develop a structure to efficiently distribute its spores. One defense against consumption and premature destruction is the evolution of chemicals that render the mushroom inedible, either causing the consumer to vomit the meal (see emetics), or to learn to avoid consumption altogether. In addition, mushrooms readily absorb heavy metals, including radioactive ones; as late as 2008, European mushrooms may still have carried contamination from the 1986 Chernobyl disaster, and they continued to be studied. Psychoactive mushrooms Mushrooms with psychoactive properties have long played a role in various native medicine traditions in cultures all around the world. They have been used as sacrament in rituals aimed at mental and physical healing, and to facilitate visionary states. One such ritual is the velada ceremony. 
A practitioner of traditional mushroom use is the shaman or curandera (priest-healer). Psilocybin mushrooms, also referred to as psychedelic mushrooms, possess psychedelic properties. Commonly known as "magic mushrooms" or "shrooms", they are openly available in smart shops in many parts of the world, or on the black market in those countries which have outlawed their sale. Psilocybin mushrooms have been reported to facilitate profound and life-changing insights often described as mystical experiences. Recent scientific work has supported these claims, as well as the long-lasting effects of such induced spiritual experiences. Psilocybin, a naturally occurring chemical in certain psychedelic mushrooms such as Psilocybe cubensis, is being studied for its ability to help people suffering from psychological disorders, such as obsessive–compulsive disorder. Minute amounts have been reported to stop cluster and migraine headaches. A double-blind study conducted at Johns Hopkins Hospital showed that psychedelic mushrooms could provide people an experience with substantial personal meaning and spiritual significance. In the study, one third of the subjects reported that ingestion of psychedelic mushrooms was the single most spiritually significant event of their lives. Over two-thirds reported it among their five most meaningful and spiritually significant events. On the other hand, one-third of the subjects reported extreme anxiety; however, the anxiety went away after a short period of time. Psilocybin mushrooms have also been shown to be successful in treating addiction, specifically to alcohol and cigarettes. A few species in the genus Amanita, most recognizably A. muscaria, but also A. pantherina, among others, contain the psychoactive compound muscimol. The muscimol-containing chemotaxonomic group of Amanitas contains no amatoxins or phallotoxins, and as such is not hepatotoxic, though if not properly cured these mushrooms will be non-lethally neurotoxic due to the presence of ibotenic acid. Amanita intoxication is similar to that of Z-drugs in that it includes CNS depressant and sedative-hypnotic effects, but also dissociation and delirium in high doses. Folk medicine Some mushrooms are used in folk medicine. In a few countries, extracts, such as polysaccharide-K, schizophyllan, polysaccharide peptide, or lentinan, are government-registered adjuvant cancer therapies, but clinical evidence for the efficacy and safety of these extracts in humans has not been confirmed. Although some mushroom species or their extracts may be consumed for therapeutic effects, some regulatory agencies, such as the US Food and Drug Administration, regard such use as a dietary supplement, which does not have government approval or common clinical use as a prescription drug. Other uses Mushrooms can be used for dyeing wool and other natural fibers. The chromophores of mushroom dyes are organic compounds that produce strong and vivid colors, and all colors of the spectrum can be achieved with mushroom dyes. Before the invention of synthetic dyes, mushrooms were the source of many textile dyes. Some fungi, types of polypores loosely called mushrooms, have been used as fire starters (known as tinder fungi). Mushrooms and other fungi play a role in the development of new biological remediation techniques (e.g., using mycorrhizae to spur plant growth) and filtration technologies (e.g. using fungi to lower bacterial levels in contaminated water). 
There is ongoing research in the field of genetic engineering aimed at enhancing the qualities of mushrooms in domains such as nutritional value and medical use. Gallery
Biology and health sciences
Fungi
null
20266
https://en.wikipedia.org/wiki/Mainframe%20computer
Mainframe computer
A mainframe computer, informally called a mainframe or big iron, is a computer used primarily by large organizations for critical applications like bulk data processing for tasks such as censuses, industry and consumer statistics, enterprise resource planning, and large-scale transaction processing. A mainframe computer is large but not as large as a supercomputer and has more processing power than some other classes of computers, such as minicomputers, servers, workstations, and personal computers. Most large-scale computer-system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are often used as servers. The term mainframe was derived from the large cabinet, called a main frame, that housed the central processing unit and main memory of early computers. Later, the term mainframe was used to distinguish high-end commercial computers from less powerful machines. Design Modern mainframe design is characterized less by raw computational speed and more by: Redundant internal engineering resulting in high reliability and security Extensive input-output ("I/O") facilities with the ability to offload to separate engines Strict backward compatibility with older software High hardware and computational utilization rates through virtualization to support massive throughput Hot swapping of hardware, such as processors and memory The high stability and reliability of mainframes enable these machines to run uninterrupted for very long periods of time, with mean time between failures (MTBF) measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z (previously called z Systems, System z, and zSeries), Unisys Dorado, and Unisys Libra as among the most secure, with vulnerabilities in the low single digits, as compared to thousands for Windows, UNIX, and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are nondisruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed. In the late 1950s, mainframes had only a rudimentary interactive interface (the console) and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back-office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used almost exclusively for applications (e.g. airline booking) rather than program development. However, in 1961 the first academic, general-purpose timesharing system that supported software development, CTSS, was released at MIT on an IBM 709, later 7090 and 7094. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices. 
By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through keyboard/typewriter terminals and later character-mode text terminal CRT displays with integral keyboards, or finally from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported general purpose graphic display terminals, and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling, and reduced physical space requirements compared to server farms. Characteristics Modern mainframes can run multiple different instances of operating systems at the same time. This technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions. Modern mainframes, notably the IBM Z servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Many mainframe customers run two machines: one in their primary data center and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, training, and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice, many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case), or with shared, geographically dispersed storage provided by EMC or Hitachi. Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the late 1950s, mainframe designs have included subsidiary hardware (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Gigabyte to terabyte-size record files are not unusual. Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it reasonably quickly. Other server families also offload I/O processing and emphasize throughput computing. 
Mainframe return on investment (ROI), like that of any other computing platform, is dependent on the platform's ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors. Mainframes also have execution integrity characteristics for fault tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing. Current market IBM, with the IBM Z series, continues to be a major manufacturer in the mainframe market. In 2000, Hitachi co-developed the zSeries z900 with IBM to share expenses, and the latest Hitachi AP10000 models are made by IBM. Unisys manufactures ClearPath Libra mainframes, based on earlier Burroughs MCP products, and ClearPath Dorado mainframes, based on Sperry Univac OS 1100 product lines. Hewlett Packard Enterprise sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's GCOS, Stratus OpenVOS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe, and Fujitsu (formerly Amdahl) GS21 mainframes globally. NEC with ACOS and Hitachi with AP10000-VOS3 still maintain mainframe businesses in the Japanese market. The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER and Xeon) for lower-end systems. Bull uses a mixture of Itanium and Xeon processors. NEC uses Xeon processors for its low-end ACOS-2 line, but develops the custom NOAH-6 processor for its high-end ACOS-4 series. IBM also develops custom processors in-house, such as the Telum. Unisys produces code-compatible mainframe systems that range from laptops to cabinet-sized mainframes that use homegrown CPUs as well as Xeon processors. Furthermore, there exists a market for software applications to manage the performance of mainframe implementations. In addition to IBM, significant market competitors include BMC and Precisely; former competitors include Compuware and CA Technologies. Starting in the 2010s, cloud computing has offered a less expensive, more scalable alternative. History Several manufacturers and their successors produced mainframe computers from the 1950s until the early 21st century, with gradually decreasing numbers and a gradual transition to simulation on Intel chips rather than proprietary hardware. The US group of manufacturers was first known as "IBM and the Seven Dwarfs": usually Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA, although some lists varied. Later, with the departure of General Electric and RCA, it was referred to as IBM and the BUNCH. IBM's dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes. 
The latter architecture has continued to evolve into IBM's current zSeries mainframes which, along with the then Burroughs and Sperry (now Unisys) MCP-based and OS1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. While IBM's zSeries can still run 24-bit System/360 code, the 64-bit IBM Z CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the US were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela are examples of independently designed Soviet computers. Elwro in Poland was another Eastern Bloc manufacturer, producing the ODRA, R-32 and R-34 mainframes. Shrinking demand and tough competition started a shakeout in the market in the early 1970s—RCA sold out to UNIVAC and GE sold its business to Honeywell; between 1986 and 1990 Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1984, estimated sales of desktop computers ($11.6 billion) exceeded those of mainframe computers ($11.4 billion) for the first time. IBM received the vast majority of mainframe revenue. During the 1980s, minicomputer-based systems grew more sophisticated and were able to displace the lower end of the mainframes. These computers, sometimes called departmental computers, were typified by the Digital Equipment Corporation VAX series. In 1991, AT&T Corporation briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop infamously predicted that the last mainframe would be unplugged in 1996; in 1993, he cited Cheryl Currid, a computer industry analyst, as saying that the last mainframe "will stop working on December 31, 1999", a reference to the anticipated Year 2000 problem (Y2K). That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. 
Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.) In late 2000, IBM introduced 64-bit z/Architecture, acquired numerous software companies such as Cognos and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the 4th quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year. But MIPS (millions of instructions per second) shipments increased 4% per year over the past two years. Alsop had himself photographed in 2000, symbolically eating his own words ("death to the mainframe"). In 2012, NASA powered down its last mainframe, an IBM System z9. However, IBM's successor to the z9, the z10, led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for I.B.M., and mainframes are still the back-office engines behind the world's financial markets and much of global commerce". While mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results". IBM has continued to launch new generations of mainframes: the IBM z13 in 2015, the z14 in 2017, the z15 in 2019, and the z16 in 2022, the latter featuring among other things an "integrated on-chip AI accelerator" and the new Telum microprocessor. Differences from supercomputers A supercomputer is a computer at the leading edge of data processing capability, with respect to calculation speed. Supercomputers are used for scientific and engineering problems (high-performance computing) which crunch numbers and data, while mainframes focus on transaction processing. The differences are: Mainframes are built to be reliable for transaction processing (measured by TPC metrics, which are not used or helpful for most supercomputing applications) as it is commonly understood in the business world: the commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council, updates a database system for inventory control (goods), airline reservations (services), or banking (money) by adding a record. A transaction may refer to a set of operations including disk read/writes, operating system calls, or some form of data transfer from one subsystem to another which is not measured by the processing speed of the CPU. Transaction processing is not exclusive to mainframes but is also used by microprocessor-based servers and online networks. Supercomputer performance is measured in floating point operations per second (FLOPS) or in traversed edges per second (TEPS), metrics that are not very meaningful for mainframe applications, while mainframes are sometimes measured in millions of instructions per second (MIPS), although the definition depends on the instruction mix measured. 
Examples of integer operations measured by MIPS include adding numbers together, checking values, or moving data around in memory (moving information to and from storage, so-called I/O, is where mainframes are most helpful; operations within memory help only indirectly). Floating point operations are mostly addition, subtraction, and multiplication of binary floating point numbers (measured by FLOPS in supercomputers) with enough digits of precision to model continuous phenomena such as weather prediction and nuclear simulations; the only recently standardized decimal floating point, not used in supercomputers, is appropriate for monetary values such as those useful for mainframe applications. In terms of computational speed, supercomputers are more powerful. Mainframes and supercomputers cannot always be clearly distinguished; up until the early 1990s, many supercomputers were based on a mainframe architecture with supercomputing extensions. An example of such a system is the HITAC S-3800, which was instruction-set compatible with IBM System/370 mainframes and could run the Hitachi VOS3 operating system (a fork of IBM MVS). The S-3800 therefore can be seen as being simultaneously both a supercomputer and an IBM-compatible mainframe. In 2007, an amalgamation of the different technologies and architectures for supercomputers and mainframes led to a so-called gameframe.
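To make the distinction concrete, the following Python sketch times an integer-heavy loop and a floating-point loop and reports rough MIPS-style and MFLOPS-style figures. It is purely illustrative: the loop bodies and counts are arbitrary assumptions, an interpreted loop understates hardware speed by orders of magnitude, and real benchmarks such as TPC workloads or LINPACK are far more involved.

import time

N = 2_000_000

def integer_workload(n):
    # Integer work of the kind MIPS-style ratings count: adds and compares.
    t0 = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i              # one integer add per iteration
        if acc > (1 << 60):   # one integer comparison per iteration
            acc = 0
    return 2 * n / (time.perf_counter() - t0)

def float_workload(n):
    # Floating-point work of the kind FLOPS count: multiplies and adds.
    t0 = time.perf_counter()
    x = 1.0
    for _ in range(n):
        x = x * 1.0000001 + 0.5  # one multiply and one add per iteration
    return 2 * n / (time.perf_counter() - t0)

print(f"~{integer_workload(N) / 1e6:.1f} million integer ops/s (MIPS-like)")
print(f"~{float_workload(N) / 1e6:.1f} MFLOPS-like")

Neither figure says anything about I/O throughput, which, as discussed above, is where mainframe designs concentrate their effort.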
Technology
Computer hardware
null
20268
https://en.wikipedia.org/wiki/Microsoft%20Excel
Microsoft Excel
Microsoft Excel is a spreadsheet editor developed by Microsoft for Windows, macOS, Android, iOS and iPadOS. It features calculation or computation capabilities, graphing tools, pivot tables, and a macro programming language called Visual Basic for Applications (VBA). Excel forms part of the Microsoft 365 and Microsoft Office suites of software and has been developed since 1985. Features Basic operation Microsoft Excel has the basic features of all spreadsheets, using a grid of cells arranged in numbered rows and letter-named columns to organize data manipulations like arithmetic operations. It has a battery of supplied functions to answer statistical, engineering, and financial needs. In addition, it can display data as line graphs, histograms and charts, and offers a very limited three-dimensional graphical display. It allows sectioning of data to view its dependencies on various factors for different perspectives (using pivot tables and the scenario manager). A PivotTable is a tool for data analysis; it simplifies large data sets via PivotTable fields. It has a programming aspect, Visual Basic for Applications, allowing the user to employ a wide variety of numerical methods, for example, for solving differential equations of mathematical physics, and then reporting the results back to the spreadsheet. It also has a variety of interactive features allowing user interfaces that can completely hide the spreadsheet from the user, so the spreadsheet presents itself as a so-called application, or decision support system (DSS), via a custom-designed user interface, for example, a stock analyzer, or in general, as a design tool that asks the user questions and provides answers and reports. In a more elaborate realization, an Excel application can automatically poll external databases and measuring instruments using an update schedule, analyze the results, make a Word report or PowerPoint slide show, and e-mail these presentations on a regular basis to a list of participants. Microsoft allows for a number of optional command-line switches to control the manner in which Excel starts. Functions Excel 2016 has 484 functions. Of these, 360 existed prior to Excel 2010. Microsoft classifies these functions into 14 categories. Of the 484 current functions, 386 may be called from VBA as methods of the object "WorksheetFunction" and 44 have the same names as VBA functions. With the introduction of LAMBDA, Excel became Turing complete. Macro programming VBA programming The Windows version of Excel supports programming through Microsoft's Visual Basic for Applications (VBA), which is a dialect of Visual Basic. Programming with VBA allows spreadsheet manipulation that is awkward or impossible with standard spreadsheet techniques. Programmers may write code directly using the Visual Basic Editor (VBE), which includes windows for writing code, debugging code, and organizing code modules. The user can implement numerical methods as well as automate tasks such as formatting or data organization in VBA and guide the calculation using any desired intermediate results reported back to the spreadsheet. VBA was removed from Mac Excel 2008, as the developers did not believe that a timely release would allow porting the VBA engine natively to Mac OS X. VBA was restored in the next version, Mac Excel 2011, although the build lacks support for ActiveX objects, impacting some high-level developer tools. A common and easy way to generate VBA code is by using the Macro Recorder. 
The Macro Recorder records actions of the user and generates VBA code in the form of a macro. These actions can then be repeated automatically by running the macro. The macros can also be linked to different trigger types like keyboard shortcuts, a command button or a graphic. The actions in the macro can be executed from these trigger types or from the generic toolbar options. The VBA code of the macro can also be edited in the VBE. Certain features, such as loops and screen prompts, and some graphical display items cannot be recorded, but must be entered into the VBA module directly by the programmer. Advanced users can employ user prompts to create an interactive program, or react to events such as sheets being loaded or changed. Macro-recorded code may not be compatible between Excel versions: some code that is used in Excel 2010 cannot be used in Excel 2003, and a macro that changes cell colors or other aspects of cells may not be backward compatible. VBA code interacts with the spreadsheet through the Excel Object Model, a vocabulary identifying spreadsheet objects, and a set of supplied functions or methods that enable reading and writing to the spreadsheet and interaction with its users (for example, through custom toolbars or command bars and message boxes). User-created VBA subroutines execute these actions and operate like macros generated using the macro recorder, but are more flexible and efficient. History From its first version Excel supported end-user programming of macros (automation of repetitive tasks) and user-defined functions (extension of Excel's built-in function library). In early versions of Excel, these programs were written in a macro language whose statements had formula syntax and resided in the cells of special-purpose macro sheets (stored with file extension .XLM in Windows). XLM was the default macro language for Excel through Excel 4.0. Beginning with version 5.0, Excel recorded macros in VBA by default, though XLM recording was still allowed as an option. After version 5.0 that option was discontinued. All versions of Excel, including Excel 2021, are capable of running an XLM macro, though Microsoft discourages their use. Python programming In 2023, Microsoft announced that Excel would support the Python programming language directly. As of January 2024, Python in Excel is available in the Microsoft 365 Insider Program. Charts Excel supports charts, graphs, or histograms generated from specified groups of cells. It also supports Pivot Charts that allow a chart to be linked directly to a PivotTable, so that the chart can be refreshed with the PivotTable. The generated graphic component can either be embedded within the current sheet or added as a separate object. These displays are dynamically updated if the content of cells changes. For example, suppose that the important design requirements are displayed visually; then, in response to a user's change in trial values for parameters, the curves describing the design change shape, and their points of intersection shift, assisting the selection of the best design. Add-ins Additional features are available using add-ins. 
Several are provided with Excel, including: Analysis ToolPak: Provides data analysis tools for statistical and engineering analysis (includes analysis of variance and regression analysis) Analysis ToolPak VBA: VBA functions for Analysis ToolPak Euro Currency Tools: Conversion and formatting for euro currency Solver Add-In: Tools for optimization and equation solving Data storage and communication Number of rows and columns Versions of Excel up to 7.0 had a limitation in the size of their data sets of 16K (2^14 = 16,384) rows. Versions 8.0 through 11.0 could handle 64K (2^16 = 65,536) rows and 256 columns (2^8, with 'IV' as the last column label). Version 12.0 onwards, including the current Version 16.x, can handle over 1M (2^20 = 1,048,576) rows and 16,384 (2^14, with 'XFD' as the last column label) columns. File formats Up until the 2007 version, Microsoft Excel used a proprietary binary file format called Excel Binary File Format (.XLS) as its primary format. Excel 2007 uses Office Open XML as its primary file format, an XML-based format that followed after a previous XML-based format called "XML Spreadsheet" ("XMLSS"), first introduced in Excel 2002. Although supporting and encouraging the use of new XML-based formats as replacements, Excel 2007 remained backwards-compatible with the traditional, binary formats. In addition, most versions of Microsoft Excel can read CSV, DBF, SYLK, DIF, and other legacy formats. Support for some older file formats was removed in Excel 2007; these file formats were mainly from DOS-based programs. Binary OpenOffice.org has created documentation of the Excel format. Two epochs of the format exist: the 97-2003 OLE format, and the older stream format. Microsoft has made the Excel binary format specification available to freely download. XML Spreadsheet The XML Spreadsheet format introduced in Excel 2002 is a simple, XML-based format missing some more advanced features like storage of VBA macros. Though the intended file extension for this format is .xml, the program also correctly handles XML files with the .xls extension. This feature is widely used by third-party applications (e.g. MySQL Query Browser) to offer "export to Excel" capabilities without implementing the binary file format. The following example will be correctly opened by Excel if saved either as Book1.xml or Book1.xls: <?xml version="1.0"?> <Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:x="urn:schemas-microsoft-com:office:excel" xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet" xmlns:html="http://www.w3.org/TR/REC-html40"> <Worksheet ss:Name="Sheet1"> <Table ss:ExpandedColumnCount="2" ss:ExpandedRowCount="2" x:FullColumns="1" x:FullRows="1"> <Row> <Cell><Data ss:Type="String">Name</Data></Cell> <Cell><Data ss:Type="String">Example</Data></Cell> </Row> <Row> <Cell><Data ss:Type="String">Value</Data></Cell> <Cell><Data ss:Type="Number">123</Data></Cell> </Row> </Table> </Worksheet> </Workbook> Current file extensions Microsoft Excel 2007, along with the other products in the Microsoft Office 2007 suite, introduced new file formats. The first of these (.xlsx) is defined in the Office Open XML (OOXML) specification. Old file extensions Using other Windows applications Windows applications such as Microsoft Access and Microsoft Word, as well as Excel can communicate with each other and use each other's capabilities. 
The most common mechanism is Dynamic Data Exchange: although strongly deprecated by Microsoft, this is a common method to send data between applications running on Windows, with official MS publications referring to it as "the protocol from hell". As the name suggests, it allows applications to supply data to others for calculation and display. It is very common in financial markets, being used to connect to important financial data services such as Bloomberg and Reuters.

OLE
Object Linking and Embedding allows a Windows application to control another to enable it to format or calculate data. This may take the form of "embedding", where an application uses another to handle a task that it is more suited to; for example, a PowerPoint presentation may be embedded in an Excel spreadsheet or vice versa.

Using external data
Excel users can access external data sources via Microsoft Office features such as (for example) connections built with the Office Data Connection file format. Excel files themselves may be updated using a Microsoft-supplied ODBC driver. Excel can accept data in real time through several programming interfaces, which allow it to communicate with many data sources such as Bloomberg and Reuters (through add-ins such as Power Plus Pro).

DDE: "Dynamic Data Exchange" uses the message-passing mechanism in Windows to allow data to flow between Excel and other applications. Although it is easy for users to create such links, programming such links reliably is so difficult that Microsoft, the creators of the system, officially refer to it as "the protocol from hell". In spite of its many issues, DDE remains the most common way for data to reach traders in financial markets.
Network DDE: extended the protocol to allow spreadsheets on different computers to exchange data. Starting with Windows Vista, Microsoft no longer supports the facility.
Real-Time Data: RTD, although in many ways technically superior to DDE, has been slow to gain acceptance, since it requires non-trivial programming skills, and when first released was neither adequately documented nor supported by the major data vendors.

Alternatively, Microsoft Query provides ODBC-based browsing within Microsoft Excel.

Export and migration of spreadsheets
Programmers have produced APIs to open Excel spreadsheets in a variety of applications and environments other than Microsoft Excel. These include opening Excel documents on the web using either ActiveX controls or plugins like the Adobe Flash Player. The Apache POI open-source project provides Java libraries for reading and writing Excel spreadsheet files.

Password protection
Microsoft Excel protection offers several types of passwords:

Password to open a document
Password to modify a document
Password to unprotect the worksheet
Password to protect the workbook
Password to protect the sharing workbook

All passwords except the password to open a document can be removed instantly regardless of the Microsoft Excel version used to create the document. These types of passwords are used primarily for shared work on a document. Such password-protected documents are not encrypted, and a hash computed from the set password is saved in the document's header. Password to protect the workbook is an exception – when it is set, a document is encrypted with the standard password "VelvetSweatshop", but since this password is known to the public, it does not actually add any extra protection to the document. The only type of password that can prevent a trespasser from gaining access to a document is the password to open a document.
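These protection levels can also be set programmatically; a minimal VBA sketch (the sheet name, file path, and passwords are placeholders invented for the example):

Sub ProtectionExamples()
    ' Sheet protection: blocks edits, but does not encrypt anything.
    Worksheets("Sheet1").Protect Password:="draft"
    ' Workbook-structure protection: stops sheets being added, moved or deleted.
    ThisWorkbook.Protect Password:="draft", Structure:=True
    ' "Password to open": the only option that actually encrypts the file.
    ThisWorkbook.SaveAs Filename:="C:\Temp\Report.xlsx", Password:="S3cret!"
End Sub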
The cryptographic strength of this kind of protection depends strongly on the Microsoft Excel version that was used to create the document. In Microsoft Excel 95 and earlier versions, the password to open is converted to a 16-bit key that can be instantly cracked. In Excel 97/2000 the password is converted to a 40-bit key, which can also be cracked very quickly using modern equipment. As regards services that use rainbow tables (e.g. Password-Find), it takes up to several seconds to remove protection. In addition, password-cracking programs can brute-force attack passwords at a rate of hundreds of thousands of passwords a second, which not only lets them decrypt a document but also find the original password.

In Excel 2003/XP the encryption is slightly better – a user can choose any encryption algorithm that is available in the system (see Cryptographic Service Provider). Due to the CSP, an Excel file cannot be instantly decrypted, and thus the password to open cannot simply be removed, though the brute-force attack speed remains quite high. Nevertheless, the older Excel 97/2000 algorithm is set by default. Therefore, users who do not change the default settings lack reliable protection of their documents.

The situation changed fundamentally in Excel 2007, where the modern AES algorithm with a key of 128 bits started being used for encryption, and a 50,000-fold use of the hash function SHA-1 reduced the speed of brute-force attacks down to hundreds of passwords per second. In Excel 2010, the strength of the default protection was doubled by using 100,000 SHA-1 iterations to convert a password to a key.

Other platforms

Excel for mobile
Excel Mobile is a spreadsheet program that can edit XLSX files. It can edit and format text in cells, calculate formulas, search within the spreadsheet, sort rows and columns, freeze panes, filter the columns, add comments, and create charts. It cannot add columns or rows except at the edge of the document, rearrange columns or rows, delete rows or columns, or add spreadsheet tabs. The 2007 version has the ability to use a full-screen mode to deal with limited screen resolution, as well as split panes to view different parts of a worksheet at one time. Protection settings, zoom settings, autofilter settings, certain chart formatting, hidden sheets, and other features are not supported on Excel Mobile, and will be modified upon opening and saving a workbook. In 2015, Excel Mobile became available for Windows 10 and Windows 10 Mobile on Windows Store.

Excel for the web
Excel for the web is a free lightweight version of Microsoft Excel available as part of Office on the web, which also includes web versions of Microsoft Word and Microsoft PowerPoint. Excel for the web can display most of the features available in the desktop versions of Excel, although it may not be able to insert or edit them. Certain data connections are not accessible on Excel for the web, including charts that rely on such external connections. Excel for the web also cannot display legacy features, such as Excel 4.0 macros or Excel 5.0 dialog sheets. There are also small differences in how some of the Excel functions work.

Microsoft Excel Viewer
Microsoft Excel Viewer was a freeware program for Microsoft Windows for viewing and printing spreadsheet documents created by Excel. Microsoft retired the viewer in April 2018, with the last security update released in February 2019 for Excel Viewer 2007 (SP3). The first version released by Microsoft was Excel 97 Viewer.
Excel 97 Viewer was supported in Windows CE for Handheld PCs. In October 2004, Microsoft released Excel Viewer 2003. In September 2007, Microsoft released Excel Viewer 2003 Service Pack 3 (SP3). In January 2008, Microsoft released Excel Viewer 2007 (featuring a non-collapsible Ribbon interface). In April 2009, Microsoft released Excel Viewer 2007 Service Pack 2 (SP2). In October 2011, Microsoft released Excel Viewer 2007 Service Pack 3 (SP3). To view and print Excel files for free, Microsoft advises using the Excel Mobile application on Windows 10, and, on Windows 7 and Windows 8, uploading the file to OneDrive and opening it in a browser with Excel for the web and a Microsoft account.

Limitations and errors
In addition to issues with spreadsheets in general, other problems specific to Excel include numeric precision, misleading statistics functions, MOD function errors, date limitations and more.

Numeric precision
Despite the use of 15-figure precision, Excel can display many more figures (up to thirty) upon user request. But the displayed figures are not those actually used in its computations, and so, for example, the difference of two numbers may differ from the difference of their displayed values. Although such departures are usually beyond the 15th decimal, exceptions do occur, especially for very large or very small numbers. Serious errors can occur if decisions are made based upon automated comparisons of numbers (for example, using the Excel IF function), as equality of two numbers can be unpredictable.

Consider, for example, the fraction 1/9000. Although this number has a decimal representation that is an infinite string of ones, Excel displays only the leading 15 figures. If the number one is added to the fraction, Excel again displays only 15 figures. If one is then subtracted from that sum, the displayed sum had only eleven 1's after the decimal, so the difference from the displayed value would be three 0's followed by a string of eleven 1's. However, the difference reported by Excel is three 0's followed by a string of thirteen 1's and two extra erroneous digits. This is because Excel calculates with about half a digit more than it displays.

Excel works with a modified 1985 version of the IEEE 754 specification. Excel's implementation involves conversions between binary and decimal representations, leading to accuracy that is on average better than one would expect from simple fifteen-digit precision, but that can be worse. See the main article for details.

Besides accuracy in user computations, the question of accuracy in Excel-provided functions may be raised. Particularly in the arena of statistical functions, Excel has been criticized for sacrificing accuracy for speed of calculation. As many calculations in Excel are executed using VBA, an additional issue is the accuracy of VBA, which varies with variable type and user-requested precision.

Statistical functions
The accuracy and convenience of statistical tools in Excel have been criticized: for mishandling situations when data is missing, for returning incorrect values due to inept handling of round-off and large numbers, for only selectively updating calculations on a spreadsheet when some cell values are changed, and for having a limited set of statistical tools. Microsoft has announced that some of these issues are addressed in Excel 2010.

Excel MOD function error
Excel has issues with modulo operations.
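In particular, as detailed below, MOD returns an error rather than an answer when its operands are excessively large. A commonly cited workaround recomputes the remainder from its definition; as a hedged VBA sketch (the helper name is invented for illustration):

Function SafeMod(n As Double, d As Double) As Double
    ' Recompute n mod d directly instead of calling the worksheet MOD
    ' function, mirroring the classic workaround n - d*INT(n/d).
    SafeMod = n - d * Int(n / d)
End Function

The same expression can also be entered directly as a worksheet formula, e.g. =A1-B1*INT(A1/B1) in place of =MOD(A1,B1).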
As noted, in the case of excessively large results, Excel will return the #NUM! error warning instead of an answer.

Fictional leap day in 1900
Excel includes February 29, 1900, incorrectly treating 1900 as a leap year, even though e.g. 2100 is correctly treated as a non-leap year. Thus, a formula counting the days between (for example) February 1, 1900 and March 1, 1900 will return an incorrect result. The bug originated from Lotus 1-2-3, where it was deliberately implemented to save computer memory, and was also intentionally implemented in Excel for the purpose of bug compatibility. This legacy was later carried over into the Office Open XML file format.

Date range
Excel supports dates with years in the range 1900–9999, except that December 31, 1899, can be entered as 0 and is displayed as 0-jan-1900. Converting a fraction of a day into hours, minutes and seconds by treating it as a moment on the day January 1, 1900, does not work for a negative fraction.

Conversion problems
If text is entered that happens to be in a form that Excel interprets as a date, the text can be unintentionally changed to a standard date format. A similar problem occurs when a text happens to be in the form of floating-point notation of a number. In these cases the original exact text cannot be recovered from the result. Formatting the cell as TEXT before entering ambiguous text prevents Excel from converting it.

This issue has caused a well-known problem in the analysis of DNA, for example in bioinformatics. As first reported in 2004, genetic scientists found that Excel automatically and incorrectly converts certain gene names into dates. A follow-up study in 2016 found many peer-reviewed scientific journal papers had been affected and that "Of the selected journals, the proportion of published articles with Excel files containing gene lists that are affected by gene name errors is 19.6%." Excel parses the copied and pasted data and sometimes changes them depending on what it thinks they are. For example, MARCH1 (Membrane Associated Ring-CH-type finger 1) gets converted to the date March 1 (1-Mar) and SEPT2 (Septin 2) is converted into September 2 (2-Sep), etc. While some secondary news sources reported this as a fault with Excel, the original authors of the 2016 paper placed the blame with the researchers misusing Excel.

In August 2020 the HUGO Gene Nomenclature Committee (HGNC) published new guidelines in the journal Nature regarding gene naming in order to avoid issues with "symbols that affect data handling and retrieval." So far 27 genes have been renamed, including changing MARCH1 to MARCHF1 and SEPT1 to SEPTIN1, in order to avoid accidental conversion of the gene names into dates. In October 2023, Microsoft fixed the long-standing issue.

Errors with large strings
The following functions return incorrect results when passed a string longer than 255 characters: TYPE() incorrectly returns 16, meaning "Error value"; IsText(), when called as a method of the VBA object WorksheetFunction (i.e., WorksheetFunction.IsText() in VBA), incorrectly returns "false".

Filenames
Microsoft Excel will not open two documents with the same name and instead will display the following error:

A document with the name '%s' is already open. You cannot open two documents with the same name, even if the documents are in different folders. To open the second document, either close the document that is currently open, or rename one of the documents.

The reason is the calculation ambiguity that would arise with linked cells.
If a formula refers to a cell in a workbook named "Book1" while two open workbooks are both named "Book1", there is no way to tell which one the user means.

Versions

Early history
Microsoft originally marketed a spreadsheet program called Multiplan in 1982. Multiplan became very popular on CP/M systems, but on MS-DOS systems it lost popularity to Lotus 1-2-3. Microsoft released the first version of Excel for the Macintosh on September 30, 1985, and the first Windows version was 2.05 (to synchronize with the Macintosh version 2.2) on November 19, 1987. Lotus was slow to bring 1-2-3 to Windows, and by the early 1990s Excel had started to outsell 1-2-3 and helped Microsoft achieve its position as a leading PC software developer. This accomplishment solidified Microsoft as a valid competitor and showed its future in developing GUI software. Microsoft maintained its advantage with regular new releases, every two years or so.

Microsoft Windows
Excel 2.0 is the first version of Excel for the Intel platform. Versions prior to 2.0 were only available on the Apple Macintosh.

Excel 2.0 (1987)
The first Windows version was labeled "2" to correspond to the Mac version. It was announced on October 6, 1987, and released on November 19. This included a run-time version of Windows. BYTE in 1989 listed Excel for Windows as among the "Distinction" winners of the BYTE Awards. The magazine stated that the port of the "extraordinary" Macintosh version "shines", with a user interface as good as or better than the original.

Excel 3.0 (1990)
Included toolbars, drawing capabilities, outlining, add-in support, 3D charts, and many more new features.

Excel 4.0 (1992)
Included with Microsoft Office 3.0, this version introduced auto-fill. Also, an easter egg in Excel 4.0 reveals a hidden animation of a dancing set of numbers 1 through 3, representing Lotus 1-2-3, which is then crushed by an Excel logo.

Excel 5.0 (1993)
With version 5.0, included in Microsoft Office 4.2 and 4.3, Excel included Visual Basic for Applications (VBA), a programming language based on Visual Basic which adds the ability to automate tasks in Excel and to provide user-defined functions (UDF) for use in worksheets. VBA includes a fully featured integrated development environment (IDE). Macro recording can produce VBA code replicating user actions, thus allowing simple automation of regular tasks. VBA allows the creation of forms and in-worksheet controls to communicate with the user. The language supports use (but not creation) of ActiveX (COM) DLLs; later versions add support for class modules allowing the use of basic object-oriented programming techniques.

The automation functionality provided by VBA made Excel a target for macro viruses. This caused serious problems until antivirus products began to detect these viruses. Microsoft belatedly took steps to prevent the misuse by adding the ability to disable macros completely, to enable macros when opening a workbook, or to trust all macros signed using a trusted certificate.

Versions 5.0 to 9.0 of Excel contain various Easter eggs, including a "Hall of Tortured Souls", a Doom-like minigame, although since version 10 Microsoft has taken measures to eliminate such undocumented features from their products.
Excel 5.0 was released in a 16-bit x86 version for Windows 3.1 and later in a 32-bit version for NT 3.51 (x86/Alpha/PowerPC).

Excel 95 (v7.0)
Released in 1995 with Microsoft Office for Windows 95, this is the first major version after Excel 5.0; there is no Excel 6.0, as all of the Office applications standardized on the same major version number. It was an internal rewrite to 32 bits, with almost no external changes, but faster and more stable. Excel 95 contained a hidden Doom-like mini-game called "The Hall of Tortured Souls", a series of rooms featuring the names and faces of the developers as an Easter egg.

Excel 97 (v8.0)
Included in Office 97 (for x86 and Alpha). This was a major upgrade that introduced the paper-clip Office Assistant and featured standard VBA used instead of internal Excel Basic. It introduced the now-removed Natural Language labels. This version of Excel includes a flight simulator as an Easter egg.

Excel 2000 (v9.0)
Included in Office 2000. This was a minor upgrade, but it introduced an upgraded clipboard that can hold multiple objects at once. The Office Assistant, whose frequent unsolicited appearance in Excel 97 had annoyed many users, became less intrusive. A small 3-D game called "Dev Hunter" (inspired by Spy Hunter) was included as an Easter egg.

Excel 2002 (v10.0)
Included in Office XP. Very minor enhancements.

Excel 2003 (v11.0)
Included in Office 2003. Minor enhancements.

Excel 2007 (v12.0)
Included in Office 2007. This release was a major upgrade from the previous version. Similar to other updated Office products, Excel in 2007 used the new Ribbon menu system. This was different from what users were used to and was met with mixed reactions. One study reported fairly good acceptance by users, except for highly experienced users and users of word-processing applications with a classical WIMP interface, but was less convinced in terms of efficiency and organization. However, an online survey reported that a majority of respondents had a negative opinion of the change, with advanced users being "somewhat more negative" than intermediate users, and users reporting a self-estimated reduction in productivity.

Added functionality included Tables and the SmartArt set of editable business diagrams. Also added was improved management of named variables through the Name Manager, and much-improved flexibility in formatting graphs, which allow (x, y) coordinate labeling and lines of arbitrary weight. Several improvements to pivot tables were introduced. Also, like other Office products, the Office Open XML file formats were introduced, including .xlsm for a workbook with macros and .xlsx for a workbook without macros.

Specifically, many of the size limitations of previous versions were greatly increased. To illustrate, the number of rows was now 1,048,576 (2^20) and the number of columns was 16,384 (2^14; the far-right column is XFD). This changes what is a valid A1 reference versus a named range. This version made more extensive use of multiple cores for the calculation of spreadsheets; however, VBA macros are not handled in parallel, and XLL add-ins were only executed in parallel if they were thread-safe and this was indicated at registration.

Excel 2010 (v14.0)
Included in Office 2010, this is the next major version after v12.0, as version number 13 was skipped.
Minor enhancements and 64-bit support, including the following:
Multi-threaded recalculation (MTR) for commonly used functions
Improved pivot tables
More conditional formatting options
Additional image editing capabilities
In-cell charts called sparklines
Ability to preview before pasting
Office 2010 backstage feature for document-related tasks
Ability to customize the Ribbon
Many new formulas, most highly specialized to improve accuracy

Excel 2013 (v15.0)
Included in Office 2013, along with many new tools in this release:
Improved multi-threading and memory contention
FlashFill
Power View
Power Pivot
Timeline Slicer
Windows App
Inquire
50 new functions

Excel 2016 (v16.0)
Included in Office 2016, along with many new tools in this release:
Power Query integration
Read-only mode for Excel
Keyboard access for pivot tables and slicers in Excel
New chart types
Quick data linking in Visio
Excel forecasting functions
Support for multi-selection of slicer items using touch
Time grouping and Pivot Chart drill-down
Excel data cards

Excel 2019, Excel 2021, Office 365 and subsequent (v16.0)
Microsoft no longer releases Office or Excel in discrete versions. Instead, features are introduced automatically over time using Windows Update. The version number remains 16.0, so only the approximate dates when features appeared can now be given.

Dynamic Arrays. These are essentially array formulas, but they "spill" automatically into neighboring cells and do not need Ctrl-Shift-Enter to create them. Further, dynamic arrays are the default format, with new "@" and "#" operators to provide compatibility with previous versions. This is perhaps the biggest structural change since 2007, and is in response to a similar feature in Google Sheets. Dynamic arrays started appearing in pre-releases about 2018, and as of March 2020 are available in published versions of Office 365 provided a user selected "Office Insiders".

Apple Macintosh
1985 Excel 1.0
1988 Excel 1.5
1989 Excel 2.2
1990 Excel 3.0
1992 Excel 4.0
1993 Excel 5.0 (part of Office 4.x; final Motorola 680x0 version and first PowerPC version)
1998 Excel 8.0 (part of Office 98)
2000 Excel 9.0 (part of Office 2001)
2001 Excel 10.0 (part of Office v. X)
2004 Excel 11.0 (part of Office 2004)
2008 Excel 12.0 (part of Office 2008)
2010 Excel 14.0 (part of Office 2011)
2015 Excel 15.0 (part of Office 2016; Office 2016 for Mac brings the Mac version much closer to parity with its Windows cousin, harmonizing many of the reporting and high-level developer functions, while bringing the ribbon and styling into line with its PC counterpart)

OS/2
1989 Excel 2.2
1990 Excel 2.3
1991 Excel 3.0

Impact
Excel offers many user interface tweaks over the earliest electronic spreadsheets; however, the essence remains the same as in the original spreadsheet software, VisiCalc: the program displays cells organized in rows and columns, and each cell may contain data or a formula, with relative or absolute references to other cells.

Excel 2.0 for Windows, which was modeled after its Mac GUI-based counterpart, indirectly expanded the installed base of the then-nascent Windows environment. Excel 2.0 was released a month before Windows 2.0, and the installed base of Windows was so low at that point in 1987 that Microsoft had to bundle a run-time version of Windows 1.0 with Excel 2.0. Unlike Microsoft Word, there never was a DOS version of Excel.
Excel became the first spreadsheet to allow the user to define the appearance of spreadsheets (fonts, character attributes, and cell appearance). It also introduced intelligent cell re-computation, where only cells dependent on the cell being modified are updated (previous spreadsheet programs recomputed everything all the time or waited for a specific user command). Excel introduced auto-fill, the ability to drag and expand the selection box to automatically copy cell or row contents to adjacent cells or rows, adjusting the copies intelligently by automatically incrementing cell references or contents. Excel also introduced extensive graphing capabilities.

Security
Because Excel is widely used, it has been attacked by hackers. While Excel is not directly exposed to the Internet, if an attacker can get a victim to open a file in Excel, and there is an appropriate security bug in Excel, then the attacker can gain control of the victim's computer. The UK's GCHQ has a tool named TORNADO ALLEY for this purpose.

Games
Besides the Easter eggs, numerous games have been created or recreated in Excel, such as Tetris, 2048, Scrabble, Yahtzee, Angry Birds, Pac-Man, Civilization, Monopoly, Battleship, Blackjack, Space Invaders, and others. In 2020, Excel became an esport with the advent of the Financial Modeling World Cup.
Microsoft Word
Microsoft Word is a word processing program developed by Microsoft. It was first released on October 25, 1983, under the name Multi-Tool Word for Xenix systems. Subsequent versions were later written for several other platforms, including IBM PCs running DOS (1983), Apple Macintosh running the Classic Mac OS (1985), AT&T UNIX PC (1985), Atari ST (1988), OS/2 (1989), Microsoft Windows (1989), SCO Unix (1990), Handheld PC (1996), Pocket PC (2000), macOS (2001), web browsers (2010), iOS (2014), and Android (2015). Microsoft Word has been the de facto standard word processing software since the 1990s, when it eclipsed WordPerfect. Commercial versions of Word are licensed as a standalone product or as a component of Microsoft Office, which can be purchased with a perpetual license, or as part of the Microsoft 365 suite as a subscription.

History
In 1981, Microsoft hired Charles Simonyi, the primary developer of Bravo, the first GUI word processor, which was developed at Xerox PARC. Simonyi started work on a word processor called Multi-Tool Word and soon hired Richard Brodie, a former Xerox intern, who became the primary software engineer. Microsoft announced Multi-Tool Word for Xenix and MS-DOS in 1983. Its name was soon simplified to Microsoft Word. Free demonstration copies of the application were bundled with the November 1983 issue of PC World, making it the first program to be distributed on-disk with a magazine. That year Microsoft demonstrated Word running on Windows.

Unlike most MS-DOS programs at the time, Microsoft Word was designed to be used with a mouse. Advertisements depicted the Microsoft Mouse and described Word as a WYSIWYG, windowed word processor with the ability to undo and display bold, italic, and underlined text, although it could not render fonts. It was not initially popular, since its user interface was different from the leading word processor at the time, WordStar. However, Microsoft steadily improved the product, releasing versions 2.0 through 5.0 over the next six years.

In 1985, Microsoft ported Word to the classic Mac OS (known as Macintosh System Software at the time). This was made easier by Word for DOS having been designed for use with high-resolution displays and laser printers, even though none were yet available to the general public. It was also notable for its very fast cut-and-paste function and unlimited number of undo operations, which were due to its use of the piece table data structure. Following the precedents of LisaWrite and MacWrite, Word for Mac OS added true WYSIWYG features. It fulfilled a need for a word processor that was more capable than MacWrite. After its release, Word for Mac OS's sales were higher than those of its MS-DOS counterpart for at least four years.

The second release of Word for Mac OS, shipped in 1987, was named Word 3.0 to synchronize its version number with Word for DOS; this was Microsoft's first attempt to synchronize version numbers across platforms. Word 3.0 included numerous internal enhancements and new features, including the first implementation of the Rich Text Format (RTF) specification, but was plagued with bugs. Within a few months, Word 3.0 was superseded by a more stable Word 3.01, which was mailed free to all registered users of 3.0. After MacWrite Pro was discontinued in the mid-1990s, Word for Mac OS never had any serious rivals. Word 5.1 for Mac OS, released in 1992, was a very popular word processor owing to its elegance, relative ease of use, and feature set.
Many users say it is the best version of Word for Mac OS ever created. In 1986, an agreement between Atari and Microsoft brought Word to the Atari ST under the name Microsoft Write. The Atari ST version was a port of Word 1.05 for the Mac OS and was never updated.

The first version of Word for Windows was released in 1989. With the release of Windows 3.0 the following year, sales began to pick up and Microsoft soon became the market leader for word processors for IBM PC-compatible computers. In 1991, Microsoft capitalized on Word for Windows' increasing popularity by releasing a version of Word for DOS, version 5.5, that replaced its unique user interface with an interface similar to a Windows application. When Microsoft became aware of the Year 2000 problem, it made Microsoft Word 5.5 for DOS available as a free download, and it remains available from Microsoft's website.

In 1991, Microsoft embarked on a project code-named Pyramid to completely rewrite Microsoft Word from the ground up, with both the Windows and Mac OS versions starting from the same code base. It was abandoned when it was determined that it would take the development team too long to rewrite and then catch up with all the new capabilities that could have been added in the same time without a rewrite. Instead, the next versions of Word for Windows and Mac OS, dubbed version 6.0, both started from the code base of Word for Windows 2.0.

With the release of Word 6.0 in 1993, Microsoft again attempted to synchronize the version numbers and coordinate product naming across platforms, this time across DOS, Mac OS, and Windows (this was the last version of Word for DOS). It introduced AutoCorrect, which automatically fixed certain typing errors, and AutoFormat, which could reformat many parts of a document at once. While the Windows version received favorable reviews (e.g., from InfoWorld), the Mac OS version was widely derided. Many accused it of being slow, clumsy, and memory-intensive, and its user interface differed significantly from Word 5.1. In response to user requests, Microsoft offered Word 5 again, after it had been discontinued. Subsequent versions of Word for macOS are no longer direct ports of Word for Windows, instead featuring a mixture of ported code and native code.

File formats

Filename extensions
Microsoft Word's native file formats are denoted either by a .doc or .docx filename extension. Although the .doc extension has been used in many different versions of Word, it actually encompasses four distinct file formats:
Word for DOS
Word for Windows 1 and 2; Word 3 and 4 for Mac OS
Word 6 and Word 95 for Windows; Word 6 for Mac OS
Word 97 and later for Windows; Word 98 and later for Mac OS
(The classic Mac OS of the era did not use filename extensions.) The newer .docx extension signifies the Office Open XML international standard for Office documents and is used by default by Word 2007 and later for Windows as well as Word 2008 and later for macOS.

Binary formats (Word 97–2007)
During the late 1990s and early 2000s, the default Word document format (.DOC) became a de facto standard of document file formats for Microsoft Office users. There are different versions of "Word Document Format" used by default in Word 97–2007. Each binary Word file is a Compound File, a hierarchical file system within a file. According to Joel Spolsky, the Word Binary File Format is extremely complex mainly because its developers had to accommodate an overwhelming number of features and prioritize performance over anything else.
As with all OLE Compound Files, the Word Binary Format consists of "storages", which are analogous to computer folders, and "streams", which are similar to computer files. Each storage may contain streams or other storages. Each Word binary file must contain a stream called the "WordDocument" stream, and this stream must start with a File Information Block (FIB). The FIB serves as the first point of reference for locating everything else, such as where the text in a Word document starts and ends, what version of Word created the document, and other attributes. Word 2007 and later continue to support the DOC file format, although it is no longer the default.

XML Document (Word 2003)
The XML format introduced in Word 2003 was a simple, XML-based format called WordProcessingML or WordML. The Microsoft Office XML formats are XML-based document formats (or XML schemas) introduced in versions of Microsoft Office prior to Office 2007. Microsoft Office XP introduced a new XML format for storing Excel spreadsheets, and Office 2003 added an XML-based format for Word documents. These formats were succeeded by Office Open XML (ECMA-376) in Microsoft Office 2007.

Cross-version compatibility
Opening a Word Document file in a version of Word other than the one with which it was created can cause an incorrect display of the document. The document formats of the various versions change in subtle and not-so-subtle ways (such as changing the font or the handling of more complex tasks like footnotes). Formatting created in newer versions does not always survive when viewed in older versions of the program, nearly always because that capability does not exist in the previous version. Rich Text Format (RTF), an early effort to create a format for interchanging formatted text between applications, is an optional format for Word that retains most formatting and all content of the original document.

Third-party formats
Plugins permitting the Windows versions of Word to read and write formats it does not natively support, such as the international standard OpenDocument format (ODF) (ISO/IEC 26300:2006), are available. Up until the release of Service Pack 2 (SP2) for Office 2007, Word did not natively support reading or writing ODF documents without a plugin, namely the Sun ODF Plugin or the OpenXML/ODF Translator. With SP2 installed, ODF format 1.1 documents can be read and saved like any other supported format, in addition to those already available in Word 2007. The implementation faced substantial criticism, and the ODF Alliance and others claimed that the third-party plugins provided better support. Microsoft later declared that the ODF support has some limitations.

In October 2005, one year before the Microsoft Office 2007 suite was released, Microsoft declared that there was insufficient demand from Microsoft customers for international standard OpenDocument format support, and that therefore it would not be included in Microsoft Office 2007. This statement was repeated in the following months. In response, on October 20, 2005, an online petition was created to demand ODF support from Microsoft. In May 2006, the ODF plugin for Microsoft Office was released by the OpenDocument Foundation; Microsoft declared that it had no relationship with the developers of the plugin. In July 2006, Microsoft announced the creation of the Open XML Translator project – tools to build a technical bridge between the Microsoft Office Open XML formats and the OpenDocument Format (ODF).
This work was started in response to government requests for interoperability with ODF. The goal of the project was not to add ODF support to Microsoft Office, but only to create a plugin and an external toolset. In February 2007, this project released a first version of the ODF plugin for Microsoft Word. In February 2007, Sun released an initial version of its ODF plugin for Microsoft Office; version 1.0 was released in July 2007. Microsoft Word 2007 (Service Pack 1) supports (for output only) PDF and XPS formats, but only after manual installation of the Microsoft "Save as PDF or XPS" add-on. In later releases, this was offered by default.

Features
Among its features, Word includes a built-in spell checker, a thesaurus, a dictionary, and utilities for manipulating and editing text. It supports creating tables. Depending on the version, it can perform simple and complex calculations, and supports formatting formulas and equations. The following are some aspects of its feature set.

Templates
Several later versions of Word include the ability for users to create their own formatting templates, allowing them to define a file in which the title, heading, paragraph, and other element designs differ from the standard Word templates. Users can find how to do this under the Help section located near the top right corner (Word 2013 on Windows 8). For example, Normal.dotm is the master template from which all Word documents are created. It determines the margin defaults as well as the layout of the text and font defaults. Although Normal.dotm is already set with certain defaults, the user can change it to new defaults; this will change other documents that are created using the template. It was previously named Normal.dot.

Image formats
Word can import and display images in common bitmap formats such as JPG and GIF. It can also be used to create and display simple line art. Microsoft Word added support for the common SVG vector image format in 2017 for Office 365 ProPlus subscribers, and this functionality was also included in the Office 2019 release.

WordArt
WordArt enables drawing text in a Microsoft Word document such as a title, watermark, or other text, with graphical effects such as skewing, shadowing, rotating, stretching in a variety of shapes and colors, and even including three-dimensional effects. Users can apply formatting effects such as shadow, bevel, glow, and reflection to their document text as easily as applying bold or underline. Users can also spell-check text that uses visual effects and add text effects to paragraph styles.

Macros
A macro is a rule or pattern that specifies how a certain input sequence (often a sequence of characters) should be mapped to an output sequence according to a defined process. Frequently used or repetitive sequences of keystrokes and mouse movements can be automated (a benign sketch appears below). Like other Microsoft Office documents, Word files can include advanced macros and even embedded programs. The language was originally WordBasic, but changed to Visual Basic for Applications as of Word 97.

This extensive functionality can also be used to run and propagate viruses in documents. The tendency for people to exchange Word documents via email, USB flash drives, and floppy disks made this an especially attractive vector in 1999. A prominent example was the Melissa virus, but countless others have existed.
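For contrast with the viruses just mentioned, here is a minimal, benign Word VBA sketch of the kind of keystroke automation described above (the header text and date format are arbitrary illustrative choices):

Sub InsertTimestampHeader()
    ' Insert a bold "last edited" line at the very top of the document.
    Dim rng As Range
    Set rng = ActiveDocument.Range(Start:=0, End:=0)
    rng.Text = "Last edited: " & Format(Date, "yyyy-mm-dd") & vbCr
    rng.Font.Bold = True
End Sub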
Such macro viruses were the only known cross-platform threats between Windows and Macintosh computers, and they were the only infection vectors to affect any macOS system up until the advent of video codec trojans in 2007. Microsoft released patches for Word X and Word 2004 that effectively eliminated the macro problem on the Mac by 2006. Word's macro security setting, which regulates when macros may execute, can be adjusted by the user, but in the most recent versions of Word it is set to HIGH by default, generally reducing the risk from macro-based viruses, which have become uncommon.

Layout issues
Before Word 2010 (Word 14) for Windows, the program was unable to correctly handle ligatures defined in OpenType fonts. Ligature glyphs with Unicode codepoints could be inserted manually, but were not recognized by Word for what they are, breaking spell checking, while custom ligatures present in the font were not accessible at all. Since Word 2010, the program has advanced typesetting features that can be enabled: OpenType ligatures, kerning, and hyphenation (previous versions already had the latter two features). Other layout deficiencies of Word include the inability to set crop marks or thin spaces. Various third-party workaround utilities have been developed.

In Word 2004 for Mac OS X, support of complex scripts was inferior even to Word 97, and Word 2004 did not support Apple Advanced Typography features like ligatures or glyph variants.

Issues with technical documents
Microsoft Word is only partially suitable for some kinds of technical writing, specifically that which requires mathematical equations, figure placement, table placement, and cross-references to any of these items. The usual workaround for equations is to use a third-party equation typesetter. Figures and tables must be placed manually; there is an anchor mechanism, but it is not designed for fully automatic figure placement, and editing text after placing figures and tables often requires re-placing those items by moving the anchor point, and even then the placement options are limited. This problem has been deeply baked into Word's structure since 1985, as it does not know where page breaks will occur until the document is printed.

Bullets and numbering
Microsoft Word supports bullet lists and numbered lists. It also features a numbering system that helps add correct numbers to pages, chapters, headers, footnotes, and entries of tables of content; these numbers automatically change to correct ones as new items are added or existing items are deleted. Bullets and numbering can be applied directly to paragraphs and converted to lists. Word 97 through 2003, however, had problems adding correct numbers to numbered lists. In particular, a second, unrelated numbered list might not have started with number one, but instead resumed numbering after the previous numbered list. Although Word 97 supported a hidden marker indicating that list numbering must restart afterward, the command to insert this marker (the Restart Numbering command) was only added in Word 2003 (a VBA sketch of the equivalent operation appears below). However, if one were to cut the first item of the list and paste it as another item (e.g. the fifth), then the restart marker would move with it and the list would restart in the middle instead of at the top. Word continues to default to non-Unicode characters and non-hierarchical bulleting, despite user preference for PowerPoint-style symbol hierarchies (e.g., filled circle/em dash/filled square/en dash/open circle) and universal compatibility.
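As referenced above, restarting numbering can also be done programmatically. A hedged VBA sketch using the stock number gallery (this illustrates the ContinuePreviousList switch; it is not the internal implementation of Word's menu command):

Sub RestartNumberedList()
    ' Apply the first built-in numbered-list template to the selection,
    ' forcing numbering to restart at 1 instead of continuing the
    ' previous list.
    Selection.Range.ListFormat.ApplyListTemplate _
        ListTemplate:=ListGalleries(wdNumberGallery).ListTemplates(1), _
        ContinuePreviousList:=False
End Sub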
AutoSummarize
Available in certain versions of Word (e.g., Word 2007), AutoSummarize highlights passages or phrases that it considers valuable and can be a quick way of generating a crude abstract or an executive summary. The amount of text to be retained can be specified by the user as a percentage of the current amount of text.

According to Ron Fein of the Word 97 team, AutoSummarize cuts wordy copy to the bone by counting words and ranking sentences. First, AutoSummarize identifies the most common words in the document (barring "a" and "the" and the like) and assigns a "score" to each word – the more frequently a word is used, the higher the score. Then, it "averages" each sentence by adding the scores of its words and dividing the sum by the number of words in the sentence – the higher the average, the higher the rank of the sentence. "It's like the ratio of wheat to chaff," explains Fein.

AutoSummarize was removed from Microsoft Word for Mac OS X 2011, although it was present in Word for Mac 2008. AutoSummarize was removed from the Office 2010 release version (14) as well.

Spike
Spike is a specialized cut command in Microsoft Word. It is named after an implement in restaurants on which receipts are impaled; similarly, it sequentially stores cut data and adds it all together to the document when the second function step, the paste, is performed. Spiking (Ctrl–F3) performs a cut, which can be immediately undone to simulate a copy command, while pasting from the Spike (Ctrl–Shift–F3) also clears the data from the Spike, although this can be avoided by using alternatives to the three-key shortcut.

Hidden text
Word supports marking selected text as "hidden". Hidden text is text that is stored in the document but is not displayed. For example, pages containing large amounts of markup-language text can be made visually more readable during the editing process.

Password protection
Three password types can be set in Microsoft Word:
Password to open a document
Password to modify a document
Password restricting formatting and editing

The second and third password types were developed by Microsoft for convenient shared use of documents rather than for their protection. There is no encryption of documents that are protected by such passwords; the Microsoft Office protection system saves a hash sum of the password in a document's header, where it can be easily accessed and removed by specialized software. The password to open a document offers much tougher protection, which has been steadily enhanced in subsequent editions of Microsoft Office.

Word 95 and all the preceding editions had the weakest protection, which utilized a conversion of a password to a 16-bit key. The key length in Word 97 and 2000 was strengthened up to 40 bits. However, modern cracking software allows removing such a password very quickly – a persistent cracking process takes one week at most. Use of rainbow tables reduces password removal time to several seconds. Some password recovery software can not only remove a password but also find the actual password that was used by a user to encrypt the document, using a brute-force attack approach. Statistically, the possibility of recovering the password depends on the password strength.

In Word 2003/XP the default protection remained the same, but an option that allowed advanced users to choose a Cryptographic Service Provider was added.
If a strong CSP is chosen, guaranteed document decryption becomes unavailable and, therefore, a password can't simply be removed from the document. Nonetheless, a password can be fairly quickly picked with a brute-force attack, because its speed is still high regardless of the CSP selected. Moreover, since the CSPs are not active by default, their use is limited to advanced users only.

Word 2007 offers significantly more secure document protection, which utilizes the modern Advanced Encryption Standard (AES) and converts a password to a 128-bit key using a SHA-1 hash function 50,000 times. This makes password removal impossible (as of today, no computer exists that can pick the key in a reasonable amount of time) and drastically slows the brute-force attack speed down to several hundred passwords per second. Word's 2010 protection algorithm was not changed, apart from increasing the number of SHA-1 conversions to 100,000; consequently, the brute-force attack speed decreased by a further factor of two.

Versions and platforms

Word for Windows
Word for Windows is available stand-alone or as part of the Microsoft Office suite. Word contains rudimentary desktop publishing capabilities and is the most widely used word processing program on the market. Word files are commonly used as the format for sending text documents via e-mail because almost every user with a computer can read a Word document by using the Word application, a Word viewer, or a word processor that imports the Word format (see Microsoft Word Viewer).

Word 6 for Windows NT was the first 32-bit version of the product, released with Microsoft Office for Windows NT around the same time as Windows 95. It was a straightforward port of Word 6.0. Starting with Word 95, each release of Word was named after the year of its release, instead of its version number.

Word 2007 introduced a redesigned user interface that emphasized the most common controls, dividing them into tabs and adding specific options depending on the context, such as selecting an image or editing a table. This user interface, called the Ribbon, was included in Excel, PowerPoint and Access 2007, and was later introduced to other Office applications with Office 2010 and to Windows applications such as Paint and WordPad with Windows 7. The redesigned interface also includes a toolbar that appears when selecting text, with options for formatting included. Word 2007 also included the option to save documents as Adobe Acrobat (PDF) or XPS files, and to upload Word documents as blog posts on services such as WordPress.

Word 2010 allows the customization of the Ribbon, adds a Backstage view for file management, has improved document navigation, allows creation and embedding of screenshots, and integrates with online services such as Microsoft OneDrive. Word 2019 added a dictation function. Word 2021 added co-authoring, a visual refresh of the start experience and tabs, automatic cloud saving, dark mode, line focus, an updated Draw tab, and support for ODF 1.3.

Word for Mac
The Mac was introduced on January 24, 1984, and Microsoft introduced Word 1.0 for Mac a year later, on January 18, 1985. The DOS, Mac, and Windows versions are quite different from each other. Only the Mac version was WYSIWYG and used a graphical user interface, far ahead of the other platforms. Each platform restarted its version numbering at "1.0". There was no version 2 on the Mac, but version 3 came out on January 31, 1987, as described above.
Word 4.0 came out on November 6, 1990, and added automatic linking with Excel, the ability to flow text around graphics, and a WYSIWYG page-view editing mode. Word 5.1 for Mac, released in 1992, ran on the original 68000 CPU and was the last version to be specifically designed as a Macintosh application. The later Word 6 was a Windows port and was poorly received. Word 5.1 continued to run well up to the last releases of the classic Mac OS. Many people continue to run Word 5.1 to this day under an emulated Mac classic system for some of its excellent features, such as document generation and renumbering, or to access their old files.

In 1997, Microsoft formed the Macintosh Business Unit as an independent group within Microsoft focused on writing software for the classic Mac OS. Its first version of Word, Word 98, was released with Office 98 Macintosh Edition. Document compatibility reached parity with Word 97, and it included features from Word 97 for Windows, including spell and grammar checking with squiggles. Users could choose the menus and keyboard shortcuts to be similar to either Word 97 for Windows or Word 5 for Mac.

Word 2001, released in 2000, added a few new features, including the Office Clipboard, which allowed users to copy and paste multiple items. It was the last version to run on the classic Mac OS and, on Mac OS X, it could only run within the Classic Environment. Word X, released in 2001, was the first version to run natively on, and to require, Mac OS X, and introduced non-contiguous text selection.

Word 2004 was released in May 2004. It included a new Notebook Layout view for taking notes either by typing or by voice. Other features, such as tracking changes, were made more similar to Office for Windows.

Word 2008, released on January 15, 2008, included a Ribbon-like feature, called the Elements Gallery, that can be used to select page layouts and insert custom diagrams and images. It also included a new view focused on publishing layout, integrated bibliography management, and native support for the new Office Open XML format. It was the first version to run natively on Intel-based Macs.

Word 2011, released in October 2010, replaced the Elements Gallery in favor of a Ribbon user interface that is much more similar to Office for Windows, and includes a full-screen mode that allows users to focus on reading and writing documents, and support for Office Web Apps.

Word 2021 added real-time co-authoring, automatic cloud saving, dark mode, immersive reader enhancements, line focus, a visual refresh, the ability to save pictures in SVG format, and a new Sketched style outline. Word 2024, released on September 16, 2024, included Word session recovery, support for ODF 1.4, a new theme and color palette, and easier collaboration. Even though collaboration features were also available in Microsoft Word 2021 as part of a post-release update, they were not available in Word LTSC 2021 or Word LTSC 2024.

Write for Atari ST
Microsoft Write for the Atari ST is the Atari version of Microsoft Word 1.05, released for the Apple Macintosh, while sharing the same name as the Microsoft Write program included in Windows during the 1980s and early 1990s. While the program was announced in 1986, various delays caused it to arrive in 1988. Microsoft Write for Atari ST and Microsoft Word for Windows both made their debut at the 1988 COMDEX in Atlanta, Georgia, at their respective booths.
Like the Mac version, the Atari version featured a WYSIWYG display (via GDOS) and used a graphical user interface (via GEM). Microsoft Write was one of the first Atari word processors to utilize the GDOS (Graphics Device Operating System) part of GEM (Graphics Environment Manager), allowing the word processor to display and print graphic fonts and styles, making it a multifont word processor for the Atari ST (a second disk drive was required to run both Microsoft Write and GDOS). Microsoft Write was packaged with GDOS 1.1 and the drivers for the Atari XMM804 dot-matrix printer, along with third-party printers like the Epson FX-80 and Star Micronics NB-15, on four diskettes (3½-inch format). Accompanying the retail packaging was a 206-page slip-cased user's manual divided into three sections: Learning Write, Using Write and Write Reference. In addition, Microsoft Write also featured a "Help Screen" tool to help a user explore the advanced features of the word processor, which earned high praise for its form and presentation.

Write for Macintosh
In October 1987, Microsoft released Microsoft Write for Macintosh. Write is a version of Microsoft Word with limited features that Microsoft hoped would replace the aging MacWrite in the Macintosh word processor market. Write was priced well below Word, though at the time MacWrite was included with new Macintoshes. Write is best described as Word locked in "Short Menus" mode, and as such it used the same file format, so that users could exchange files with no conversion necessary. Write did not sell well and was discontinued before the System 7 era. Microsoft Write was part of a short-lived trend for "lightweight" Macintosh word processors initiated by the introduction of the Macintosh Portable and early PowerBook systems. Others included LetterPerfect and Nisus Compact.

Word on mobile platforms
The first mobile versions of Word were released with Windows CE in 1996 on Handheld PCs and later also on Pocket PCs. The modern Word Mobile supports basic formatting, such as bolding, changing font size, and changing colors (from red, yellow, or green). It can add comments, but can't edit documents with tracked changes. It can't open password-protected documents; change the typeface, text alignment, or style (normal, heading 1); insert responsive checkboxes; insert pictures; or undo. Word Mobile is able neither to display nor insert footnotes, endnotes, page footers, page breaks, certain list indentation, or certain fonts while working on a document, but it retains them if the original document has them. Word Mobile can insert lists, but does not allow setting custom bullet symbols or customizing list numbering. In addition to the features of the 2013 version, the 2007 version on Windows Mobile also has the ability to save documents in the Rich Text Format and open legacy PSW (Pocket Word) files. Furthermore, it includes a spell checker, a word count tool, and a "Find and Replace" command. In 2015, Word Mobile became available for Windows 10 and Windows 10 Mobile on Windows Store. Support for the Windows 10 Mobile version ended on January 12, 2021. Word for iOS was released on March 27, 2014, and Word for Android on January 29, 2015.

Word for the web
Word for the web is a free lightweight version of Microsoft Word available as part of Office on the web, which also includes web versions of Microsoft Excel and Microsoft PowerPoint. Word for the web lacks some Ribbon tabs, such as Design and Mailings.
Mailings allows users to print envelopes and labels and manage mail-merge printing of Word documents. Word for the web is not able to edit certain objects, such as equations, shapes, text boxes or drawings, but a placeholder may be present in the document. Certain advanced features like table sorting or columns will not be displayed, but are preserved as they were in the document. Other views available in the Word desktop app (Outline, Draft, Web Layout, and Full-Screen Reading) are not available, nor are side-by-side viewing, split windows, and the ruler.

Reception
Initial releases of Word were met with criticism. Byte in 1984 criticized the documentation for Word 1.1 and 2.0 for DOS, calling it "a complete farce". It called the software "clever, put together well and performs some extraordinary feats", but concluded that "especially when operated with the mouse, has many more limitations than benefits... extremely frustrating to learn and operate efficiently". PC Magazine's review was very mixed, stating: "I've run into weird word processors before, but this is the first time one's nearly knocked me down for the count", while acknowledging that Word's innovations were the first that caused the reviewer to consider abandoning WordStar. While the review cited an excellent WYSIWYG display, sophisticated print formatting, windows, and footnoting as merits, it criticized many small flaws, very slow performance, and "documentation produced by Madame Sadie's Pain Palace". It concluded that Word was "two releases away from potential greatness".

Compute!'s Apple Applications in 1987 stated that "despite a certain awkwardness", Word 3.01 "will likely become the major Macintosh word processor" with "far too many features to list here". While criticizing the lack of true WYSIWYG, the magazine concluded that "Word is marvelous. It's like a Mozart or Edison, whose occasional gaucherie we excuse because of his great gifts". Compute! in 1989 stated that Word 5.0's integration of text and graphics made it "a solid engine for basic desktop publishing". The magazine approved of improvements to text mode, described the $75 price for upgrading from an earlier version as "the deal of the decade", and concluded that "as a high-octane word processor, Word is worth a look".

During the first quarter of 1996, Microsoft Word accounted for 80% of the worldwide word processing market. In 2013, Microsoft added Word to the new Office 365 product, a subscription-based, cloud-backed bundle of its most popular software, to compete with Google Docs.
https://en.wikipedia.org/wiki/Microsoft%20Office
Microsoft Office
Microsoft Office, MS Office, or simply Office, is an office suite and family of client software, server software, and services developed by Microsoft. The first version of the Office suite, announced by Bill Gates on August 1, 1988, at COMDEX, contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint, all three of which remain core products in Office. Over time, the Office applications have grown substantially closer, with shared features such as a common spell checker, Object Linking and Embedding data integration, and the Visual Basic for Applications scripting language. Microsoft also positions Office as a development platform for line-of-business software under the Office Business Applications brand. The suite currently includes a word processor (Word), a spreadsheet program (Excel), a presentation program (PowerPoint), a notetaking program (OneNote), an email client (Outlook), and a file-hosting service client (OneDrive). The Windows version also includes a database management system (Access). Office is produced in several versions targeted at different end users and computing environments. The original, and most widely used, version is the desktop version, available for PCs running the Windows and macOS operating systems and sold at retail or under volume licensing. Microsoft also maintains mobile apps for Android and iOS, as well as Office on the web, a version of the software that runs within a web browser; these are offered free of charge. Since Office 2013, Microsoft has promoted Office 365 as the primary means of obtaining Microsoft Office: it allows the use of the software and other services on a subscription business model, and users receive feature updates for the lifetime of the subscription, including new features and cloud-computing integration that are not necessarily included in the "on-premises" releases of Office sold under conventional license terms. In 2017, revenue from Office 365 overtook conventional license sales. Microsoft later rebranded most of its standard Office 365 editions as "Microsoft 365" to reflect their inclusion of features and services beyond the core Microsoft Office suite. Although Microsoft announced that it would phase out the Microsoft Office brand in favor of Microsoft 365 by 2023, with the name continuing only for legacy product offerings, it later reversed this decision and announced Office 2024, released in September 2024.

Components

Core apps and services

Microsoft Word is a word processor included in Microsoft Office and in some editions of the now-discontinued Microsoft Works. The first version of Word, released in the autumn of 1983, was for the MS-DOS operating system and introduced the computer mouse to more users. Word 1.0 could be purchased with a bundled mouse, though none was required. Following the precedents of LisaWrite and MacWrite, Word for Macintosh, released in 1985, attempted to add closer WYSIWYG features; it was the first graphical version of Microsoft Word. Initially, Word implemented the proprietary .doc format as its primary format. Word 2007, however, deprecated this format in favor of Office Open XML, which was later standardized by Ecma International as an open format. Support for Portable Document Format (PDF) and OpenDocument (ODF) was first introduced in Word for Windows with Service Pack 2 for Word 2007. Microsoft Excel is a spreadsheet editor that originally competed with the dominant Lotus 1-2-3 and eventually outsold it.
Microsoft released the first version of Excel for the Mac OS in 1985 and the first Windows version (numbered 2.05 to line up with the Mac) in November 1987. Microsoft PowerPoint is a presentation program used to create slideshows composed of text, graphics, and other objects, which can be displayed on-screen and shown by the presenter or printed out on transparencies or slides. Microsoft OneNote is a notetaking program that gathers handwritten or typed notes, drawings, screen clippings and audio commentaries.
https://en.wikipedia.org/wiki/Mole%20fraction
Mole fraction
In chemistry, the mole fraction or molar fraction, also called mole proportion or molar proportion, is a quantity defined as the ratio between the amount of a constituent substance, n_i (expressed in units of moles, symbol mol), and the total amount of all constituents in a mixture, n_tot (also expressed in moles):

x_i = n_i / n_tot

It is denoted x_i (lowercase Roman letter x), sometimes χ (lowercase Greek letter chi). (For mixtures of gases, the letter y is recommended.) It is a dimensionless quantity with dimension 1 and the dimensionless unit moles per mole (mol/mol or mol·mol−1), or simply 1; metric prefixes may also be used (e.g., nmol/mol for 10−9). When expressed in percent, it is known as the mole percent or molar percentage (unit symbol %, sometimes "mol%", equivalent to cmol/mol for 10−2). The mole fraction is called amount fraction by the International Union of Pure and Applied Chemistry (IUPAC) and amount-of-substance fraction by the U.S. National Institute of Standards and Technology (NIST). This nomenclature is part of the International System of Quantities (ISQ), as standardized in ISO 80000-9, which deprecates "mole fraction" on the grounds that mixing information with units is unacceptable when expressing the values of quantities. The sum of all the mole fractions in a mixture is equal to 1:

∑ x_i = 1

Mole fraction is numerically identical to the number fraction, which is defined as the number of particles (molecules) of a constituent, N_i, divided by the total number of all molecules, N_tot. Whereas mole fraction is a ratio of amounts to amounts (in units of moles per moles), molar concentration is a quotient of amount to volume (in units of moles per litre). Other ways of expressing the composition of a mixture as a dimensionless quantity are mass fraction and volume fraction.

Properties

Mole fraction is used very frequently in the construction of phase diagrams. It has a number of advantages:

it is not temperature dependent (as molar concentration is) and does not require knowledge of the densities of the phase(s) involved
a mixture of known mole fraction can be prepared by weighing off the appropriate masses of the constituents
the measure is symmetric: in the mole fractions x = 0.1 and x = 0.9, the roles of "solvent" and "solute" are reversed.

In a mixture of ideal gases, the mole fraction can be expressed as the ratio of partial pressure to total pressure of the mixture:

x_i = p_i / p_tot

In a ternary mixture, the mole fraction of one component can be expressed as a function of the mole fractions of the other components and binary mole ratios, and differential quotients can be formed at constant ratios of this kind. The ratios X, Y, and Z of mole fractions can be written for ternary and multicomponent systems; these can be used for solving partial differential equations, and such equalities can be rearranged to place the differential quotients of mole amounts or fractions on one side. Mole amounts can be eliminated by forming ratios, which yields the corresponding ratios of chemical potentials for ternary and multicomponent systems.

Related quantities

Mass fraction

The mass fraction w_i can be calculated using the formula

w_i = x_i · M_i / M̄

where M_i is the molar mass of the component i and M̄ is the average molar mass of the mixture.

Molar mixing ratio

The mixing of two pure components can be expressed by introducing their amount or molar mixing ratio, r_n = n_2 / n_1. Then the mole fractions of the components are:

x_1 = 1 / (1 + r_n),  x_2 = r_n / (1 + r_n)

The amount ratio equals the ratio of mole fractions of the components,

n_2 / n_1 = x_2 / x_1,

because both numerator and denominator are divided by the sum of the molar amounts of the components.
This property has consequences for representations of phase diagrams using, for instance, ternary plots.

Mixing binary mixtures with a common component to form ternary mixtures

Mixing binary mixtures with a common component gives a ternary mixture with certain mixing ratios between the three components. These mixing ratios and the corresponding mole fractions of the ternary mixture, x_1(123), x_2(123), x_3(123), can be expressed as functions of the several mixing ratios involved: the mixing ratios between the components of the binary mixtures and the mixing ratio used to combine the binary mixtures into the ternary one.

Mole percentage

Multiplying the mole fraction by 100 gives the mole percentage, also referred to as amount/amount percent (abbreviated as (n/n)% or mol %).

Mass concentration

The conversion to and from mass concentration ρ_i is given by:

x_i = (ρ_i / ρ) · (M̄ / M_i),  ρ_i = x_i · ρ · M_i / M̄

where M̄ is the average molar mass of the mixture and ρ is the density of the mixture.

Molar concentration

The conversion to molar concentration c_i is given by:

c_i = x_i · c = x_i · ρ / M̄

where M̄ is the average molar mass of the solution, c is the total molar concentration, and ρ is the density of the solution.

Mass and molar mass

The mole fraction can be calculated from the masses m_i and molar masses M_i of the components:

x_i = (m_i / M_i) / ∑_j (m_j / M_j)

Spatial variation and gradient

In a spatially non-uniform mixture, the mole fraction gradient triggers the phenomenon of diffusion.
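The conversions above are straightforward to exercise numerically. The following Python sketch is a minimal illustration only; the function names and the two-component nitrogen/oxygen sample are invented for this example and are not taken from the article:

```python
# Minimal sketch of mole-fraction arithmetic (illustrative values only).

def mole_fractions_from_masses(masses, molar_masses):
    """x_i = (m_i / M_i) / sum_j (m_j / M_j)"""
    amounts = [m / M for m, M in zip(masses, molar_masses)]  # n_i in mol
    n_tot = sum(amounts)
    return [n / n_tot for n in amounts]

def mass_fractions_from_mole_fractions(x, molar_masses):
    """w_i = x_i * M_i / M_bar, where M_bar is the average molar mass."""
    M_bar = sum(xi * Mi for xi, Mi in zip(x, molar_masses))
    return [xi * Mi / M_bar for xi, Mi in zip(x, molar_masses)]

# Hypothetical sample: 70 g of N2 (M = 28 g/mol) and 30 g of O2 (M = 32 g/mol).
masses = [70.0, 30.0]        # g
molar_masses = [28.0, 32.0]  # g/mol

x = mole_fractions_from_masses(masses, molar_masses)
w = mass_fractions_from_mole_fractions(x, molar_masses)

assert abs(sum(x) - 1.0) < 1e-12  # mole fractions sum to 1
print(x)                          # [0.7272..., 0.2727...]
print(w)                          # recovers the mass proportions [0.7, 0.3]
```

Note the round trip: converting the computed mole fractions back to mass fractions recovers the original mass proportions, which is a quick sanity check on the formulas.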
https://en.wikipedia.org/wiki/Mackinac%20Bridge
Mackinac Bridge
The Mackinac Bridge (also referred to as the Mighty Mac or Big Mac) is a suspension bridge that connects the Upper and Lower peninsulas of the U.S. state of Michigan. It spans the Straits of Mackinac, a body of water connecting Lake Michigan and Lake Huron, two of the Great Lakes. Opened in 1957, the bridge has the world's 27th-longest main span and is the longest suspension bridge between anchorages in the Western Hemisphere. The Mackinac Bridge is part of Interstate 75 (I-75) and carries the Lake Michigan and Lake Huron components of the Great Lakes Circle Tour across the straits; it is also a segment of the U.S. North Country National Scenic Trail. The bridge connects the city of St. Ignace to the north with the village of Mackinaw City to the south. Envisioned since the 1880s, the bridge was designed by the engineer David B. Steinman and completed in 1957, only after many decades of struggles to begin construction. The bridge has since become an iconic symbol of the state of Michigan.

Length

The bridge opened on November 1, 1957, connecting two peninsulas linked for decades by ferries. At the time, the bridge was formally dedicated as the "world's longest suspension bridge between anchorages", allowing a superlative comparison with the Golden Gate Bridge, which has a longer center span between towers, and the San Francisco–Oakland Bay Bridge, which has an anchorage in the middle. It remains the longest suspension bridge with two towers between anchorages in the Western Hemisphere. Much longer anchorage-to-anchorage spans have been built in the Eastern Hemisphere, including the Akashi Kaikyō Bridge in Japan, but the long leadups to the anchorages on the Mackinac make its total shoreline-to-shoreline length of 26,372 feet greater than that of the Akashi Kaikyō. The bridge's main span is the third-longest suspension span in the United States and the 27th-longest suspension span worldwide. It is also one of the world's longest bridges overall.

History

Early history

The Algonquian peoples who lived in the straits area prior to the arrival of Europeans in the 17th century called this region Michilimackinac, which is widely understood to mean "Place of the Great Turtle". This is thought to refer to the shape of what is now called Mackinac Island, although this interpretation of the word is debated by scholars. Trading posts at the Straits of Mackinac attracted peak populations during the summer trading season; they also developed as intertribal meeting places. As usage of the state's mineral and timber resources increased during the 19th century, the area became an important transport hub. In 1881, the three railroads that reached the straits (the Michigan Central; the Grand Rapids & Indiana; and the Detroit, Mackinac & Marquette) jointly established the Mackinac Transportation Company to operate a railroad car ferry service across the straits and connect the two peninsulas. Improved highways along the eastern shores of the Lower Peninsula brought increased automobile traffic to the straits region starting in the 1910s. The state of Michigan initiated an automobile ferry service between Mackinaw City and St. Ignace in 1923; it eventually operated nine ferry boats that would carry as many as 9,000 vehicles per day, and traffic backups could stretch for miles.

Plans for the bridge

After the opening of the Brooklyn Bridge in 1883, local residents began to imagine that such a structure could span the straits. In 1884, a store owner in St.
Ignace published a newspaper advertisement that included a reprint of an artist's conception of the Brooklyn Bridge with the caption "Proposed bridge across the Straits of Mackinac". The idea of the bridge was discussed in the Michigan Legislature as early as the 1880s. At the time, the Straits of Mackinac area was becoming a popular tourist destination, especially following the creation of Mackinac National Park on Mackinac Island in 1875. At a July 1888 meeting of the board of directors of the Grand Hotel on Mackinac Island, Cornelius Vanderbilt II proposed that a bridge be built across the straits, of a design similar to the one then under construction across the Firth of Forth in Scotland; this would advance commerce in the region and help lengthen the resort season of the hotel. Decades went by with no formal action. In 1920, the Michigan state highway commissioner advocated construction of a floating tunnel across the straits. At the invitation of the state legislature, C. E. Fowler of New York City put forth a plan for a long series of causeways and bridges across the straits from Cheboygan, southeast of Mackinaw City, to St. Ignace, using Bois Blanc, Round, and Mackinac islands as intermediate steps.

Formal planning

In 1923, the state legislature ordered the State Highway Department to establish ferry service across the strait. More and more people used the ferries to cross the straits each year, and as they did, the movement to build a bridge grew; Chase Osborn, a former governor, wrote in favor of a bridge. By 1928, the ferry service had become so popular and so expensive to operate that Governor Fred W. Green ordered the department to study the feasibility of building a bridge across the strait. The department deemed the idea feasible, estimating the cost at $30 million. In 1934, the Michigan Legislature created the Mackinac Straits Bridge Authority to explore possible methods of constructing and funding the proposed bridge, and authorized it to seek financing for the project. In the mid-1930s, during the Great Depression, when numerous infrastructure projects received federal aid, the Authority twice attempted to obtain federal funds for the project but was unsuccessful. The United States Army Corps of Engineers and President Franklin D. Roosevelt endorsed the project, but Congress never appropriated funds. Between 1936 and 1940, the Authority selected a route for the bridge based on preliminary studies, and borings were made for a detailed geological study of the route. The preliminary plans featured a three-lane roadway, a railroad crossing on the underdeck of the span, and a center-anchorage double-suspension configuration similar to the design of the San Francisco–Oakland Bay Bridge. Because this would have required sinking an anchorage pier in the deepest area of the straits, the practicality of this design was questionable. A concrete causeway extending from the northern shore was constructed in shallow water from 1939 to 1941. A further engineering challenge arose from the tremendous forces operating against the base of the bridge: the lakes freeze during the winter, and large masses of moving ice place enormous stress on the structure. At that time, with funding for the project still uncertain, further work was put on hold because of the outbreak of World War II.
The Mackinac Straits Bridge Authority was abolished by the state legislature in 1947, but the same body created a new Mackinac Bridge Authority three years later, in 1950. In June 1950, engineers were retained for the project. By then, it was reported that cars queuing for the ferry at Mackinaw City did not reach St. Ignace until five hours later, and the ferries' typical capacity of 460 vehicles per hour could not match the estimated 1,600 per hour for a bridge. After a report by the engineers in January 1951, the state legislature authorized the sale of $85 million in bonds for bridge construction on April 30, 1952. However, a weak bond market in 1953 forced a delay of more than a year before the bonds could be issued.

Engineering and construction

David B. Steinman was appointed as the design engineer in January 1953, and by the end of 1953, estimates and contracts had been negotiated. Abul Hasnat, a civil engineer at the firm, drew up the preliminary plans for the bridge. The total cost estimate at that time was $95 million, with completion estimated by November 1, 1956; tolls collected were to pay for the bridge in 20 years. Construction began on May 7, 1954. The bridge was built under two major contracts: the Merritt-Chapman and Scott Corporation of New York was awarded the contract for all major substructure work, for $25.7 million, while the American Bridge Division of United States Steel Corporation was awarded a contract of more than $44 million to build the steel superstructure. Construction, staged using the 1939–1941 causeway, took three and a half years (four summers, with no winter construction) at a total cost of $100 million and the lives of five workers. Contrary to popular belief, none of them are entombed in the bridge. The bridge opened to traffic on schedule on November 1, 1957, and the ferry service ceased on the same day. The bridge was formally dedicated on June 25, 1958. G. Mennen Williams was governor during the construction of the Mackinac Bridge and began the tradition of the governor leading the Mackinac Bridge Walk across it every Labor Day. Senator Prentiss M. Brown has been called the "father of the Mackinac Bridge" and was honored with a special memorial bridge token created by the Mackinac Bridge Authority. The bridge officially achieved its 100 millionth crossing exactly 40 years after its dedication, on June 25, 1998. The 50th anniversary of the bridge's opening was celebrated on November 1, 2007, in a ceremony hosted by the Mackinac Bridge Authority at the viewing park adjacent to the St. Ignace causeway. The bridge was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2010.

History of the bridge's design

The design of the Mackinac Bridge was directly influenced by the lessons of the first Tacoma Narrows Bridge, which failed in 1940 because of its instability in high winds. Three years after that disaster, Steinman published a theoretical analysis of suspension-bridge stability problems which recommended that future designs include deep stiffening trusses to support the bridge deck and an open-grid roadway to reduce its wind resistance. Both of these features were incorporated into the design of the Mackinac Bridge; the stiffening truss is open to reduce wind resistance.
The road deck is shaped as an airfoil that generates lift in a crosswind, and the center two lanes are open grid to allow vertical (upward) air flow, which fairly precisely cancels that lift, making the roadway aerodynamically stable by design even in very high winds.

Facts and figures

The Mackinac Bridge is a toll bridge on Interstate 75 (I-75). The US Highway 27 (US 27) designation was initially extended across the bridge; in November 1960, sections of I-75 freeway opened from Indian River north to the southern bridge approaches in Mackinaw City, and US 27 was removed from the bridge. It is one of only three tolled segments of I-75, the others being the American half of the International Bridge near Sault Ste. Marie, Michigan, and Alligator Alley in Florida. The current toll is $4.00 for automobiles and $5.00 per axle for trucks. The Mackinac Bridge Authority raised the toll in 2007 to fund a $300 million renovation program, which would include completely replacing the bridge deck. Painting the bridge takes seven years, and when one painting cycle is complete, the next begins. The painting project begun in 1999 was expected to take 20 years to complete because the lead-based paint needed to be removed, incurring additional disposal requirements. The bridge celebrated its 150 millionth vehicle crossing on September 6, 2009.

Selected dimensions and statistics:

Length from cable bent pier to cable bent pier:
Total width of the roadway:
Two outside lanes (width each):
Two inside lanes (width each):
Center mall:
Catwalk, curb, and rail width (each side):
Width of stiffening truss in the suspended span:
Depth of stiffening truss:
Height of the roadway at mid-span (above water level):
Vertical clearance at normal temperature: at the center of the main suspension span; less at the boundaries of the navigation channel
Construction cost: $99.8 million
Height of towers above water:
Maximum depth of towers below water:
Depth of water beneath the center of the bridge:
Number of wires in each main cable: 12,580
Diameter of each wire:
Diameter of each cable:
Total length of wire in the main cables:
Total vehicle crossings, 2005: 4,236,491 (an average of 11,608 per day)
Speed limit: for passenger cars; for heavy trucks (heavy trucks are also required to leave a spacing ahead)

Work and major accident fatalities

Five workers died during the construction of the bridge:

Diver Frank Pepper ascended too quickly from depth on September 16, 1954; despite being rushed to a decompression chamber, the 46-year-old died of the bends.
James LeSarge, 26, lost his balance on October 10, 1954, and fell into a caisson, likely dying of head injuries caused by impact with the criss-crossing steel beams inside.
Albert Abbott, 40, died on October 25, 1954, after falling into the water while working on a beam; witnesses speculated that he had suffered a heart attack.
Jack Baker and Robert Koppen, both 28, died in a catwalk collapse near the north tower on June 6, 1956; it was their first day on the job. Koppen's body was never recovered. Another man suffered a broken ankle.

All five men are memorialized on a plaque near the bridge's northern end (Bridge View Park). Contrary to folklore, no bodies are embedded in the concrete. One worker has died since the bridge was completed: Daniel Doyle fell from scaffolding on August 7, 1997. He survived the fall but succumbed to the cold water; his body was recovered the next day.
Two vehicles have fallen off the bridge:

On September 22, 1989, Leslie Ann Pluhar died when her car, a 1987 Yugo, plunged over the railing. High winds were initially blamed, though this was not supported by wind-speed measurements recorded on and around the bridge at the time of the accident. A later investigation showed that the driver lost control due to excessive speed; her vehicle bumped over the bridge's four-inch-high median, crossed back through the northbound lanes, hit a curb, jumped an outer guardrail, and fell off the bridge.
On March 2, 1997, Richard Alan Daraban drove his car, a 1996 Ford Bronco, over the edge. It was later determined to be a suicide.

On September 10, 1978, a small private plane carrying United States Marine Corps Reserve officers Maj. Virgil Osborne, Capt. James Robbins, and Capt. Wayne W. Wisbrock smashed into one of the bridge's suspension cables while flying in heavy fog. The impact tore the wings off the plane, which then plunged into the Straits of Mackinac; all three men were killed. With the exception of the annual Mackinac Bridge Walk on Labor Day, the bridge is not accessible to pedestrians. As a result, suicides by jumping from the bridge have been rare; roughly a dozen have occurred, with the most recent confirmed case taking place on December 31, 2012. No jumps have occurred during the annual bridge walks.

Crossing the bridge

Some individuals have difficulty crossing bridges, a phenomenon known as gephyrophobia. The Mackinac Bridge Authority operates a Drivers Assistance Program that provides drivers for those with gephyrophobia, or for anyone who is more comfortable having someone else drive them across; more than a thousand people use this service every year. Those interested can arrange, either by phone or with the toll collector, to have their cars or motorcycles driven to the other end for an additional fee. Bicycles and pedestrians are not permitted on the bridge; however, a program is offered to transport bicycles. Until 2017, an exception was allowed for riders of two annual bicycle tours, and a yearly exception is also made for pedestrians (see "Bridge Walk" below). Travelers across the Mackinac Bridge can listen to an AM radio broadcast that recounts the history of the bridge and provides updates on driving conditions.

Bridge Walk

The first Mackinac Bridge Walk was held in 1958, led by Governor G. Mennen Williams during the bridge's dedication ceremony in late June; the walk has been held on Labor Day since 1959. Until 2018, school buses from local districts transported walkers from Mackinaw City to St. Ignace to begin the walk. Thousands of people, traditionally led by the governor of Michigan, cross the five-mile (8 km) span on foot from St. Ignace to Mackinaw City. Before 1964, people walked the bridge from Mackinaw City to St. Ignace. Prior to 2017, two lanes of the bridge would remain open to public vehicle traffic; this policy was changed in 2017 to close the entire bridge to public vehicle traffic for the duration of the event. The Bridge Walk is the only day of the year on which hikers can hike this section of the North Country National Scenic Trail.

Tourism

During the summer months, the Upper Peninsula and the Mackinac Bridge have become a major tourist destination.
In addition to visitors to Mackinac Island, the bridge has attracted interest from a diverse group of tourists, including bridge enthusiasts, bird-watchers, and photographers. The straits area is a popular sailing destination for boats of all types, which makes it easy to get a closer view of the bridge's underlying structure.

In media

On June 25, 1958, to coincide with that year's celebration of the November 1957 opening, the United States Postal Service (USPS) released a 3¢ commemorative stamp featuring the recently completed bridge. It was entitled "Connecting the Peninsulas of Michigan", and 107,195,200 copies were issued. The USPS again honored the Mackinac Bridge as the subject of its 2010 Priority Mail $4.90 stamp, which went on sale February 3. The bridge authority and MDOT unveiled the stamp, which featured a "seagull's-eye view" of the landmark with a passing freighter below. Artist Dan Cosgrove worked from panoramic photographs to create the artwork, one of several designs that Cosgrove has produced for the USPS. On April 24, 1959, Captain John S. Lappo, an officer in the Strategic Air Command operating from Lockbourne AFB, flew his Boeing B-47 Stratojet beneath the bridge; following a general court-martial, he was grounded for life. A feature-length documentary entitled Building the Mighty Mac was produced by Hollywood filmmaker Mark Howell in 1997 and was shown on PBS. The program features numerous interviews with the key people who built the structure and includes restored 16 mm color footage of the bridge's construction. The history and building of the bridge were featured in a 2003 episode of the History Channel TV show Modern Marvels. On July 19, 2007, the Detroit Science Center unveiled a scale model of the Mackinac Bridge as part of the state's 50th-anniversary celebration of the bridge; Sherwin-Williams supplied authentic Mackinac Bridge-colored paint for the project. The bridge and its maintenance crew were featured in an episode of the Discovery Channel TV show Dirty Jobs on August 7, 2007; host Mike Rowe and crew spent several days filming the episode in May 2007. MDOT also featured the bridge on the cover of the 2007 state highway map to celebrate its 50th anniversary.
https://en.wikipedia.org/wiki/Month
Month
A month is a unit of time, used with calendars, that is approximately as long as a natural phase cycle of the Moon; the words month and Moon are cognates. The traditional concept of months arose with the cycle of Moon phases; such lunar months ("lunations") are synodic months and last approximately 29.53 days, making for roughly 12.37 such months in one Earth year. From excavated tally sticks, researchers have deduced that people counted days in relation to the Moon's phases as early as the Paleolithic age. Synodic months, based on the Moon's orbital period with respect to the Earth–Sun line, are still the basis of many calendars today and are used to divide the year. Calendars that developed from the Roman calendar system, such as the internationally used Gregorian calendar, divide the year into 12 months, each of which lasts between 28 and 31 days. The names of the months were Anglicized from various Latin names and events important to Rome, except for the months 9–12, which are named after the Latin numerals 7–10 (septem, octo, novem, and decem) because they were originally the seventh through tenth months in the Roman calendar. In the modern Gregorian calendar, the only month with a variable number of days is the second month, February, which has 29 days during a leap year and 28 days otherwise. Types of months in astronomy The following types of months are mainly of significance in astronomy. Most of them (but not the distinction between sidereal and tropical months) were first recognized in Babylonian lunar astronomy. The sidereal month is defined as the Moon's orbital period in a non-rotating frame of reference (which on average is equal to its rotation period in the same frame). It is about 27.32166 days (27 days 7 hours 43 minutes 11.6 seconds). It is closely equal to the time it takes the Moon to twice pass a "fixed" star (different stars give different results because all have a very small proper motion and are not really fixed in position). A synodic month is the most familiar lunar cycle, defined as the time interval between two consecutive occurrences of a particular phase (such as new moon or full moon) as seen by an observer on Earth. The mean length of the synodic month is 29.53059 days (29 days, 12 hours, 44 minutes, 2.8 seconds). Due to the eccentricity of the lunar orbit around Earth (and to a lesser degree, the Earth's elliptical orbit around the Sun), the length of a synodic month can vary by up to seven hours. The tropical month is the average time for the Moon to pass twice through the same equinox point of the sky. It is 27.32158 days, very slightly shorter than the sidereal month (27.32166) days, because of precession of the equinoxes. An anomalistic month is the average time the Moon takes to go from perigee to perigee—the point in the Moon's orbit when it is closest to Earth. An anomalistic month is about 27.55455 days on average. The draconic month, draconitic month, or nodal month is the period in which the Moon returns to the same node of its orbit; the nodes are the two points where the Moon's orbit crosses the plane of the Earth's orbit. Its duration is about 27.21222 days on average. A synodic month is longer than a sidereal month because the Earth-Moon system is orbiting the Sun in the same direction as the Moon is orbiting the Earth. The Sun moves eastward with respect to the stars (as does the Moon) and it takes about 2.2 days longer for the Moon to return to the same apparent position with respect to the Sun. 
An anomalistic month is longer than a sidereal month because the perigee moves in the same direction as the Moon orbits the Earth, completing one revolution in about nine years; the Moon therefore takes a little longer to return to perigee than to return to the same star. A draconic month is shorter than a sidereal month because the nodes move in the direction opposite to the Moon's orbital motion, completing one revolution in 18.6 years; the Moon therefore returns to the same node slightly earlier than it returns to the same star.

Calendrical consequences

At the simplest level, most well-known lunar calendars are based on the initial approximation that 2 lunations last 59 solar days: a 30-day full month followed by a 29-day hollow month. But this is only roughly accurate and regularly needs intercalation (correction) by a leap day. Additionally, the synodic month does not fit easily into the solar (or "tropical") year, which makes accurate, rule-based lunisolar calendars that combine the two cycles complicated. The most common solution to this problem is the Metonic cycle, which takes advantage of the fact that 235 lunations are approximately 19 tropical years (which add up to not quite 6,940 days): 12 years of the cycle have 12 lunar months each, and 7 years have 13 lunar months each. However, a year based on a Metonic calendar will drift against the seasons by about one day every two centuries. Metonic calendars include the calendar used in the Antikythera Mechanism about 21 centuries ago, and the Hebrew calendar. Alternatively, in a pure lunar calendar, years always consist of 12 lunations, so a year is 354 or 355 days long; the Islamic calendar is the prime example. Consequently, an Islamic year is about 11 days shorter than a solar year and cycles through the seasons in about 33 solar (roughly 34 lunar) years: the Islamic New Year falls on a different Gregorian calendar date in each solar year. Purely solar calendars often have months that no longer relate to the phase of the Moon but are based only on the motion of the Sun relative to the equinoxes and solstices, or are purely conventional, as in the widely used Gregorian calendar. The complexity required in an accurate lunisolar calendar may explain why solar calendars have generally replaced lunisolar and lunar calendars for civil use in most societies.

Months in various calendars

Beginning of the lunar month

The Hellenic calendars, the Hebrew lunisolar calendar, and the Islamic lunar calendar started the month with the first appearance of the thin crescent of the new moon. However, the motion of the Moon in its orbit is very complicated and its period is not constant. The date and time of this actual observation depend on the exact geographical longitude as well as the latitude, atmospheric conditions, the visual acuity of the observers, and so on. Therefore, the beginning and lengths of months defined by observation cannot be accurately predicted. While some groups, such as orthodox Muslims and the Jewish Karaites, still rely on actual moon observations, reliance on astronomical calculations and tabular methods is increasingly common in practice.

Ahom calendar

There are 12 months and an additional leap-year month in the Ahom sexagenary calendar, known as Lak-ni. The first month is Duin Shing.

Roman calendar

The Roman calendar was reformed several times, with the last three enduring reforms occurring during historical times. The last three reformed Roman calendars are called the Julian, Augustan, and Gregorian; all had the same number of days in their months.
Despite other attempts, the names of the months after the Augustan calendar reform have persisted, and the number of days in each month (except February) has remained constant since before the Julian reform. The Gregorian calendar, like the Roman calendars before it, has twelve months, whose Anglicized names are:

{| class="wikitable sortable"
|- style="vertical-align:bottom;"
! Order !! Name !! Number of days
|- style="text-align:center;"
| 1 || style="text-align:left;" | January || 31
|- style="text-align:center;"
| 2 || style="text-align:left;" | February || 28
|- style="text-align:center;"
| 3 || style="text-align:left;" | March || 31
|- style="text-align:center;"
| 4 || style="text-align:left;" | April || 30
|- style="text-align:center;"
| 5 || style="text-align:left;" | May || 31
|- style="text-align:center;"
| 6 || style="text-align:left;" | June || 30
|- style="text-align:center;"
| 7 || style="text-align:left;" | July || 31
|- style="text-align:center;"
| 8 || style="text-align:left;" | August || 31
|- style="text-align:center;"
| 9 || style="text-align:left;" | September || 30
|- style="text-align:center;"
| 10 || style="text-align:left;" | October || 31
|- style="text-align:center;"
| 11 || style="text-align:left;" | November || 30
|- style="text-align:center;"
| 12 || style="text-align:left;" | December || 31
|}

The famous mnemonic Thirty days hath September is a common way of teaching the lengths of the months in the English-speaking world. The knuckles of the four fingers of one's hand and the spaces between them can also be used to remember the lengths of the months: by making a fist, each month is listed as one proceeds across the hand. All months landing on a knuckle are 31 days long, and those landing between knuckles are 30 days long, with variable February being the remembered exception. When the knuckle of the index finger is reached (July), go over to the first knuckle on the other fist, held next to the first (or go back to the first knuckle), and continue with August. This physical mnemonic has been taught to primary school students for many decades, if not centuries. The same cyclical pattern of month lengths matches the musical keyboard's alternation of wide white keys (31 days) and narrow black keys (30 days): the note F corresponds to January, the note F♯ to February (the exceptional 28–29-day month), and so on.

Numerical relations

The mean month length in the Gregorian calendar is 30.436875 days. Any five consecutive months that do not include February contain 153 days. Both relations are checked in the short code sketch below.

Calends, nones, and ides

Months in the pre-Julian Roman calendar included:

Intercalaris, an intercalary month occasionally embedded into February to realign the calendar
Quintilis, later renamed Julius in honour of Julius Caesar
Sextilis, later renamed Augustus in honour of Augustus

The Romans divided their months into three parts, which they called the calends, the nones, and the ides. Their system is somewhat intricate. The ides occur on the thirteenth day in eight of the months, but in March, May, July, and October, they occur on the fifteenth. The nones always occur 8 days (one Roman "week") before the ides, i.e., on the fifth or the seventh. The calends are always the first day of the month, and before Julius Caesar's reform they fell sixteen days (two Roman weeks) after the ides (except after the ides of February and of the intercalary month).
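The month-length rule just described is purely mechanical, so it can be captured in a few lines of code. The following Python sketch is illustrative only (the function names are chosen for this example); it encodes the knuckle mnemonic together with the Gregorian leap-year rule and checks the numerical relations quoted above:

```python
# Gregorian month lengths, mirroring the knuckle mnemonic: months "on a
# knuckle" have 31 days, the rest have 30, and February is the exception.

def is_leap_year(year: int) -> bool:
    """Gregorian rule: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_month(year: int, month: int) -> int:
    if month == 2:
        return 29 if is_leap_year(year) else 28
    # Knuckle months (31 days): Jan, Mar, May, Jul, Aug, Oct, Dec
    return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

# Sanity checks against figures quoted above:
assert sum(days_in_month(2022, m) for m in range(1, 13)) == 365
assert sum(days_in_month(2020, m) for m in range(1, 13)) == 366  # leap year
# Mean Gregorian month: 146,097 days per 400-year cycle / 4,800 months
assert 146097 / 4800 == 30.436875
# Any five consecutive months without February contain 153 days (Mar-Jul):
assert sum(days_in_month(2022, m) for m in range(3, 8)) == 153
```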
Relations between dates, weekdays, and months in the Gregorian calendar

Within a month, the following dates fall on the same day of the week:

01, 08, 15, 22, and 29 (e.g., in January 2022, all these dates fell on a Saturday)
02, 09, 16, 23, and 30 (e.g., in January 2022, all these dates fell on a Sunday)
03, 10, 17, 24, and 31 (e.g., in January 2022, all these dates fell on a Monday)
04, 11, 18, and 25 (e.g., in January 2022, all these dates fell on a Tuesday)
05, 12, 19, and 26 (e.g., in January 2022, all these dates fell on a Wednesday)
06, 13, 20, and 27 (e.g., in January 2022, all these dates fell on a Thursday)
07, 14, 21, and 28 (e.g., in January 2022, all these dates fell on a Friday)

Some months have the same date/weekday structure. In a non-leap year:

January/October (e.g., in 2022, they began on a Saturday)
February/March/November (e.g., in 2022, they began on a Tuesday)
April/July (e.g., in 2022, they began on a Friday)
September/December (e.g., in 2022, they began on a Thursday)
1 January and 31 December fall on the same weekday (e.g., in 2022 on a Saturday)

In a leap year:

February/August (e.g., in 2020, they began on a Saturday)
March/November (e.g., in 2020, they began on a Sunday)
January/April/July (e.g., in 2020, they began on a Wednesday)
September/December (e.g., in 2020, they began on a Tuesday)
29 February (the leap day) falls on the same weekday as 1, 8, 15, and 22 February and 1 August (see above; e.g., in 2020 on a Saturday)

Hebrew calendar

The Hebrew calendar has 12 or 13 months:

Nisan, 30 days ניסן
Iyar, 29 days אייר
Sivan, 30 days סיון
Tammuz, 29 days תמוז
Av, 30 days אב
Elul, 29 days אלול
Tishri, 30 days תשרי
Marcheshvan, 29/30 days מַרְחֶשְׁוָן
Kislev, 30/29 days כסלו
Tevet, 29 days טבת
Shevat, 30 days שבט
Adar 1, 30 days, intercalary month אדר א
Adar 2, 29 days אדר ב

Adar 1 is only added 7 times in 19 years. In ordinary years, Adar 2 is simply called Adar.

Islamic calendar

There are also twelve months in the Islamic calendar. They are named as follows:

Muharram (Restricted/sacred) محرّم
Safar (Empty/Yellow) صفر
Rabī' al-Awwal/Rabi' I (First Spring) ربيع الأول
Rabī' ath-Thānī/Rabi' al-Aakhir/Rabi' II (Second spring or Last spring) ربيع الآخر أو ربيع الثاني
Jumada al-Awwal/Jumaada I (First Freeze) جمادى الأول
Jumada ath-Thānī or Jumādā al-Thānī/Jumādā II (Second Freeze or Last Freeze) جمادى الآخر أو جمادى الثاني
Rajab (To Respect) رجب
Sha'bān (To Spread and Distribute) شعبان
Ramadān (Parched Thirst) رمضان
Shawwāl (To Be Light and Vigorous) شوّال
Dhu al-Qi'dah (The Master of Truce) ذو القعدة
Dhu al-Hijjah (The Possessor of Hajj) ذو الحجة

See Islamic calendar for more information.

Arabic calendar

Hindu calendar

The Hindu calendar has various systems of naming the months. The months in the lunar calendar are: These are also the names used in the Indian national calendar for the newly redefined months. Purushottam Maas or Adhik Maas (adhik = "extra", maas = "month") is an extra month in the Hindu calendar that is inserted to keep the lunar and solar calendars aligned. "Purushottam" is an epithet of Vishnu, to whom the month is dedicated. The names in the solar calendar are simply the names of the zodiac signs in which the Sun travels: Mesha, Vrishabha, Mithuna, Kataka, Simha, Kanyaa, Tulaa, Vrishcika, Dhanus, Makara, Kumbha, and Miina.

Baháʼí calendar

The Baháʼí calendar is the calendar used by the Baháʼí Faith. It is a solar calendar with regular years of 365 days and leap years of 366 days.
Years are composed of 19 months of 19 days each (361 days), plus an extra period of "Intercalary Days" (4 in regular years and 5 in leap years). The months are named after the attributes of God. Days of the year begin and end at sundown.

Iranian calendar (Persian calendar)

The Iranian (Persian) calendar, currently used in Iran, also has 12 months; the Persian names are included in parentheses. It begins on the northern spring equinox.

Farvardin (31 days, فروردین)
Ordibehesht (31 days, اردیبهشت)
Khordad (31 days, خرداد)
Tir (31 days, تیر)
Mordad (31 days, مرداد)
Shahrivar (31 days, شهریور)
Mehr (30 days, مهر)
Aban (30 days, آبان)
Azar (30 days, آذر)
Dey (30 days, دی)
Bahman (30 days, بهمن)
Esfand (29 days; 30 days in leap years, اسفند)

Reformed Bengali calendar

The Bengali calendar, used in Bangladesh, follows solar months and has six seasons. The months and seasons in the calendar are:

Nanakshahi calendar

The months in the Nanakshahi calendar are:

Khmer calendar

Different from the Hindu calendar, the Khmer calendar consists of both a lunar calendar and a solar calendar, with the solar calendar used more commonly than the lunar one. The Khmer lunar calendar most often contains 12 months; however, the eighth month is repeated (as a "leap month") every two or three years, making 13 months instead of 12. Each lunar month has 29 or 30 days. The year normally has 354 or 384 days (when an intercalary month is added), but the calendar follows the rules of the Gregorian calendar to determine leap years and add a leap day to one month, so the Khmer lunar year may have a total of 354, 355, 384, or 385 days.

Thai calendar

Tongan calendar

The Tongan calendar is based on the cycles of the Moon around the Earth in one year. The months are:

Liha Mu'a
Liha Mui
Vai Mu'a
Vai Mui
Faka'afu Mo'ui
Faka'afu Mate
Hilinga Kelekele
Hilinga Mea'a
'Ao'ao
Fu'ufu'unekinanga
'Uluenga
Tanumanga
'O'oamofanongo

Pingelapese

Pingelapese, a language from Micronesia, also uses a lunar calendar, with 12 months. The Moon first appears in March, a month they name Kahlek. This system has been used for hundreds of years and throughout many generations; the calendar is cyclical and relies on the position and shape of the Moon.

Kollam era (Malayalam) calendar

Sinhalese calendar

The Sinhalese calendar is the Buddhist calendar in Sri Lanka, with Sinhala month names. Each full moon Poya day marks the start of a Buddhist lunar month. The first month is Bak.

Duruthu (දුරුතු)
Navam (නවම්)
Mædin (මැදින්)
Bak (බක්)
Vesak (වෙසක්)
Poson (පොසොන්)
Æsala (ඇසල)
Nikini (නිකිණි)
Binara (බිනර)
Vap (වප්)
Il (iL) (ඉල්)
Unduvap (උඳුවප්)

Germanic calendar

The old Icelandic calendar is no longer in official use, but some Icelandic holidays and annual feasts are still calculated from it. It has 12 months, broken down into two groups of six, often termed "winter months" and "summer months". The calendar is peculiar in that the months always start on the same weekday rather than on the same date. Hence Þorri always starts on a Friday sometime between January 22 and January 28 (Old Style: January 9 to January 15), and Góa always starts on a Sunday between February 21 and February 27 (Old Style: February 8 to February 14).
Skammdegi ("Short days") Gormánuður (mid-October – mid-November, "slaughter month" or "Gór's month") Ýlir (mid-November – mid-December, "Yule month") Mörsugur (mid-December – mid-January, "fat sucking month") Þorri (mid-January – mid-February, "frozen snow month") Góa (mid-February – mid-March, "Góa's month, see Nór") Einmánuður (mid-March – mid-April, "lone" or "single month") Náttleysi ("Nightless days") Harpa (mid-April – mid-May, Harpa is a female name, probably a forgotten goddess, first day of Harpa is celebrated as Sumardagurinn fyrsti – first day of summer) Skerpla (mid-May – mid-June, another forgotten goddess) Sólmánuður (mid-June – mid-July, "sun month") Heyannir (mid-July – mid-August, "hay business month") Tvímánuður (mid-August – mid-September, "two" or "second month") Haustmánuður (mid-September – mid-October, "autumn month") Old Georgian calendar *NOTE: New Year in ancient Georgia started from September. Old Swedish calendar Torsmånad (January, 'Torre's month' (ancient god)) Göjemånad (February, 'Goe's month' (ancient goddess)) Vårmånad (March, 'Spring month') Gräsmånad (April, 'Grass month') Blomstermånad (May, 'Bloom month') Sommarmånad (June, 'Summer month') Hömånad (July, 'Hay month') Skördemånad, Rötmånad (August, 'Harvest month' or 'Rot month') Höstmånad (September, 'Autumn month') Slaktmånad (October, 'Slaughter month') Vintermånad (November, 'Winter month') Julmånad (December, 'Christmas month') Old English calendar Like the Old Norse calendar, the Anglo-Saxons had their own calendar before they were Christianized which reflected native traditions and deities. These months were attested by Bede in his works On Chronology and The Reckoning of Time written in the 8th century. His Old English month names are probably written as pronounced in Bede's native Northumbrian dialect. The months were named after the Moon; the new moon marking the end of an old month and start of a new month; the full moon occurring in the middle of the month, after which the whole month took its name. {| class="wikitable" |+ Old English month names from Bede's The Reckoning of Time |- ! Year order ! Northumbrian Old English ! Modern English transliteration ! 
Roman equivalent |- style="vertical-align:top;" |style="text-align:center;"| 1 | Æfterra-ġēola mōnaþ || “After-Yule month” | January |- style="vertical-align:top;" |style="text-align:center;"| 2 | Sol-mōnaþ || “Sol month” | February |- style="vertical-align:top;" |style="text-align:center;"| 3 | Hrēð-mōnaþ || “Hreth month” | March |- style="vertical-align:top;" |style="text-align:center;"| 4 | Ēostur-mōnaþ || “Ēostur month” | April |- style="vertical-align:top;" |style="text-align:center;"| 5 | Ðrimilce-mōnaþ || “Three-milkings month”     | May |- style="vertical-align:top;" |style="text-align:center;"| 6 | Ærra-Liþa || “Ere-Litha” | June |- style="vertical-align:top;" |style="text-align:center;"| 7 | Æftera-Liþa || “After-Litha” | July |- style="vertical-align:top;" |style="text-align:center;"| 8 | Weōd-mōnaþ || “Weed month” | August |- style="vertical-align:top;" |style="text-align:center;"| 9 | Hāliġ-mōnaþ Hærfest-mōnaþ | “Holy month” “Harvest month” | September |- style="vertical-align:top;" |style="text-align:center;"| 10 | Winter-fylleþ || “Winter-filleth” | October |- style="vertical-align:top;" |style="text-align:center;"| 11 | Blōt-mōnaþ || “Blót month” | November |- style="vertical-align:top;" |style="text-align:center;"| 12 | Ærra-ġēola mōnaþ || “Ere-Yule” | December |} When an intercalary month was needed, a third Litha month was inserted in mid-summer. Old Celtic calendar The Coligny calendar (Gaulish/Celtic) is an Iron Age Metonic lunisolar calendar, with 12 lunar months of either 29 or 30 days. The lunar month is calculated to a precision of within 24 hours of the lunar phase, achieved by a particular arrangement of months, and the month of EQUOS having a variable length of 29 or 30 days to adjust for any lunar slippage. This setup means the calendar could stay precisely aligned to its lunar phase indefinitely. The lunar month is divided into two halves, the first of 15 days and the second of 14 or 15 days. The month is calculated to start at the first quarter moon, with the full moon at the centre of the first half-month and the dark moon at the centre of the second half-month. The calendar does not rely on unreliable visual sightings. An intercalary lunar month is inserted before every 30 lunar months to keep in sync with the solar year. Every 276 years this adds one day to the solar point, so if for example the calendar was 1,000 years old, it would only have slipped by less than 4 days against the solar year. Old Hungarian calendar Nagyszombati kalendárium (in Latin: Calendarium Tyrnaviense) from 1579. 
Historically Hungary used a 12-month calendar that appears to have been zodiacal in nature but eventually came to correspond to the Gregorian months as shown below: Boldogasszony hava (January, 'month of the happy/blessed lady') Böjtelő hava (February, 'month of early fasting/Lent' or 'month before fasting/Lent') Böjtmás hava (March, 'second month of fasting/Lent') Szent György hava (April, 'Saint George's month') Pünkösd hava (May, 'Pentecost month') Szent Iván hava (June, 'Saint John [the Baptist]'s month') Szent Jakab hava (July, 'Saint James' month') Kisasszony hava (August, 'month of the Virgin') Szent Mihály hava (September, 'Saint Michael's month') Mindszent hava (October, 'all saints' month') Szent András hava (November, 'Saint Andrew's month') Karácsony hava (December, 'month of Yule/Christmas') Czech calendar Leden – derives from 'led' (ice) Únor – derives from 'nořit' (to dive, referring to the ice sinking into the water due to melting) Březen – derives from 'bříza' (birch) Duben – derives from 'dub' (oak) Květen – derives from 'květ' (flower) Červen – derives from 'červená' (red – for the color of apples and tomatoes) Červenec – is the second 'červen' (formerly known as 2nd červen) Srpen – derives from old Czech word 'sirpsti' (meaning to reflect, referring to the shine on the wheat) Září – means 'to shine' Říjen – derives from 'jelení říje', which refers to the estrous cycle of female elk Listopad – falling leaves Prosinec – derives from old Czech 'prosiněti', which means to shine through (refers to the sun light shining through the clouds) Old Egyptian calendar The ancient civil Egyptian calendar had a year that was 365 days long and was divided into 12 months of 30 days each, plus 5 extra days (epagomenes) at the end of the year. The months were divided into 3 "weeks" of ten days each. Because the ancient Egyptian year was almost a quarter of a day shorter than the solar year and stellar events "wandered" through the calendar, it is referred to as Annus Vagus or "Wandering Year". Thout Paopi Hathor Koiak Tooba Emshir Paremhat Paremoude Pashons Paoni Epip Mesori Nisga'a calendar The Nisga'a calendar coincides with the Gregorian calendar with each month referring to the type of harvesting that is done during the month. K'aliiyee = Going North – referring to the Sun returning to its usual place in the sky Buxwlaks = Needles Blowing About – February is usually a very windy month in the Nass River Valley Xsaak = To Eat Oolichans – Oolichans are harvested during this month Mmaal = Canoes – The river has defrosted, hence canoes are used once more Yansa'alt = Leaves are Blooming – Warm weather has arrived and leaves on the trees begin to bloom Miso'o = Sockeye – majority of Sockeye Salmon runs begin this month Maa'y = Berries – berry picking season Wii Hoon = Great Salmon – referring to the abundance of Salmon that are now running Genuugwwikw = Trail of the Marmot – Marmots, Ermines and animals as such are hunted Xlaaxw = To Eat Trout – trout are mostly eaten this time of year Gwilatkw = To Blanket – The earth is "blanketed" with snow Luut'aa = Sit In – the Sun "sits" in one spot for a period of time French Republican calendar This calendar was proposed during the French Revolution, and used by the French government for about twelve years from late 1793. There were twelve months of 30 days each, grouped into three ten-day weeks called décades. The five or six extra days needed to approximate the tropical year were placed after the months at the end of each year. 
A period of four years ending on a leap day was to be called a Franciade. The year began at the autumn equinox:

Autumn: Vendémiaire, Brumaire, Frimaire
Winter: Nivôse, Pluviôse, Ventôse
Spring: Germinal, Floréal, Prairial
Summer: Messidor, Thermidor, Fructidor

Eastern Ojibwe calendar

Ojibwe month names are based on the key feature of each month; consequently, month names differ between regions, each reflecting the key feature of that month in the particular region. In Eastern Ojibwe, this can be seen in when the sucker makes its run, which allows the Ojibwe to fish for them. Rhodes notes not only the variability in the month names, but also how, in Eastern Ojibwe, these names were originally applied to the lunar months of the lunisolar calendar the Ojibwe originally used, fixed by the date of Akiinaaniwan (typically December 27), which marks when sunrise is latest in the Northern Hemisphere.

{| class="wikitable"
|- style="vertical-align:bottom;"
! Roman month
! Month in Eastern Ojibwe
! English translation
! Original order in the Ojibwa year
! Starting at the first full moon after:
|-
| rowspan=2 | January, in those places that have a sucker run during that time
| n[a]mebin-giizis
| rowspan=2 | Sucker moon
| rowspan=2 |
| rowspan=2 | Akiinaaniwan on 27 December
|-
| n[a]meb[i]ni-giizis
|-
| February
| [o]naab[a]ni-giizis
| Crust-on-the-snow moon
|
| 25 January
|-
| March
| zii[n]z[i]baak[wa]doke-giizis
| Sugaring moon
|
| 26 February
|-
| rowspan=2 | April, in those places that have a sucker run during that time
| n[a]mebin-giizis
| rowspan=2 | Sucker moon
| rowspan=4 |
| rowspan=4 | 25 March
|-
| n[a]meb[i]ni-giizis
|-
| April, in those places that do not have a sucker run during that time
| rowspan=2 | waawaas[a]gone-giizis
| rowspan=2 | Flower moon
|-
| May, in those places that have an April sucker run
|-
| May, in those places that have a January sucker run
| rowspan=2 | g[i]tige-giizis
| rowspan=2 | Planting moon
| rowspan=2 |
| rowspan=2 | 24 April
|-
| June, in those places that have an April sucker run
|-
| June, in those places that have a January sucker run
| [o]deh[i]min-giizis
| Strawberry moon
|
| 23 May
|-
| July
| miin-giizis
| Blueberry moon
|
| 22 June
|-
| August
| [o]dat[a]gaag[o]min-giizis
| Blackberry moon
|
| 20 July
|-
| September
| m[an]daamin-giizis
| Corn moon
|
| 18 August
|-
| rowspan=2 | October
| b[i]naakwe-giizis
| Leaves-fall moon
| rowspan=2 |
| rowspan=2 | 17 September
|-
| b[i]naakwii-giizis
| Harvest moon
|-
| November
| g[a]shkadin-giizis
| Freeze-up moon
|
| 16 October
|-
| December
| g[i]chi-b[i]boon-giizis
| Big-winter moon
|
| 15 November
|-
| January, in those places that do not have a sucker run during that time
| [o]shki-b[i]boon-gii[zi]soons
| Little new-winter moon
|
|
|}
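As a closing numeric cross-check of the lunar arithmetic discussed under "Calendrical consequences" above, the following Python sketch (illustrative only) verifies the Metonic approximation and the seasonal drift of a pure lunar year, using the mean month and year lengths quoted earlier:

```python
# Quick checks of the lunar-calendar arithmetic discussed in this article.

SYNODIC_MONTH = 29.53059   # mean synodic month, in days
TROPICAL_YEAR = 365.24219  # mean tropical year, in days

# Metonic cycle: 235 lunations vs. 19 tropical years, both just under 6,940 days.
lunations = 235 * SYNODIC_MONTH  # ~6939.69 days
years = 19 * TROPICAL_YEAR       # ~6939.60 days
print(lunations - years)         # mismatch of ~0.087 days per 19-year cycle

# That mismatch accumulates to about one day every two centuries:
print(200 / 19 * (lunations - years))  # ~0.92 days per 200 years

# A pure lunar year of 12 lunations is about 11 days short of a solar year,
# so it cycles through the seasons in roughly 33 solar years:
lunar_year = 12 * SYNODIC_MONTH          # ~354.37 days
shortfall = TROPICAL_YEAR - lunar_year   # ~10.88 days
print(TROPICAL_YEAR / shortfall)         # ~33.6 solar years per full cycle
```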
Physical sciences
Time
null
20359
https://en.wikipedia.org/wiki/Mutagen
Mutagen
In genetics, a mutagen is a physical or chemical agent that permanently changes genetic material, usually DNA, in an organism and thus increases the frequency of mutations above the natural background level. As many mutations can cause cancer in animals, such mutagens can be carcinogens, although not all necessarily are. All mutagens have characteristic mutational signatures, with some chemicals becoming mutagenic through cellular processes. The process by which DNA becomes modified is called mutagenesis. Not all mutations are caused by mutagens: so-called "spontaneous mutations" occur due to spontaneous hydrolysis and errors in DNA replication, repair and recombination.
Discovery
The first mutagens to be identified were carcinogens, substances that were shown to be linked to cancer. Tumors were described more than 2,000 years before the discovery of chromosomes and DNA; the Greek physician Hippocrates (c. 460–370 B.C.) named tumors resembling a crab karkinos (from which the word "cancer" is derived via Latin), meaning crab. In 1567, the Swiss physician Paracelsus suggested that an unidentified substance in mined ore (identified as radon gas in modern times) caused a wasting disease in miners, and in England, in 1761, John Hill made the first direct link between cancer and chemical substances by noting that excessive use of snuff may cause nasal cancer. In 1775, Sir Percivall Pott wrote a paper on the high incidence of scrotal cancer in chimney sweeps and suggested chimney soot as its cause. In 1915, Yamagawa and Ichikawa showed that repeated application of coal tar to rabbits' ears produced malignant cancer. Subsequently, in the 1930s, the carcinogenic component of coal tar was identified as a polycyclic aromatic hydrocarbon (PAH), benzo[a]pyrene. Polycyclic aromatic hydrocarbons are also present in soot, which had been suggested as a causative agent of cancer over 150 years earlier.
The association between radiation exposure and cancer had been observed as early as 1902, six years after the discovery of X-rays by Wilhelm Röntgen and of radioactivity by Henri Becquerel. In 1925, Georgii Nadson and German Filippov were the first to create fungal mutants using ionizing radiation. The mutagenic action of such agents was first demonstrated in 1927, when Hermann Muller discovered that X-rays can cause genetic mutations in fruit flies, producing phenotypic mutants as well as observable changes to the chromosomes, visible thanks to the enlarged "polytene" chromosomes in fruit fly salivary glands. His collaborator Edgar Altenburg also demonstrated the mutational effect of UV radiation in 1928. Muller went on to use X-rays to create Drosophila mutants that he used in his studies of genetics. He also found that X-rays not only mutate genes in fruit flies but also affect the genetic makeup of humans. Similar work by Lewis Stadler showed the mutational effect of X-rays on barley in 1928, and of ultraviolet (UV) radiation on maize in 1936. The effect of sunlight had previously been noted in the nineteenth century, when rural outdoor workers and sailors were found to be more prone to skin cancer.
Chemical mutagens were not demonstrated to cause mutation until the 1940s, when Charlotte Auerbach and J. M. Robson found that mustard gas can cause mutations in fruit flies. A large number of chemical mutagens have since been identified, especially after the development in the 1970s of the Ames test by Bruce Ames, which screens for mutagens and allows preliminary identification of carcinogens.
Early studies by Ames showed that around 90% of known carcinogens could be identified as mutagenic in the Ames test (later studies, however, gave lower figures), and ~80% of the mutagens identified through the Ames test may also be carcinogens.
Difference between mutagens and carcinogens
Mutagens are not necessarily carcinogens, and vice versa. Sodium azide, for example, may be mutagenic (and highly toxic), but it has not been shown to be carcinogenic. Conversely, some compounds are not directly mutagenic but stimulate cell growth, which can reduce the effectiveness of DNA repair and indirectly increase the chance of mutations, and therefore of cancer. One example would be anabolic steroids, which stimulate growth of the prostate gland and increase the risk of prostate cancer, among others. Other carcinogens may cause cancer through a variety of mechanisms without producing mutations, such as tumour promotion, immunosuppression that reduces the ability to fight cancer cells or cancer-causing pathogens, disruption of the endocrine system (e.g. in breast cancer), tissue-specific toxicity, and inflammation (e.g. in colorectal cancer).
Difference between mutagens and DNA damaging agents
A DNA damaging agent causes a change in the structure of DNA that is not itself replicated when the DNA is replicated. Examples of DNA damage include a chemical addition to, or disruption of, a nucleotide base in DNA (generating an abnormal nucleotide or nucleotide fragment), or a break in one or both DNA strands. When duplex DNA containing a damaged base is replicated, an incorrect base may be inserted in the newly synthesized strand opposite the damaged base in the complementary template strand, and this can become a mutation in the next round of replication. A DNA double-strand break may also be repaired by an inaccurate process, leading to an altered base pair, that is, a mutation. However, mutations and DNA damage differ in a fundamental way: mutations can, in principle, be replicated when DNA replicates, whereas DNA damage is not necessarily replicated. Thus, DNA damaging agents often cause mutations as a secondary consequence, but not all DNA damage leads to mutation and not all mutations arise from DNA damage. The term genotoxic means toxic (damaging) to DNA.
Effects
Mutagens can cause changes to the DNA and are therefore genotoxic. They can affect the transcription and replication of the DNA, which in severe cases can lead to cell death. A mutagen produces mutations in the DNA; a deleterious mutation can result in aberrant, impaired or lost function of a particular gene, and the accumulation of mutations may lead to cancer. Mutagens may therefore also be carcinogens. However, some mutagens exert their mutagenic effect through their metabolites, so whether such mutagens actually become carcinogenic may depend on the metabolic processes of an organism, and a compound shown to be mutagenic in one organism may not necessarily be carcinogenic in another. Different mutagens act on DNA differently. Powerful mutagens may result in chromosomal instability, causing chromosomal breakages and rearrangements of the chromosomes such as translocation, deletion, and inversion. Such mutagens are called clastogens. Mutagens may also modify the DNA sequence; the changes in nucleic acid sequences caused by mutations include substitution of nucleotide base-pairs and insertions and deletions of one or more nucleotides in DNA sequences.
Although some of these mutations are lethal or cause serious disease, many have minor effects, as they do not result in residue changes with a significant effect on the structure and function of the proteins. Many mutations are silent mutations, causing no visible effects at all, either because they occur in non-coding or non-functional sequences or because they do not change the amino-acid sequence, owing to the redundancy of codons. Some mutagens can cause aneuploidy, changing the number of chromosomes in the cell; these are known as aneuploidogens.
In the Ames test, where varying concentrations of the chemical are used, the dose-response curve obtained is nearly always linear, suggesting that there may be no threshold for mutagenesis. Similar results are obtained in studies with radiation, indicating that there may be no safe threshold for mutagens. However, the no-threshold model is disputed, with some arguing for a dose-rate-dependent threshold for mutagenesis. Some have proposed that low levels of some mutagens may stimulate DNA repair processes and therefore may not necessarily be harmful. More recent approaches with sensitive analytical methods have shown that there may be non-linear or bilinear dose-responses for genotoxic effects, and that the activation of DNA repair pathways can prevent mutations arising from a low dose of mutagen.
Types
Mutagens may be of physical, chemical or biological origin. Some act directly on the DNA, causing direct damage that most often results in replication errors; others act on the replication machinery or on chromosomal partitioning. Many mutagens are not mutagenic by themselves, but can form mutagenic metabolites through cellular processes, for example through the activity of the cytochrome P450 system and other oxygenases such as cyclooxygenase. Such mutagens are called promutagens.
Physical mutagens
Ionizing radiation such as X-rays, gamma rays and alpha particles causes DNA breakage and other damage. The most common lab sources include cobalt-60 and cesium-137.
Ultraviolet radiation at wavelengths around 260 nm is absorbed strongly by bases, producing pyrimidine dimers, which can cause errors in replication if left uncorrected.
Radioactive decay, such as that of 14C incorporated in DNA, which decays into nitrogen.
Chemical mutagens
Chemical mutagens either directly or indirectly damage DNA. On this basis, they are of two types:
Directly acting chemical mutagens
These damage DNA directly; they may or may not additionally be metabolized to products with even higher mutagenic potential.
Reactive oxygen species (ROS) – These may be superoxide, hydroxyl radicals and hydrogen peroxide, and a large number of these highly reactive species are generated by normal cellular processes, for example as by-products of mitochondrial electron transport or of lipid peroxidation. As an example of the latter, 15-hydroperoxyeicosatetraenoic acid, a natural product of cellular cyclooxygenases and lipoxygenases, breaks down to form 4-hydroxy-2(E)-nonenal, 4-hydroperoxy-2(E)-nonenal, 4-oxo-2(E)-nonenal, and cis-4,5-epoxy-2(E)-decanal; these bifunctional electrophiles are mutagenic in mammalian cells and may contribute to the development and/or progression of human cancers (see 15-Hydroxyicosatetraenoic acid). A number of mutagens may also generate these ROS. ROS may produce many base adducts, as well as DNA strand breaks and crosslinks.
Deaminating agents – for example nitrous acid, which can cause transition mutations by converting cytosine to uracil.
Polycyclic aromatic hydrocarbons (PAHs), when activated to diol epoxides, can bind to DNA and form adducts.
Alkylating agents such as ethylnitrosourea. These compounds transfer methyl or ethyl groups to bases or to the backbone phosphate groups. Alkylated guanine may mispair with thymine. Some of these agents may cause DNA crosslinking and breakage. Nitrosamines are an important group of mutagens found in tobacco; they may also be formed in smoked meats and fish via the interaction of amines in food with nitrites added as preservatives. Other alkylating agents include mustard gas and vinyl chloride.
Aromatic amines and amides have been associated with carcinogenesis since 1895, when the German physician Ludwig Rehn observed a high incidence of bladder cancer among workers in the German synthetic aromatic amine dye industry. 2-Acetylaminofluorene, originally used as a pesticide but also found in cooked meat, may cause cancer of the bladder, liver, ear, intestine, thyroid and breast.
Alkaloids from plants, such as those from Vinca species, may be converted by metabolic processes into active mutagens or carcinogens.
Bromine and some compounds that contain bromine in their chemical structure.
Sodium azide, an azide salt that is a common reagent in organic synthesis and a component of many car airbag systems.
Psoralen combined with ultraviolet radiation causes DNA cross-linking and hence chromosome breakage.
Benzene, an industrial solvent and precursor in the production of drugs, plastics, synthetic rubber and dyes.
Chromium trioxide, a highly toxic and oxidizing substance used in electroplating.
Indirectly acting chemical mutagens
These are not necessarily mutagenic by themselves, but are converted into mutagenic compounds through metabolic processes in cells; that is, they are promutagens. Examples include:
Polycyclic aromatic hydrocarbons (PAHs)
Aromatic amines
Benzene
Some chemical mutagens, including the furocoumarins and limettin, additionally require activation by UV or visible light for their mutagenic effect.
Base analogs
Base analogs can substitute for DNA bases during replication and cause transition mutations. Some examples are 5-bromouracil and 2-aminopurine.
Intercalating agents
Intercalating agents, such as ethidium bromide and proflavine, are molecules that may insert between bases in DNA, causing frameshift mutations during replication. Some, such as daunorubicin, may block transcription and replication, making them highly toxic to proliferating cells.
Metals
Many metals, such as arsenic, cadmium, chromium, nickel and their compounds, may be mutagenic, but they act via a number of different mechanisms. Arsenic, chromium, iron, and nickel may be associated with the production of ROS, and some of these may also alter the fidelity of DNA replication. Nickel may also be linked to DNA hypermethylation and histone deacetylation, while some metals such as cobalt, arsenic, nickel and cadmium may also affect DNA repair processes such as DNA mismatch repair and base and nucleotide excision repair.
Biological agents
Transposons – sections of DNA that undergo autonomous relocation or multiplication. Their insertion into chromosomal DNA disrupts functional elements of genes.
Oncoviruses – viral DNA may be inserted into the genome and disrupt genetic function.
Infectious agents were suggested to cause cancer as early as 1908, by Vilhelm Ellermann and Oluf Bang, and in 1911 by Peyton Rous, who discovered the Rous sarcoma virus.
Bacteria – some bacteria, such as Helicobacter pylori, cause inflammation during which oxidative species are produced, causing DNA damage and reducing the efficiency of DNA repair systems, thereby increasing mutation.
Protection
Antioxidants are an important group of anticarcinogenic compounds that may help remove ROS or potentially harmful chemicals. They may be found naturally in fruits and vegetables. Examples of antioxidants are vitamin A and its carotenoid precursors, vitamin C, vitamin E, polyphenols, and various other compounds. β-Carotene is the red-orange compound found in vegetables such as carrots and tomatoes. Vitamin C may prevent some cancers by inhibiting the formation of mutagenic N-nitroso compounds (nitrosamines). Flavonoids, such as EGCG in green tea, have also been shown to be effective antioxidants and may have anti-cancer properties. Epidemiological studies indicate that a diet rich in fruits and vegetables is associated with a lower incidence of some cancers and longer life expectancy; however, the effectiveness of antioxidant supplements in cancer prevention in general is still the subject of some debate.
Other chemicals may reduce mutagenesis or prevent cancer via other mechanisms, although for some the precise mechanism of the protective effect is not certain. Selenium, present as a micronutrient in vegetables, is a component of important antioxidant enzymes such as glutathione peroxidase. Many phytonutrients may counter the effect of mutagens; for example, sulforaphane in vegetables such as broccoli has been shown to be protective against prostate cancer. Others that may be effective against cancer include indole-3-carbinol from cruciferous vegetables and resveratrol from red wine.
An effective precaution an individual can take is to limit exposure to mutagens such as UV radiation and tobacco smoke. In Australia, where people with pale skin are often exposed to strong sunlight, melanoma is the most common cancer diagnosed in people aged 15–44 years. In 1981, a human epidemiological analysis by Richard Doll and Richard Peto indicated that smoking caused 30% of cancers in the US. Diet is also thought to cause a significant number of cancer deaths, and it has been estimated that around 32% of cancer deaths may be avoidable by modification of the diet. Mutagens identified in food include mycotoxins from food contaminated with fungal growth, such as the aflatoxins that may be present in contaminated peanuts and corn; heterocyclic amines generated in meat cooked at high temperature; PAHs in charred meat and smoked fish, as well as in oils, fats, bread, and cereal; and nitrosamines generated from the nitrites used as food preservatives in cured meat such as bacon (ascorbate, which is added to cured meat, however, reduces nitrosamine formation). Overly browned starchy foods such as bread, biscuits and potatoes can generate acrylamide, a chemical shown to cause cancer in animal studies. Excessive alcohol consumption has also been linked to cancer; the possible mechanisms for its carcinogenicity include formation of the possible mutagen acetaldehyde and induction of the cytochrome P450 system, which is known to produce mutagenic compounds from promutagens.
For certain mutagens, such as dangerous chemicals and radioactive materials, as well as infectious agents known to cause cancer, government legislation and regulatory bodies are necessary for their control.
Test systems
Many different systems for detecting mutagens have been developed. Animal systems may more accurately reflect human metabolism; however, they are expensive and time-consuming (they may take around three years to complete) and are therefore not used as a first screen for mutagenicity or carcinogenicity.
Bacterial
Ames test – This is the most commonly used test; Salmonella typhimurium strains deficient in histidine biosynthesis are used. The test checks for mutants that can revert to wild-type, restoring histidine synthesis. It is an easy, inexpensive and convenient initial screen for mutagens (a sketch of the revertant-scoring arithmetic follows at the end of this section).
Resistance to 8-azaguanine in S. typhimurium – Similar to the Ames test, but instead of reverse mutation it checks for forward mutations that confer resistance to 8-azaguanine in a histidine revertant strain.
Escherichia coli systems – Both forward and reverse mutation detection systems have been adapted for use in E. coli. A tryptophan-deficient mutant is used for reverse mutation, while galactose utilization or resistance to 5-methyltryptophan may be used for forward mutation.
DNA repair – E. coli and Bacillus subtilis strains deficient in DNA repair may be used to detect mutagens by the effect of DNA damage on the growth of these cells.
Yeast
Systems similar to the Ames test have been developed in yeast; Saccharomyces cerevisiae is generally used. These systems can check for forward and reverse mutations, as well as recombination events.
Drosophila
Sex-Linked Recessive Lethal Test – Males from a strain with yellow bodies are used in this test. The gene for the yellow body lies on the X chromosome. The fruit flies are fed a diet containing the test chemical, and the progeny are separated by sex. The surviving males are crossed with females of the same generation; if no males with yellow bodies are detected in the second generation, this indicates that a lethal mutation on the X chromosome has occurred.
Plant assays
Plants such as Zea mays, Arabidopsis thaliana and Tradescantia have been used in various assays for the mutagenicity of chemicals.
Cell culture assay
Mammalian cell lines such as Chinese hamster V79 cells, Chinese hamster ovary (CHO) cells or mouse lymphoma cells may be used to test for mutagenesis. Such systems include the HPRT assay for resistance to 8-azaguanine or 6-thioguanine, and the ouabain-resistance (OUA) assay. Rat primary hepatocytes may also be used to measure DNA repair following DNA damage; mutagens may stimulate unscheduled DNA synthesis, which results in more stained nuclear material in cells following exposure.
Chromosome check systems
These systems check for large-scale changes to the chromosomes and may be used with cell cultures or in animal tests. The chromosomes are stained and observed for any changes. Sister chromatid exchange is a symmetrical exchange of chromosome material between sister chromatids and may be correlated with the mutagenic or carcinogenic potential of a chemical. In the micronucleus test, cells are examined for micronuclei, which are chromosome fragments or whole chromosomes left behind at anaphase; it is therefore a test for clastogenic agents that cause chromosome breakage. Other tests may check for various chromosomal aberrations such as chromatid and chromosomal gaps and deletions, translocations, and ploidy changes.
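To make the revertant-counting logic of the Ames test concrete, here is a minimal sketch, not from the source: the colony counts are hypothetical, and the twofold, dose-related increase used here as the positive criterion is a common rule of thumb rather than a fixed standard.

```python
# Illustrative scoring of Ames-test revertant counts (hypothetical data).
# A chemical is often flagged as mutagenic when treated plates show a
# dose-related increase in revertant colonies reaching at least twice the
# solvent-control level; the threshold is an assumption, not a standard.

def mean(xs):
    return sum(xs) / len(xs)

def score_ames(control_counts, treated_by_dose, fold_threshold=2.0):
    """Return (fold increase per dose, verdict) from revertant colony counts."""
    baseline = mean(control_counts)  # spontaneous revertants per plate
    folds = {dose: mean(counts) / baseline
             for dose, counts in treated_by_dose.items()}
    doses = sorted(folds)
    dose_related = all(folds[a] <= folds[b] for a, b in zip(doses, doses[1:]))
    positive = dose_related and folds[doses[-1]] >= fold_threshold
    return folds, ("positive" if positive else "negative")

# Hypothetical counts: spontaneous revertants vs. three doses of a chemical.
control = [22, 25, 19]
treated = {0.1: [30, 28, 33], 1.0: [55, 61, 58], 10.0: [140, 151, 133]}
folds, verdict = score_ames(control, treated)
print(folds, verdict)  # rising fold increases, well above 2x at the top dose
```

In practice, plates are also run with and without a metabolic activation fraction, consistent with the promutagen discussion above, so that compounds requiring metabolic conversion are not missed.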
Animal test systems
Rodents are usually used in animal tests. The chemicals under test are usually administered in the food or drinking water, but sometimes by dermal application, by gavage, or by inhalation, and the tests are carried out over the major part of the rodent's life span. In tests that check for carcinogens, the maximum tolerated dose is first determined, then a range of doses is given to around 50 animals throughout the notional two-year lifespan of the animal. After death, the animals are examined for signs of tumours. Differences in metabolism between rat and human, however, mean that humans may not respond to a mutagen in exactly the same way, and dosages that produce tumours in the animal test may be unreasonably high for a human, i.e. the equivalent amount required to produce tumours in humans may far exceed what a person might encounter in real life.
Mice with recessive mutations for a visible phenotype may also be used to check for mutagens. Females with the recessive mutation crossed with wild-type males yield the same phenotype as the wild-type, so any observable change to the phenotype indicates that a mutation induced by the mutagen has occurred. Mice may also be used for dominant lethal assays, in which early embryonic deaths are monitored. Male mice are treated with the chemicals under test and mated with females; the females are then sacrificed before parturition and early fetal deaths are counted in the uterine horns.
Transgenic mouse assays, using a mouse strain infected with a viral shuttle vector, are another method for testing mutagens. Animals are first treated with the suspected mutagen, the mouse DNA is then isolated, and the phage segment is recovered and used to infect E. coli. Using a method similar to the blue-white screen, plaques formed from DNA containing a mutation are white, while those without are blue.
In anti-cancer therapy
Many mutagens are highly toxic to proliferating cells, and they are often used to destroy cancer cells. Alkylating agents such as cyclophosphamide and cisplatin, as well as intercalating agents such as daunorubicin and doxorubicin, may be used in chemotherapy. However, because of their effect on other rapidly dividing cells, they may have side effects such as hair loss and nausea. Research on better targeted therapies may reduce such side effects. Ionizing radiation is used in radiation therapy.
In fiction
In science fiction, mutagens are often represented as substances that are capable of completely changing the form of the recipient or granting them superpowers. Powerful radiation is the agent of mutation for the superheroes in Marvel Comics's Fantastic Four, Daredevil, and Hulk; in the Ninja Turtles franchise the mutagen is an "ooze", and for the Inhumans the mutagen is the Terrigen Mist. Mutagens are also featured in video games such as Cyberia, System Shock, The Witcher, Metroid Prime: Trilogy, Resistance: Fall of Man, Resident Evil, Infamous, Freedom Force, Command & Conquer, Gears of War 3, StarCraft, BioShock, Fallout, Underrail, and Maneater. In the "nuclear monster" films of the 1950s, nuclear radiation mutates humans and common insects, often to enormous size and aggression; these films include Godzilla, Them!, Attack of the 50 Foot Woman, Tarantula!, and The Amazing Colossal Man.
Biology and health sciences
Genetics
Biology
20369
https://en.wikipedia.org/wiki/Mitosis
Mitosis
Mitosis is a part of the cell cycle in which replicated chromosomes are separated into two new nuclei. Cell division by mitosis is an equational division which gives rise to genetically identical cells in which the total number of chromosomes is maintained. Mitosis is preceded by the S phase of interphase (during which DNA replication occurs) and is usually followed by cytokinesis, which divides the cytoplasm, organelles, and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis altogether define the mitotic phase (M phase) of a cell cycle: the division of the mother cell into two daughter cells genetically identical to each other.
The process of mitosis is divided into stages corresponding to the completion of one set of activities and the start of the next. These stages are preprophase (specific to plant cells), prophase, prometaphase, metaphase, anaphase, and telophase. During mitosis, the chromosomes, which have already duplicated during interphase, condense and attach to spindle fibers that pull one copy of each chromosome to opposite sides of the cell. The result is two genetically identical daughter nuclei. The rest of the cell may then continue to divide by cytokinesis to produce two daughter cells. The different phases of mitosis can be visualized in real time using live cell imaging.
An error in mitosis can result in the production of three or more daughter cells instead of the normal two; these outcomes are called tripolar mitosis and multipolar mitosis, respectively. Such errors can be the cause of non-viable embryos that fail to implant. Other errors during mitosis can induce mitotic catastrophe, apoptosis (programmed cell death) or cause mutations, and certain types of cancer can arise from such mutations.
Mitosis occurs only in eukaryotic cells and varies between organisms. For example, animal cells generally undergo an open mitosis, where the nuclear envelope breaks down before the chromosomes separate, whereas fungal cells generally undergo a closed mitosis, where chromosomes divide within an intact cell nucleus. Most animal cells undergo a shape change, known as mitotic cell rounding, to adopt a near-spherical morphology at the start of mitosis. Most human cells are produced by mitotic cell division. Important exceptions include the gametes – sperm and egg cells – which are produced by meiosis. Prokaryotes (bacteria and archaea), which lack a true nucleus, divide by a different process called binary fission.
Discovery
Numerous descriptions of cell division were made during the 18th and 19th centuries, with various degrees of accuracy. In 1835, the German botanist Hugo von Mohl described cell division in the green alga Cladophora glomerata, stating that multiplication of cells occurs through cell division. In 1838, Matthias Jakob Schleiden affirmed that "formation of new cells in their interior was a general rule for cell multiplication in plants", a view later rejected in favour of Mohl's model, owing to the contributions of Robert Remak and others.
In animal cells, cell division with mitosis was discovered in frog, rabbit, and cat cornea cells in 1873 and described for the first time by the Polish histologist Wacław Mayzel in 1875. Bütschli, Schneider and Fol might also have claimed the discovery of the process presently known as "mitosis". In 1873, the German zoologist Otto Bütschli published data from observations on nematodes.
A few years later, he discovered and described mitosis based on those observations.
The term "mitosis", coined by Walther Flemming in 1882, is derived from the Greek word μίτος (mitos, "warp thread"). There are some alternative names for the process, e.g. "karyokinesis" (nuclear division), a term introduced by Schleicher in 1878, or "equational division", proposed by August Weismann in 1887. However, the term "mitosis" is also used in a broad sense by some authors to refer to karyokinesis and cytokinesis together. Presently, "equational division" is more commonly used to refer to meiosis II, the part of meiosis most like mitosis.
Phases
Overview
The primary result of mitosis and cytokinesis is the transfer of a parent cell's genome into two daughter cells. The genome is composed of a number of chromosomes, complexes of tightly coiled DNA that contain genetic information vital for proper cell function. Because each resultant daughter cell should be genetically identical to the parent cell, the parent cell must make a copy of each chromosome before mitosis. This occurs during the S phase of interphase. Chromosome duplication results in two identical sister chromatids bound together by cohesin proteins at the centromere.
When mitosis begins, the chromosomes condense and become visible. In some eukaryotes, for example animals, the nuclear envelope, which segregates the DNA from the cytoplasm, disintegrates into small vesicles. The nucleolus, which makes ribosomes in the cell, also disappears. Microtubules project from opposite ends of the cell, attach to the centromeres, and align the chromosomes centrally within the cell. The microtubules then shorten to pull the sister chromatids of each chromosome apart. Sister chromatids at this point are called daughter chromosomes. As the cell elongates, corresponding daughter chromosomes are pulled toward opposite ends of the cell and condense maximally in late anaphase. A new nuclear envelope forms around each set of daughter chromosomes, which decondense to form interphase nuclei.
During mitotic progression, typically after the onset of anaphase, the cell may undergo cytokinesis. In animal cells, the cell membrane pinches inward between the two developing nuclei to produce two new cells. In plant cells, a cell plate forms between the two nuclei. Cytokinesis does not always occur; coenocytic cells (a type of multinucleate condition) undergo mitosis without cytokinesis.
Interphase
Interphase is a much longer phase of the cell cycle than the relatively short M phase. During interphase, the cell prepares itself for the process of cell division. Interphase is divided into three subphases: G1 (first gap), S (synthesis), and G2 (second gap). During all three parts of interphase, the cell grows by producing proteins and cytoplasmic organelles. However, chromosomes are replicated only during the S phase. Thus, a cell grows (G1), continues to grow as it duplicates its chromosomes (S), grows more and prepares for mitosis (G2), and finally divides (M) before restarting the cycle. All these phases in the cell cycle are highly regulated by cyclins, cyclin-dependent kinases, and other cell cycle proteins. The phases follow one another in strict order, and there are cell cycle checkpoints that give the cell cues to proceed or not from one phase to another. Cells may also temporarily or permanently leave the cell cycle and enter the G0 phase to stop dividing.
This can occur when cells become overcrowded (density-dependent inhibition) or when they differentiate to carry out specific functions for the organism, as is the case for human heart muscle cells and neurons. Some G0 cells have the ability to re-enter the cell cycle.
DNA double-strand breaks can be repaired during interphase by two principal processes. The first process, non-homologous end joining (NHEJ), can join the two broken ends of DNA in the G1, S and G2 phases of interphase. The second process, homologous recombinational repair (HRR), is more accurate than NHEJ in repairing double-strand breaks. HRR is active during the S and G2 phases of interphase, when DNA replication is partially or fully accomplished, since HRR requires two adjacent homologs.
Interphase helps prepare the cell for mitotic division and dictates whether that division will occur, carefully stopping the cell from proceeding whenever its DNA is damaged or an important phase has not been completed. This control is important for the successful completion of mitosis, as it reduces the production of damaged cells and of potentially cancerous cells; a failure of the key interphase checkpoint proteins can give rise to cancerous cells.
Mitosis
Preprophase (plant cells)
In plant cells only, prophase is preceded by a preprophase stage. In highly vacuolated plant cells, the nucleus has to migrate into the center of the cell before mitosis can begin. This is achieved through the formation of a phragmosome, a transverse sheet of cytoplasm that bisects the cell along the future plane of cell division. In addition to phragmosome formation, preprophase is characterized by the formation of a ring of microtubules and actin filaments (called the preprophase band) underneath the plasma membrane, around the equatorial plane of the future mitotic spindle. This band marks the position where the cell will eventually divide. The cells of higher plants (such as the flowering plants) lack centrioles; instead, microtubules form a spindle on the surface of the nucleus and are then organized into a spindle by the chromosomes themselves, after the nuclear envelope breaks down. The preprophase band disappears during nuclear envelope breakdown and spindle formation in prometaphase.
Prophase
During prophase, which occurs after G2 interphase, the cell prepares to divide by tightly condensing its chromosomes and initiating mitotic spindle formation. During interphase, the genetic material in the nucleus consists of loosely packed chromatin. At the onset of prophase, chromatin fibers condense into discrete chromosomes that are typically visible at high magnification through a light microscope. In this stage, chromosomes are long, thin, and thread-like. Each chromosome has two chromatids, joined at the centromere. Gene transcription ceases during prophase and does not resume until late anaphase to early G1 phase. The nucleolus also disappears during early prophase.
Close to the nucleus of an animal cell are structures called centrosomes, each consisting of a pair of centrioles surrounded by a loose collection of proteins. The centrosome is the coordinating center for the cell's microtubules. A cell inherits a single centrosome at cell division, which is duplicated by the cell before a new round of mitosis begins, giving a pair of centrosomes. The two centrosomes polymerize tubulin to help form a microtubule spindle apparatus.
Motor proteins then push the centrosomes along these microtubules to opposite sides of the cell. Although centrosomes help organize microtubule assembly, they are not essential for the formation of the spindle apparatus, since they are absent from plants, and are not absolutely required for animal cell mitosis.
Prometaphase
At the beginning of prometaphase in animal cells, phosphorylation of nuclear lamins causes the nuclear envelope to disintegrate into small membrane vesicles. As this happens, microtubules invade the nuclear space. This is called open mitosis, and it occurs in some multicellular organisms. Fungi and some protists, such as algae or trichomonads, undergo a variation called closed mitosis, where the spindle forms inside the nucleus, or the microtubules penetrate the intact nuclear envelope.
In late prometaphase, kinetochore microtubules begin to search for and attach to chromosomal kinetochores. A kinetochore is a proteinaceous microtubule-binding structure that forms on the chromosomal centromere during late prophase. A number of polar microtubules find and interact with corresponding polar microtubules from the opposite centrosome to form the mitotic spindle. Although the kinetochore structure and function are not fully understood, it is known that it contains some form of molecular motor. When a microtubule connects with the kinetochore, the motor activates, using energy from ATP to "crawl" up the tube toward the originating centrosome. This motor activity, coupled with polymerisation and depolymerisation of microtubules, provides the pulling force necessary to later separate the chromosome's two chromatids.
Metaphase
After the microtubules have located and attached to the kinetochores in prometaphase, the two centrosomes begin pulling the chromosomes towards opposite ends of the cell. The resulting tension causes the chromosomes to align along the metaphase plate at the equatorial plane, an imaginary line that is centrally located between the two centrosomes (at approximately the midline of the cell). To ensure equitable distribution of chromosomes at the end of mitosis, the metaphase checkpoint guarantees that kinetochores are properly attached to the mitotic spindle and that the chromosomes are aligned along the metaphase plate. If the cell successfully passes through the metaphase checkpoint, it proceeds to anaphase.
Anaphase
During anaphase A, the cohesins that bind sister chromatids together are cleaved, forming two identical daughter chromosomes. Shortening of the kinetochore microtubules pulls the newly formed daughter chromosomes to opposite ends of the cell. During anaphase B, polar microtubules push against each other, causing the cell to elongate. In late anaphase, chromosomes also reach their overall maximal condensation level, to help chromosome segregation and the re-formation of the nucleus. In most animal cells, anaphase A precedes anaphase B, but some vertebrate egg cells demonstrate the opposite order of events.
Telophase
Telophase (from the Greek word τελος meaning "end") is a reversal of prophase and prometaphase events. At telophase, the polar microtubules continue to lengthen, elongating the cell even more. If the nuclear envelope has broken down, a new nuclear envelope forms using the membrane vesicles of the parent cell's old nuclear envelope. The new envelope forms around each set of separated daughter chromosomes (though the membrane does not enclose the centrosomes), and the nucleolus reappears.
Both sets of chromosomes, now surrounded by new nuclear membrane, begin to "relax" or decondense. Mitosis is complete. Each daughter nucleus has an identical set of chromosomes. Cell division may or may not occur at this time, depending on the organism.
Cytokinesis
Cytokinesis is not a phase of mitosis, but rather a separate process necessary for completing cell division. In animal cells, a cleavage furrow (pinch) containing a contractile ring develops where the metaphase plate used to be, pinching off the separated nuclei. In both animal and plant cells, cell division is also driven by vesicles derived from the Golgi apparatus, which move along microtubules to the middle of the cell. In plants, this structure coalesces into a cell plate at the center of the phragmoplast and develops into a cell wall, separating the two nuclei. The phragmoplast is a microtubule structure typical of higher plants, whereas some green algae use a phycoplast microtubule array during cytokinesis. Each daughter cell has a complete copy of the genome of its parent cell. The end of cytokinesis marks the end of the M phase.
There are many cells in which mitosis and cytokinesis occur separately, forming single cells with multiple nuclei. The most notable occurrence of this is among the fungi, slime molds, and coenocytic algae, but the phenomenon is found in various other organisms. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development.
Function
The function, or significance, of mitosis is the maintenance of the chromosomal set; each formed cell receives chromosomes that are alike in composition and equal in number to the chromosomes of the parent cell. Mitosis occurs in the following circumstances:
Development and growth: The number of cells within an organism increases by mitosis. This is the basis of the development of a multicellular body from a single cell, i.e. the zygote, and also the basis of the growth of a multicellular body.
Cell replacement: In some parts of the body, e.g. skin and digestive tract, cells are constantly sloughed off and replaced by new ones. New cells are formed by mitosis and so are exact copies of the cells being replaced. In like manner, red blood cells have a short lifespan (about four months) and new RBCs are formed by mitosis.
Regeneration: Some organisms can regenerate body parts. The production of new cells in such instances is achieved by mitosis. For example, starfish regenerate lost arms through mitosis.
Asexual reproduction: Some organisms produce genetically similar offspring through asexual reproduction. For example, the hydra reproduces asexually by budding. The cells at the surface of the hydra undergo mitosis and form a mass called a bud. Mitosis continues in the cells of the bud, and this grows into a new individual. The same division happens during asexual reproduction or vegetative propagation in plants.
Variations
Forms of mitosis
The mitosis process in the cells of eukaryotic organisms follows a similar pattern, but with variations in three main details. "Closed" and "open" mitosis can be distinguished on the basis of the nuclear envelope remaining intact or breaking down. An intermediate form with partial degradation of the nuclear envelope is called "semiopen" mitosis.
With respect to the symmetry of the spindle apparatus during metaphase, an approximately axially symmetric (centered) shape is called "orthomitosis", distinguished from the eccentric spindles of "pleuromitosis", in which the mitotic apparatus has bilateral symmetry. Finally, a third criterion is the location of the central spindle in the case of closed pleuromitosis: "extranuclear" (spindle located in the cytoplasm) or "intranuclear" (in the nucleus).
Nuclear division takes place only in cells of organisms of the eukaryotic domain, as bacteria and archaea have no nucleus and undergo a different type of division. Within each of the eukaryotic supergroups, mitosis of the open form can be found, as well as closed mitosis, except for the unicellular Excavata, which show exclusively closed mitosis. The occurrence of the forms of mitosis in eukaryotes is as follows:
Closed intranuclear pleuromitosis is typical of Foraminifera, some Prasinomonadida, some Kinetoplastida, the Oxymonadida, the Haplosporidia, many fungi (chytrids, oomycetes, zygomycetes, ascomycetes), and some Radiolaria (Spumellaria and Acantharia); it seems to be the most primitive type.
Closed extranuclear pleuromitosis occurs in Trichomonadida and Dinoflagellata.
Closed orthomitosis is found among diatoms, ciliates, some Microsporidia, unicellular yeasts and some multicellular fungi.
Semiopen pleuromitosis is typical of most Apicomplexa.
Semiopen orthomitosis occurs with different variants in some amoebae (Lobosa) and some green flagellates (e.g., Raphidophyta or Volvox).
Open orthomitosis is typical in mammals and other Metazoa, and in land plants, but it also occurs in some protists.
Errors and other variations
Errors can occur during mitosis, especially during early embryonic development in humans. Each step of mitosis is normally monitored by checkpoints that control its outcome, but mistakes occasionally happen. Mitotic errors can create aneuploid cells that have too few or too many copies of one or more chromosomes, a condition associated with cancer. Early human embryos, cancer cells, and infected or intoxicated cells can also suffer from pathological division into three or more daughter cells (tripolar or multipolar mitosis), resulting in severe errors in their chromosomal complements.
In nondisjunction, sister chromatids fail to separate during anaphase. One daughter cell receives both sister chromatids from the nondisjoining chromosome and the other cell receives none. As a result, the former cell gets three copies of the chromosome, a condition known as trisomy, and the latter will have only one copy, a condition known as monosomy. On occasion, when cells experience nondisjunction, they fail to complete cytokinesis and retain both nuclei in one cell, resulting in binucleated cells.
Anaphase lag occurs when the movement of one chromatid is impeded during anaphase. This may be caused by a failure of the mitotic spindle to properly attach to the chromosome. The lagging chromatid is excluded from both nuclei and is lost. Therefore, one of the daughter cells will be monosomic for that chromosome.
Endoreduplication (or endoreplication) occurs when chromosomes duplicate but the cell does not subsequently divide. This results in polyploid cells or, if the chromosomes duplicate repeatedly, polytene chromosomes. Endoreduplication is found in many species and appears to be a normal part of development.
Endomitosis is a variant of endoreduplication in which cells replicate their chromosomes during S phase and enter, but prematurely terminate, mitosis. Instead of being divided into two new daughter nuclei, the replicated chromosomes are retained within the original nucleus. The cells then re-enter G1 and S phase and replicate their chromosomes again. This may occur multiple times, increasing the chromosome number with each round of replication and endomitosis. Platelet-producing megakaryocytes go through endomitosis during cell differentiation.
Amitosis in ciliates and in animal placental tissues results in a random distribution of parental alleles.
Karyokinesis without cytokinesis gives rise to multinucleated cells called coenocytes.
Diagnostic marker
In histopathology, the mitosis rate (mitotic count or mitotic index) is an important parameter in various types of tissue samples, for diagnosis as well as to further specify the aggressiveness of tumors. For example, the mitotic count is routinely quantified in breast cancer classification. The mitoses must be counted in an area of the highest mitotic activity; visually identifying these areas is difficult in tumors with very high mitotic activity. The detection of atypical forms of mitosis can also be used as both a diagnostic and a prognostic marker. For example, lag-type mitosis (non-attached condensed chromatin in the area of the mitotic figure) indicates high-risk human papillomavirus infection-related cervical cancer. To improve the reproducibility and accuracy of the mitotic count, automated image analysis using deep learning-based algorithms has been proposed; however, further research is needed before such algorithms can be used in routine diagnostics.
Related cell processes
Cell rounding
In animal tissue, most cells round up to a near-spherical shape during mitosis. In epithelia and epidermis, an efficient rounding process is correlated with proper mitotic spindle alignment and subsequent correct positioning of daughter cells. Moreover, researchers have found that if rounding is heavily suppressed it may result in spindle defects, primarily pole splitting and failure to efficiently capture chromosomes. Therefore, mitotic cell rounding is thought to play a protective role in ensuring accurate mitosis. Rounding forces are driven by the reorganization of F-actin and myosin (actomyosin) into a contractile homogeneous cell cortex that (1) rigidifies the cell periphery and (2) facilitates generation of intracellular hydrostatic pressure (up to 10-fold higher than in interphase). The generation of intracellular pressure is particularly critical under confinement, as in a tissue scenario, where outward forces must be produced to round up against surrounding cells and/or the extracellular matrix. Generation of pressure is dependent on formin-mediated F-actin nucleation and Rho kinase (ROCK)-mediated myosin II contraction, both of which are governed upstream by the signaling pathways RhoA and ECT2 through the activity of Cdk1. Due to its importance in mitosis, the molecular components and dynamics of the mitotic actomyosin cortex are an area of active research.
Mitotic recombination
Mitotic cells irradiated with X-rays in the G1 phase of the cell cycle repair recombinogenic DNA damage primarily by recombination between homologous chromosomes. Mitotic cells irradiated in the G2 phase repair such damage preferentially by sister-chromatid recombination.
Mutations in genes encoding enzymes employed in recombination cause cells to have increased sensitivity to being killed by a variety of DNA damaging agents. These findings suggest that mitotic recombination is an adaptation for repairing DNA damage, including damage that is potentially lethal.
Evolution
There are prokaryotic homologs of all the key molecules of eukaryotic mitosis (e.g., actins, tubulins). Being a universal eukaryotic property, mitosis probably arose at the base of the eukaryotic tree. As mitosis is less complex than meiosis, meiosis may have arisen after mitosis. However, sexual reproduction involving meiosis is also a primitive characteristic of eukaryotes, so meiosis and mitosis may both have evolved, in parallel, from ancestral prokaryotic processes. While in bacterial cell division, after duplication of DNA, two circular chromosomes are attached to a special region of the cell membrane, eukaryotic mitosis is usually characterized by the presence of many linear chromosomes, whose kinetochores attach to the microtubules of the spindle. In relation to the forms of mitosis, closed intranuclear pleuromitosis seems to be the most primitive type, as it is more similar to bacterial division.
Gallery
Mitotic cells can be visualized microscopically by staining them with fluorescent antibodies and dyes.
Biology and health sciences
Cellular division
null
20374
https://en.wikipedia.org/wiki/Metabolism
Metabolism
Metabolism (from Greek metabolē, "change") is the set of life-sustaining chemical reactions in organisms. The three main functions of metabolism are: the conversion of the energy in food to energy available to run cellular processes; the conversion of food to building blocks of proteins, lipids, nucleic acids, and some carbohydrates; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. The word metabolism can also refer to the sum of all chemical reactions that occur in living organisms, including digestion and the transportation of substances into and between different cells, in which case the above described set of reactions within the cells is called intermediary (or intermediate) metabolism.
Metabolic reactions may be categorized as catabolic (the breaking down of compounds, for example of glucose to pyruvate by cellular respiration) or anabolic (the building up, or synthesis, of compounds such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy.
The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts – they allow a reaction to proceed more rapidly – and they also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
The metabolic system of a particular organism determines which substances it will find nutritious and which poisonous. For example, some prokaryotes use hydrogen sulfide as a nutrient, yet this gas is poisonous to animals. The basal metabolic rate of an organism is the measure of the amount of energy consumed by all of these chemical reactions.
A striking feature of metabolism is the similarity of the basic metabolic pathways among vastly different species. For example, the set of carboxylic acids best known as the intermediates of the citric acid cycle is present in all known organisms, being found in species as diverse as the unicellular bacterium Escherichia coli and huge multicellular organisms like elephants. These similarities in metabolic pathways are likely due to their early appearance in evolutionary history, and their retention is likely due to their efficacy. In various diseases, such as type II diabetes, metabolic syndrome, and cancer, normal metabolism is disrupted. The metabolism of cancer cells is also different from the metabolism of normal cells, and these differences can be used to find targets for therapeutic intervention in cancer.
Key biochemicals
Most of the structures that make up animals, plants and microbes are made from four basic classes of molecules: amino acids, carbohydrates, nucleic acids and lipids (often called fats). As these molecules are vital for life, metabolic reactions either focus on making these molecules during the construction of cells and tissues, or on breaking them down and using them to obtain energy, by their digestion. These biochemicals can be joined to make polymers such as DNA and proteins, essential macromolecules of life.
Amino acids and proteins
Proteins are made of amino acids arranged in a linear chain joined by peptide bonds. Many proteins are enzymes that catalyze the chemical reactions of metabolism. Other proteins have structural or mechanical functions, such as those that form the cytoskeleton, a system of scaffolding that maintains the cell shape. Proteins are also important in cell signaling, immune responses, cell adhesion, active transport across membranes, and the cell cycle. Amino acids also contribute to cellular energy metabolism by providing a carbon source for entry into the citric acid cycle (tricarboxylic acid cycle), especially when a primary source of energy, such as glucose, is scarce, or when cells undergo metabolic stress.
Lipids
Lipids are the most diverse group of biochemicals. Their main structural use is as part of internal and external biological membranes, such as the cell membrane; their chemical energy can also be used. Lipids contain a long, non-polar hydrocarbon chain with a small polar region containing oxygen. Lipids are usually defined as hydrophobic or amphipathic biological molecules that will dissolve in organic solvents such as ethanol, benzene or chloroform. The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acids by ester linkages is called a triacylglyceride. Several variations of the basic structure exist, including backbones such as sphingosine in sphingomyelin, and hydrophilic groups such as phosphate in phospholipids. Steroids such as the sterols are another major class of lipids.
Carbohydrates
Carbohydrates are aldehydes or ketones, with many hydroxyl groups attached, that can exist as straight chains or rings. Carbohydrates are the most abundant biological molecules and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals). The basic carbohydrate units are called monosaccharides and include galactose, fructose, and, most importantly, glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways.
Nucleotides
The two nucleic acids, DNA and RNA, are polymers of nucleotides. Each nucleotide is composed of a phosphate attached to a ribose or deoxyribose sugar group, which is in turn attached to a nitrogenous base. Nucleic acids are critical for the storage and use of genetic information and its interpretation through the processes of transcription and protein biosynthesis. This information is protected by DNA repair mechanisms and propagated through DNA replication. Many viruses have an RNA genome, such as HIV, which uses reverse transcription to create a DNA template from its viral RNA genome. RNA in ribozymes such as spliceosomes and ribosomes is similar to enzymes, as it can catalyze chemical reactions. Individual nucleosides are made by attaching a nucleobase to a ribose sugar. These bases are heterocyclic rings containing nitrogen, classified as purines or pyrimidines. Nucleotides also act as coenzymes in metabolic group-transfer reactions.
Coenzymes
Metabolism involves a vast array of chemical reactions, but most fall under a few basic types that involve the transfer of functional groups of atoms and their bonds within molecules. This common chemistry allows cells to use a small set of metabolic intermediates to carry chemical groups between different reactions. These group-transfer intermediates are called coenzymes.
Each class of group-transfer reactions is carried out by a particular coenzyme, which is the substrate for a set of enzymes that produce it, and a set of enzymes that consume it. These coenzymes are therefore continuously made, consumed and then recycled. One central coenzyme is adenosine triphosphate (ATP), the energy currency of cells. This nucleotide is used to transfer chemical energy between different chemical reactions. There is only a small amount of ATP in cells, but as it is continuously regenerated, the human body can use about its own weight in ATP per day. ATP acts as a bridge between catabolism and anabolism: catabolism breaks down molecules and anabolism puts them together, so catabolic reactions generate ATP and anabolic reactions consume it. It also serves as a carrier of phosphate groups in phosphorylation reactions.

A vitamin is an organic compound needed in small quantities that cannot be made in cells. In human nutrition, most vitamins function as coenzymes after modification; for example, all water-soluble vitamins are phosphorylated or coupled to nucleotides when they are used in cells. Nicotinamide adenine dinucleotide (NAD+), a derivative of vitamin B3 (niacin), is an important coenzyme that acts as a hydrogen acceptor. Hundreds of separate types of dehydrogenases remove electrons from their substrates and reduce NAD+ to NADH. This reduced form of the coenzyme is then a substrate for any of the reductases in the cell that need to transfer hydrogen atoms to their substrates. Nicotinamide adenine dinucleotide exists in two related forms in the cell, NADH and NADPH. The NAD+/NADH form is more important in catabolic reactions, while NADP+/NADPH is used in anabolic reactions.

Minerals and cofactors
Inorganic elements play critical roles in metabolism; some are abundant (e.g. sodium and potassium) while others function at minute concentrations. About 99% of a human's body weight is made up of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. Organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen; most of the oxygen and hydrogen is present as water.

The abundant inorganic elements act as electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate and the organic ion bicarbonate. The maintenance of precise ion gradients across cell membranes maintains osmotic pressure and pH. Ions are also critical for nerve and muscle function, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cell's fluid, the cytosol. Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules.

Transition metals are usually present as trace elements in organisms, with zinc and iron being the most abundant of these. Metal cofactors are bound tightly to specific sites in proteins; although enzyme cofactors can be modified during catalysis, they always return to their original state by the end of the reaction catalyzed. Metal micronutrients are taken up into organisms by specific transporters and bind to storage proteins such as ferritin or metallothionein when not in use.

Catabolism
Catabolism is the set of metabolic processes that break down large molecules.
These include breaking down and oxidizing food molecules. The purpose of the catabolic reactions is to provide the energy and components needed by anabolic reactions, which build molecules. The exact nature of these catabolic reactions differs from organism to organism, and organisms can be classified based on their sources of energy, hydrogen, and carbon (their primary nutritional groups). Organic molecules are used as a source of hydrogen atoms or electrons by organotrophs, while lithotrophs use inorganic substrates. Whereas phototrophs convert sunlight to chemical energy, chemotrophs depend on redox reactions that involve the transfer of electrons from reduced donor molecules such as organic molecules, hydrogen, hydrogen sulfide or ferrous ions to oxygen, nitrate or sulfate. In animals, these reactions involve complex organic molecules that are broken down to simpler molecules, such as carbon dioxide and water. Photosynthetic organisms, such as plants and cyanobacteria, use similar electron-transfer reactions to store energy absorbed from sunlight.

The most common set of catabolic reactions in animals can be separated into three main stages. In the first stage, large organic molecules, such as proteins, polysaccharides or lipids, are digested into their smaller components outside cells. Next, these smaller molecules are taken up by cells and converted to smaller molecules, usually acetyl coenzyme A (acetyl-CoA), which releases some energy. Finally, the acetyl group on acetyl-CoA is oxidized to water and carbon dioxide in the citric acid cycle and electron transport chain, releasing more energy while reducing the coenzyme nicotinamide adenine dinucleotide (NAD+) to NADH.

Digestion
Macromolecules cannot be directly processed by cells; they must be broken into smaller units before they can be used in cell metabolism. Different classes of enzymes are used to digest these polymers. These digestive enzymes include proteases that digest proteins into amino acids, as well as glycoside hydrolases that digest polysaccharides into simple sugars known as monosaccharides. Microbes simply secrete digestive enzymes into their surroundings, while animals secrete these enzymes only from specialized cells in their guts, including the stomach and pancreas, and in salivary glands. The amino acids or sugars released by these extracellular enzymes are then pumped into cells by active transport proteins.

Energy from organic compounds
Carbohydrate catabolism is the breakdown of carbohydrates into smaller units. Carbohydrates are usually taken into cells after they have been digested into monosaccharides such as glucose and fructose. Once inside, the major route of breakdown is glycolysis, in which glucose is converted into pyruvate. This process generates the energy-conveying molecule NADH from NAD+, and generates ATP from ADP for use in powering many processes within the cell. Pyruvate is an intermediate in several metabolic pathways, but the majority is converted to acetyl-CoA and fed into the citric acid cycle, which enables more ATP production by means of oxidative phosphorylation. This oxidation consumes molecular oxygen and releases water and the waste product carbon dioxide.
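A rough accounting of the ATP yield from this aerobic route can be sketched in a few lines of Python. The per-coenzyme yields below are hedged consensus estimates, not exact values; actual yields vary with the shuttle systems used, and modern textbooks quote roughly 30-32 ATP per glucose rather than the classical 36-38:

P_O_NADH = 2.5   # ATP per NADH oxidized (consensus estimate, not exact)
P_O_FADH2 = 1.5  # ATP per FADH2 oxidized

def atp_per_glucose():
    """Textbook-style tally of ATP from fully oxidizing one glucose."""
    atp = 2.0                # net substrate-level ATP from glycolysis
    atp += 2.0               # GTP/ATP from two turns of the citric acid cycle
    nadh = 2 + 2 + 6         # glycolysis + pyruvate dehydrogenase + citric acid cycle
    fadh2 = 2                # citric acid cycle
    atp += nadh * P_O_NADH + fadh2 * P_O_FADH2
    return atp

print(atp_per_glucose())     # -> 32.0 under these assumptions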
When oxygen is lacking, or when pyruvate is temporarily produced faster than it can be consumed by the citric acid cycle (as in intense muscular exertion), pyruvate is converted to lactate by the enzyme lactate dehydrogenase, a process that also oxidizes NADH back to NAD+ for re-use in further glycolysis, allowing energy production to continue. The lactate is later converted back to pyruvate for ATP production where energy is needed, or back to glucose in the Cori cycle. An alternative route for glucose breakdown is the pentose phosphate pathway, which produces less energy but supports anabolism (biomolecule synthesis). This pathway reduces the coenzyme NADP+ to NADPH and produces pentose compounds such as ribose 5-phosphate for the synthesis of many biomolecules such as nucleotides and aromatic amino acids.

Fats are catabolized by hydrolysis to free fatty acids and glycerol. The glycerol enters glycolysis, and the fatty acids are broken down by beta oxidation to release acetyl-CoA, which is then fed into the citric acid cycle. Fatty acids release more energy upon oxidation than carbohydrates. Steroids are also broken down by some bacteria in a process similar to beta oxidation, and this breakdown process involves the release of significant amounts of acetyl-CoA, propionyl-CoA, and pyruvate, which can all be used by the cell for energy. Mycobacterium tuberculosis can also grow on the lipid cholesterol as a sole source of carbon, and genes involved in the cholesterol-use pathway(s) have been validated as important during various stages of its infection lifecycle.

Amino acids are either used to synthesize proteins and other biomolecules, or oxidized to urea and carbon dioxide to produce energy. The oxidation pathway starts with the removal of the amino group by a transaminase. The amino group is fed into the urea cycle, leaving a deaminated carbon skeleton in the form of a keto acid. Several of these keto acids are intermediates in the citric acid cycle, for example α-ketoglutarate formed by deamination of glutamate. The glucogenic amino acids can also be converted into glucose through gluconeogenesis.

Energy transformations
Oxidative phosphorylation
In oxidative phosphorylation, the electrons removed from organic molecules in pathways such as the citric acid cycle are transferred to oxygen, and the energy released is used to make ATP. This is done in eukaryotes by a series of proteins in the membranes of mitochondria called the electron transport chain. In prokaryotes, these proteins are found in the cell's inner membrane. These proteins use the energy from reduced molecules like NADH to pump protons across a membrane. Pumping protons out of the mitochondria creates a proton concentration difference across the membrane and generates an electrochemical gradient. This proton-motive force drives protons back into the mitochondrion through the base of an enzyme called ATP synthase. The flow of protons makes the stalk subunit rotate, causing the active site of the synthase domain to change shape and phosphorylate adenosine diphosphate, turning it into ATP.

Energy from inorganic compounds
Chemolithotrophy is a type of metabolism found in prokaryotes where energy is obtained from the oxidation of inorganic compounds. These organisms can use hydrogen, reduced sulfur compounds (such as sulfide, hydrogen sulfide and thiosulfate), ferrous iron (Fe(II)) or ammonia as sources of reducing power, and they gain energy from the oxidation of these compounds.
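The energy such lithotrophs (and respiring organisms generally) can conserve is set by the redox-potential gap between electron donor and acceptor. The standard relation, stated here as a hedged sketch:

ΔG°′ = −n F ΔE°′

where n is the number of electrons transferred, F ≈ 96.5 kJ per mol per volt is the Faraday constant, and ΔE°′ is the difference in standard redox potential. The larger the potential gap between a donor such as hydrogen or ferrous iron and an acceptor such as oxygen, the more energy is available to the organism.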
These microbial processes are important in global biogeochemical cycles such as acetogenesis, nitrification and denitrification, and are critical for soil fertility.

Energy from light
The energy in sunlight is captured by plants, cyanobacteria, purple bacteria, green sulfur bacteria and some protists. This process is often coupled to the conversion of carbon dioxide into organic compounds, as part of photosynthesis, which is discussed below. The energy-capture and carbon-fixation systems can, however, operate separately in prokaryotes, as purple bacteria and green sulfur bacteria can use sunlight as a source of energy while switching between carbon fixation and the fermentation of organic compounds.

In many organisms, the capture of solar energy is similar in principle to oxidative phosphorylation, as it involves the storage of energy as a proton concentration gradient. This proton-motive force then drives ATP synthesis. The electrons needed to drive this electron transport chain come from light-gathering proteins called photosynthetic reaction centres. Reaction centres are classified into two types depending on the nature of the photosynthetic pigment present, with most photosynthetic bacteria having only one type, while plants and cyanobacteria have two. In plants, algae, and cyanobacteria, photosystem II uses light energy to remove electrons from water, releasing oxygen as a waste product. The electrons then flow to the cytochrome b6f complex, which uses their energy to pump protons across the thylakoid membrane in the chloroplast. These protons move back through the membrane as they drive the ATP synthase, as before. The electrons then flow through photosystem I and can then be used to reduce the coenzyme NADP+.

Anabolism
Anabolism is the set of constructive metabolic processes in which the energy released by catabolism is used to synthesize complex molecules. In general, the complex molecules that make up cellular structures are constructed step-by-step from smaller and simpler precursors. Anabolism involves three basic stages: first, the production of precursors such as amino acids, monosaccharides, isoprenoids and nucleotides; second, their activation into reactive forms using energy from ATP; and third, the assembly of these precursors into complex molecules such as proteins, polysaccharides, lipids and nucleic acids.

Anabolism differs between organisms according to the source of the molecules constructed in their cells. Autotrophs such as plants can construct the complex organic molecules in their cells, such as polysaccharides and proteins, from simple molecules like carbon dioxide and water. Heterotrophs, on the other hand, require a source of more complex substances, such as monosaccharides and amino acids, to produce these complex molecules. Organisms can be further classified by the ultimate source of their energy: photoautotrophs and photoheterotrophs obtain energy from light, whereas chemoautotrophs and chemoheterotrophs obtain energy from oxidation reactions.

Carbon fixation
Photosynthesis is the synthesis of carbohydrates from sunlight and carbon dioxide (CO2). In plants, cyanobacteria and algae, oxygenic photosynthesis splits water, with oxygen produced as a waste product. This process uses the ATP and NADPH produced by the photosynthetic reaction centres, as described above, to convert CO2 into glycerate 3-phosphate, which can then be converted into glucose. This carbon-fixation reaction is carried out by the enzyme RuBisCO as part of the Calvin–Benson cycle.
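The net stoichiometry of oxygenic photosynthesis can be summarised in a single equation (with glucose shown as the end product for simplicity, although glycerate 3-phosphate is the immediate product of fixation):

6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2

The light energy enters indirectly, as the ATP and NADPH generated by the reaction centres described above.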
Three types of photosynthesis occur in plants: C3 carbon fixation, C4 carbon fixation and CAM photosynthesis. These differ by the route that carbon dioxide takes to the Calvin cycle, with C3 plants fixing CO2 directly, while C4 and CAM photosynthesis incorporate the CO2 into other compounds first, as adaptations to deal with intense sunlight and dry conditions. In photosynthetic prokaryotes the mechanisms of carbon fixation are more diverse. Here, carbon dioxide can be fixed by the Calvin–Benson cycle, a reversed citric acid cycle, or the carboxylation of acetyl-CoA. Prokaryotic chemoautotrophs also fix CO2 through the Calvin–Benson cycle, but use energy from inorganic compounds to drive the reaction.

Carbohydrates and glycans
In carbohydrate anabolism, simple organic acids can be converted into monosaccharides such as glucose and then used to assemble polysaccharides such as starch. The generation of glucose from compounds like pyruvate, lactate, glycerol, glycerate 3-phosphate and amino acids is called gluconeogenesis. Gluconeogenesis converts pyruvate to glucose-6-phosphate through a series of intermediates, many of which are shared with glycolysis. However, this pathway is not simply glycolysis run in reverse, as several steps are catalyzed by non-glycolytic enzymes. This is important because it allows the formation and breakdown of glucose to be regulated separately, and prevents both pathways from running simultaneously in a futile cycle.

Although fat is a common way of storing energy, in vertebrates such as humans the fatty acids in these stores cannot be converted to glucose through gluconeogenesis, as these organisms cannot convert acetyl-CoA into pyruvate: plants have the necessary enzymatic machinery, but animals do not. As a result, after long-term starvation, vertebrates need to produce ketone bodies from fatty acids to replace glucose in tissues such as the brain that cannot metabolize fatty acids. In other organisms such as plants and bacteria, this metabolic problem is solved using the glyoxylate cycle, which bypasses the decarboxylation step in the citric acid cycle and allows the transformation of acetyl-CoA to oxaloacetate, which can be used for the production of glucose. Other than in fat, glucose is stored in most tissues as glycogen, produced through glycogenesis; this reserve is commonly used to maintain the glucose level in blood.

Polysaccharides and glycans are made by the sequential addition of monosaccharides by glycosyltransferases, from a reactive sugar-phosphate donor such as uridine diphosphate glucose (UDP-Glc) to an acceptor hydroxyl group on the growing polysaccharide. As any of the hydroxyl groups on the ring of the substrate can be acceptors, the polysaccharides produced can have straight or branched structures. The polysaccharides produced can have structural or metabolic functions themselves, or be transferred to lipids and proteins by the enzymes oligosaccharyltransferases.

Fatty acids, isoprenoids and sterols
Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group.
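This repeating cycle has a simple textbook stoichiometry, which a short sketch can make explicit. The function and comments below are illustrative accounting under standard assumptions (each elongation round adds two carbons from malonyl-CoA, whose formation costs one ATP, and spends two NADPH in the reduction steps), not a simulation of the synthase itself:

def synthesis_cost(chain_length):
    """Approximate ATP and NADPH cost of building a saturated fatty acid."""
    assert chain_length % 2 == 0 and chain_length >= 4
    rounds = (chain_length - 2) // 2   # elongations after the acetyl primer
    return {
        "acetyl_CoA": 1 + rounds,      # primer plus one per malonyl-CoA
        "ATP": rounds,                 # acetyl-CoA -> malonyl-CoA carboxylation
        "NADPH": 2 * rounds,           # two reduction steps per elongation round
    }

print(synthesis_cost(16))  # palmitate: {'acetyl_CoA': 8, 'ATP': 7, 'NADPH': 14}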
The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional type I protein, while in plant plastids and bacteria separate type II enzymes perform each step in the pathway.

Terpenes and isoprenoids are a large class of lipids that include the carotenoids and form the largest class of plant natural products. These compounds are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is sterol biosynthesis. Here, the isoprene units are joined to make squalene, which is then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other sterols such as cholesterol and ergosterol.

Proteins
Organisms vary in their ability to synthesize the 20 common amino acids. Most bacteria and plants can synthesize all twenty, but mammals can synthesize only the eleven nonessential amino acids, so the nine essential amino acids must be obtained from food. Some simple parasites, such as the bacterium Mycoplasma pneumoniae, lack all amino acid synthesis and take their amino acids directly from their hosts. All amino acids are synthesized from intermediates in glycolysis, the citric acid cycle, or the pentose phosphate pathway. Nitrogen is provided by glutamate and glutamine. Nonessential amino acid synthesis depends on the formation of the appropriate alpha-keto acid, which is then transaminated to form an amino acid.

Amino acids are made into proteins by being joined in a chain of peptide bonds. Each different protein has a unique sequence of amino acid residues: this is its primary structure. Just as the letters of the alphabet can be combined to form an almost endless variety of words, amino acids can be linked in varying sequences to form a huge variety of proteins. Proteins are made from amino acids that have been activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA precursor is produced in an ATP-dependent reaction carried out by an aminoacyl-tRNA synthetase. The aminoacyl-tRNA is then a substrate for the ribosome, which joins the amino acid onto the elongating protein chain, using the sequence information in a messenger RNA.

Nucleotide synthesis and salvage
Nucleotides are made from amino acids, carbon dioxide and formic acid in pathways that require large amounts of metabolic energy. Consequently, most organisms have efficient systems to salvage preformed nucleotides. Purines are synthesized as nucleosides (bases attached to ribose). Both adenine and guanine are made from the precursor nucleoside inosine monophosphate, which is synthesized using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as formate transferred from the coenzyme tetrahydrofolate. Pyrimidines, on the other hand, are synthesized from the base orotate, which is formed from glutamine and aspartate.

Xenobiotics and redox metabolism
All organisms are constantly exposed to compounds that they cannot use as foods and that would be harmful if they accumulated in cells, as they have no metabolic function.
These potentially damaging compounds are called xenobiotics. Xenobiotics such as synthetic drugs, natural poisons and antibiotics are detoxified by a set of xenobiotic-metabolizing enzymes. In humans, these include cytochrome P450 oxidases, UDP-glucuronosyltransferases, and glutathione S-transferases. This system of enzymes acts in three stages: it first oxidizes the xenobiotic (phase I) and then conjugates water-soluble groups onto the molecule (phase II). The modified water-soluble xenobiotic can then be pumped out of cells and, in multicellular organisms, may be further metabolized before being excreted (phase III). In ecology, these reactions are particularly important in the microbial biodegradation of pollutants and the bioremediation of contaminated land and oil spills. Many of these microbial reactions are shared with multicellular organisms, but owing to their far greater diversity, microbes are able to deal with a much wider range of xenobiotics than multicellular organisms, and can degrade even persistent organic pollutants such as organochloride compounds.

A related problem for aerobic organisms is oxidative stress. Here, processes including oxidative phosphorylation and the formation of disulfide bonds during protein folding produce reactive oxygen species such as hydrogen peroxide. These damaging oxidants are removed by antioxidant metabolites such as glutathione and by enzymes such as catalases and peroxidases.

Thermodynamics of living organisms
Living organisms must obey the laws of thermodynamics, which describe the transfer of heat and work. The second law of thermodynamics states that in any isolated system, the amount of entropy (disorder) cannot decrease. Although the amazing complexity of living organisms appears to contradict this law, life is possible because all organisms are open systems that exchange matter and energy with their surroundings. Living systems are not in equilibrium; rather, they are dissipative systems that maintain their state of high complexity by causing a larger increase in the entropy of their environments. The metabolism of a cell achieves this by coupling the spontaneous processes of catabolism to the non-spontaneous processes of anabolism. In thermodynamic terms, metabolism maintains order by creating disorder.

Regulation and control
As the environments of most organisms are constantly changing, the reactions of metabolism must be finely regulated to maintain a constant set of conditions within cells, a condition called homeostasis. Metabolic regulation also allows organisms to respond to signals and interact actively with their environments. Two closely linked concepts are important for understanding how metabolic pathways are controlled. First, the regulation of an enzyme in a pathway is how its activity is increased and decreased in response to signals. Second, the control exerted by this enzyme is the effect that these changes in its activity have on the overall rate of the pathway (the flux through the pathway). For example, an enzyme may show large changes in activity (i.e. it is highly regulated), but if these changes have little effect on the flux of a metabolic pathway, then this enzyme is not involved in the control of the pathway.

There are multiple levels of metabolic regulation. In intrinsic regulation, the metabolic pathway self-regulates to respond to changes in the levels of substrates or products; for example, a decrease in the amount of product can increase the flux through the pathway to compensate.
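A toy model makes this intrinsic self-regulation concrete: below, a single enzyme-catalyzed step follows Michaelis–Menten kinetics with competitive inhibition by the pathway's product, so flux rises as the product is drained away. The names and constants are purely illustrative, not measured values:

def rate(S, P, Vmax=1.0, Km=0.5, Ki=0.2):
    """Michaelis-Menten rate with competitive inhibition by product P."""
    return Vmax * S / (Km * (1 + P / Ki) + S)

print(rate(S=1.0, P=1.0))   # product abundant -> flux low   (0.25)
print(rate(S=1.0, P=0.1))   # product drained  -> flux rises (~0.57)

When the product concentration falls, the inhibition term shrinks and the step speeds up, which is exactly the compensating increase in flux described above.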
This type of regulation often involves allosteric regulation of the activities of multiple enzymes in the pathway. Extrinsic control involves a cell in a multicellular organism changing its metabolism in response to signals from other cells. These signals are usually in the form of water-soluble messengers such as hormones and growth factors, and are detected by specific receptors on the cell surface. These signals are then transmitted inside the cell by second-messenger systems that often involve the phosphorylation of proteins.

A well-understood example of extrinsic control is the regulation of glucose metabolism by the hormone insulin. Insulin is produced in response to rises in blood glucose levels. Binding of the hormone to insulin receptors on cells then activates a cascade of protein kinases that cause the cells to take up glucose and convert it into storage molecules such as fatty acids and glycogen. The metabolism of glycogen is controlled by the activity of phosphorylase, the enzyme that breaks down glycogen, and glycogen synthase, the enzyme that makes it. These enzymes are regulated in a reciprocal fashion, with phosphorylation inhibiting glycogen synthase but activating phosphorylase. Insulin causes glycogen synthesis by activating protein phosphatases and producing a decrease in the phosphorylation of these enzymes.

Evolution
The central pathways of metabolism described above, such as glycolysis and the citric acid cycle, are present in all three domains of living things and were present in the last universal common ancestor. This universal ancestral cell was prokaryotic and probably a methanogen that had extensive amino acid, nucleotide, carbohydrate and lipid metabolism. The retention of these ancient pathways during later evolution may be the result of these reactions having been an optimal solution to their particular metabolic problems, with pathways such as glycolysis and the citric acid cycle producing their end products highly efficiently and in a minimal number of steps. The first pathways of enzyme-based metabolism may have been parts of purine nucleotide metabolism, while previous metabolic pathways were a part of the ancient RNA world.

Many models have been proposed to describe the mechanisms by which novel metabolic pathways evolve. These include the sequential addition of novel enzymes to a short ancestral pathway, the duplication and then divergence of entire pathways, as well as the recruitment of pre-existing enzymes and their assembly into a novel reaction pathway. The relative importance of these mechanisms is unclear, but genomic studies have shown that enzymes in a pathway are likely to have a shared ancestry, suggesting that many pathways have evolved in a step-by-step fashion with novel functions created from pre-existing steps in the pathway. An alternative model comes from studies that trace the evolution of protein structures in metabolic networks; these suggest that enzymes are pervasively recruited, with enzymes borrowed to perform similar functions in different metabolic pathways (as is evident in the MANET database). These recruitment processes result in an evolutionary enzymatic mosaic. A third possibility is that some parts of metabolism might exist as "modules" that can be reused in different pathways and perform similar functions on different molecules. As well as the evolution of new metabolic pathways, evolution can also cause the loss of metabolic functions.
For example, in some parasites metabolic processes that are not essential for survival are lost, and preformed amino acids, nucleotides and carbohydrates may instead be scavenged from the host. Similar reduced metabolic capabilities are seen in endosymbiotic organisms.

Investigation and manipulation
Classically, metabolism is studied by a reductionist approach that focuses on a single metabolic pathway. Particularly valuable is the use of radioactive tracers at the whole-organism, tissue and cellular levels, which define the paths from precursors to final products by identifying radioactively labelled intermediates and products. The enzymes that catalyze these chemical reactions can then be purified, and their kinetics and responses to inhibitors investigated. A parallel approach is to identify the small molecules in a cell or tissue; the complete set of these molecules is called the metabolome. Overall, these studies give a good view of the structure and function of simple metabolic pathways, but are inadequate when applied to more complex systems such as the metabolism of a complete cell.

An idea of the complexity of the metabolic networks in cells that contain thousands of different enzymes is given by the fact that a map of the interactions between just 43 proteins and 40 metabolites already forms a dense network, while the sequences of genomes provide lists containing anything up to 26,500 genes. However, it is now possible to use this genomic data to reconstruct complete networks of biochemical reactions and produce more holistic mathematical models that may explain and predict their behavior. These models are especially powerful when used to integrate the pathway and metabolite data obtained through classical methods with data on gene expression from proteomic and DNA microarray studies. Using these techniques, a model of human metabolism has now been produced, which will guide future drug discovery and biochemical research. These models are now used in network analysis, to classify human diseases into groups that share common proteins or metabolites. Bacterial metabolic networks are a striking example of bow-tie organization, an architecture able to take in a wide range of nutrients and produce a large variety of products and complex macromolecules using relatively few intermediate common currencies.

A major technological application of this information is metabolic engineering. Here, organisms such as yeast, plants or bacteria are genetically modified to make them more useful in biotechnology and to aid the production of drugs such as antibiotics, or of industrial chemicals such as 1,3-propanediol and shikimic acid. These genetic modifications usually aim to reduce the amount of energy used to produce the product, increase yields and reduce the production of wastes.

History
The term metabolism is derived from the Ancient Greek word μεταβολή ("metabole", "a change"), which is derived from μεταβάλλειν ("metaballein", "to change").

Greek philosophy
Aristotle's The Parts of Animals sets out enough details of his views on metabolism for an open flow model to be made. He believed that at each stage of the process, materials from food were transformed, with heat being released as the classical element of fire, and residual materials being excreted as urine, bile, or faeces.
Ibn al-Nafis described metabolism in his 1260 AD work Al-Risalah al-Kamiliyyah fil Siera al-Nabawiyyah (The Treatise of Kamil on the Prophet's Biography), which includes the phrase "Both the body and its parts are in a continuous state of dissolution and nourishment, so they are inevitably undergoing permanent change."

Application of the scientific method and modern metabolic theories
The history of the scientific study of metabolism spans several centuries and has moved from examining whole animals in early studies to examining individual metabolic reactions in modern biochemistry. The first controlled experiments in human metabolism were published by Santorio Santorio in 1614 in his book Ars de statica medicina. He described how he weighed himself before and after eating, sleeping, working, sex, fasting, drinking, and excreting. He found that most of the food he took in was lost through what he called "insensible perspiration".

In these early studies, the mechanisms of these metabolic processes had not been identified, and a vital force was thought to animate living tissue. In the 19th century, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that fermentation was catalyzed by substances within the yeast cells he called "ferments". He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." This discovery, along with the publication by Friedrich Wöhler in 1828 of a paper on the chemical synthesis of urea (notable as the first organic compound prepared from wholly inorganic precursors), proved that the organic compounds and chemical reactions found in cells were no different in principle from any other part of chemistry.

It was the discovery of enzymes at the beginning of the 20th century by Eduard Buchner that separated the study of the chemical reactions of metabolism from the biological study of cells, and marked the beginning of biochemistry. The mass of biochemical knowledge grew rapidly throughout the early 20th century. One of the most prolific of these modern biochemists was Hans Krebs, who made huge contributions to the study of metabolism. He discovered the urea cycle and later, working with Hans Kornberg, the citric acid cycle and the glyoxylate cycle.
Microorganism
A microorganism, or microbe, is an organism of microscopic size, which may exist in its single-celled form or as a colony of cells. The possible existence of unseen microbial life was suspected from ancient times, such as in Jain scriptures from sixth-century BC India. The scientific study of microorganisms began with their observation under the microscope in the 1670s by Anton van Leeuwenhoek. In the 1850s, Louis Pasteur found that microorganisms caused food spoilage, debunking the theory of spontaneous generation. In the 1880s, Robert Koch discovered that microorganisms caused the diseases tuberculosis, cholera, diphtheria, and anthrax.

Because microorganisms include most unicellular organisms from all three domains of life, they can be extremely diverse. Two of the three domains, Archaea and Bacteria, contain only microorganisms. The third domain, Eukaryota, includes all multicellular organisms as well as many unicellular protists and protozoans that are microbes. Some protists are related to animals and some to green plants. Many multicellular organisms are also microscopic, namely micro-animals, some fungi, and some algae, but these are generally not considered microorganisms.

Microorganisms can have very different habitats, and live everywhere from the poles to the equator, in deserts, geysers, rocks, and the deep sea. Some are adapted to extremes such as very hot or very cold conditions, others to high pressure, and a few, such as Deinococcus radiodurans, to high-radiation environments. Microorganisms also make up the microbiota found in and on all multicellular organisms. There is evidence that 3.45-billion-year-old Australian rocks once contained microorganisms, the earliest direct evidence of life on Earth.

Microbes are important in human culture and health in many ways, serving to ferment foods and treat sewage, and to produce fuel, enzymes, and other bioactive compounds. Microbes are essential tools in biology as model organisms and have been put to use in biological warfare and bioterrorism. Microbes are a vital component of fertile soil. In the human body, microorganisms make up the human microbiota, including the essential gut flora. The pathogens responsible for many infectious diseases are microbes and, as such, are the target of hygiene measures.

Discovery
Ancient precursors
The possible existence of microscopic organisms was discussed for many centuries before their discovery in the seventeenth century. By the 6th century BC, the Jains of present-day India postulated the existence of tiny organisms called nigodas. These nigodas are said to be born in clusters; they live everywhere, including the bodies of plants, animals, and people; and their life lasts only for a fraction of a second. According to Mahavira, the 24th preacher of Jainism, humans destroy these nigodas on a massive scale when they eat, breathe, sit, and move. Many modern Jains assert that Mahavira's teachings presage the existence of microorganisms as discovered by modern science.

The earliest known idea to indicate the possibility of diseases spreading by yet unseen organisms was that of the Roman scholar Marcus Terentius Varro in a first-century BC book entitled On Agriculture, in which he called the unseen creatures animalia minuta and warned against locating a homestead near a swamp. In The Canon of Medicine (1020), Avicenna suggested that tuberculosis and other diseases might be contagious.
Early modern Turkish scientist Akshamsaddin mentioned the microbe in his work Maddat ul-Hayat (The Material of Life) about two centuries prior to Antonie van Leeuwenhoek's discovery through experimentation. In 1546, Girolamo Fracastoro proposed that epidemic diseases were caused by transferable seedlike entities that could transmit infection by direct or indirect contact, or even without contact over long distances.

Antonie van Leeuwenhoek is considered to be one of the fathers of microbiology. In 1673 he became the first to discover and conduct scientific experiments with microorganisms, using simple single-lensed microscopes of his own design. Robert Hooke, a contemporary of Leeuwenhoek, also used microscopy to observe microbial life in the form of the fruiting bodies of moulds. In his 1665 book Micrographia, he made drawings of his studies, and he coined the term cell.

19th century
Louis Pasteur (1822–1895) exposed boiled broths to the air, in vessels that contained a filter to prevent particles from passing through to the growth medium, and also in vessels without a filter but with air allowed in via a curved tube, so that dust particles would settle and not come in contact with the broth. By boiling the broth beforehand, Pasteur ensured that no microorganisms survived within the broths at the beginning of his experiment. Nothing grew in the broths in the course of Pasteur's experiment. This meant that the living organisms that grew in such broths came from outside, as spores on dust, rather than being spontaneously generated within the broth. Thus, Pasteur refuted the theory of spontaneous generation and supported the germ theory of disease.

In 1876, Robert Koch (1843–1910) established that microorganisms can cause disease. He found that the blood of cattle infected with anthrax always had large numbers of Bacillus anthracis. Koch found that he could transmit anthrax from one animal to another by taking a small sample of blood from the infected animal and injecting it into a healthy one, causing the healthy animal to become sick. He also found that he could grow the bacteria in a nutrient broth, then inject it into a healthy animal, and cause illness. Based on these experiments, he devised criteria for establishing a causal link between a microorganism and a disease; these are now known as Koch's postulates. Although these postulates cannot be applied in all cases, they do retain historical importance to the development of scientific thought and are still being used today.

The discovery of microorganisms such as Euglena that did not fit into either the animal or plant kingdoms, since they were photosynthetic like plants but motile like animals, led to the naming of a third kingdom in the 1860s. In 1860 John Hogg called this the Protoctista, and in 1866 Ernst Haeckel named it the Protista.

The work of Pasteur and Koch did not accurately reflect the true diversity of the microbial world, because of their exclusive focus on microorganisms having direct medical relevance. It was not until the work of Martinus Beijerinck and Sergei Winogradsky late in the nineteenth century that the true breadth of microbiology was revealed. Beijerinck made two major contributions to microbiology: the discovery of viruses and the development of enrichment culture techniques.
While his work on the tobacco mosaic virus established the basic principles of virology, it was his development of enrichment culturing that had the most immediate impact on microbiology, by allowing for the cultivation of a wide range of microbes with wildly different physiologies. Winogradsky was the first to develop the concept of chemolithotrophy and thereby to reveal the essential role played by microorganisms in geochemical processes. He was responsible for the first isolation and description of both nitrifying and nitrogen-fixing bacteria. French-Canadian microbiologist Felix d'Herelle co-discovered bacteriophages and was one of the earliest applied microbiologists.

Classification and structure
Microorganisms can be found almost anywhere on Earth. Bacteria and archaea are almost always microscopic, while a number of eukaryotes are also microscopic, including most protists, some fungi, and some micro-animals and plants. Viruses are generally regarded as not living and therefore are not considered to be microorganisms, although a subfield of microbiology is virology, the study of viruses.

Evolution
Single-celled microorganisms were the first forms of life to develop on Earth, approximately 3.5 billion years ago. Further evolution was slow, and for about 3 billion years in the Precambrian eon (much of the history of life on Earth), all organisms were microorganisms. Bacteria, algae and fungi have been identified in amber that is 220 million years old, which shows that the morphology of microorganisms has changed little since at least the Triassic period. The newly discovered biological role played by nickel, however (especially that brought about by volcanic eruptions from the Siberian Traps), may have accelerated the evolution of methanogens towards the end of the Permian–Triassic extinction event.

Microorganisms tend to have a relatively fast rate of evolution. Most microorganisms can reproduce rapidly, and bacteria are also able to freely exchange genes through conjugation, transformation and transduction, even between widely divergent species. This horizontal gene transfer, coupled with a high mutation rate and other means of transformation, allows microorganisms to swiftly evolve (via natural selection) to survive in new environments and respond to environmental stresses. This rapid evolution is important in medicine, as it has led to the development of multidrug-resistant pathogenic bacteria, superbugs, that are resistant to antibiotics.

A possible transitional form of microorganism between a prokaryote and a eukaryote was discovered in 2012 by Japanese scientists. Parakaryon myojinensis is a unique microorganism larger than a typical prokaryote, but with nuclear material enclosed in a membrane as in a eukaryote, and with the presence of endosymbionts. This is seen to be the first plausible evolutionary form of microorganism, showing a stage of development from the prokaryote to the eukaryote.

Archaea
Archaea are prokaryotic unicellular organisms, and form the first domain of life in Carl Woese's three-domain system. A prokaryote is defined as having no cell nucleus or other membrane-bound organelle. Archaea share this defining feature with the bacteria, with which they were once grouped. In 1990 the microbiologist Woese proposed the three-domain system that divided living things into bacteria, archaea and eukaryotes, and thereby split the prokaryote domain. Archaea differ from bacteria in both their genetics and biochemistry.
For example, while bacterial cell membranes are made from phosphoglycerides with ester bonds, archaeal membranes are made of ether lipids. Archaea were originally described as extremophiles living in extreme environments, such as hot springs, but have since been found in all types of habitats. Only now are scientists beginning to realize how common archaea are in the environment, with Thermoproteota (formerly Crenarchaeota) being the most common form of life in the ocean, dominating ecosystems below a certain depth. These organisms are also common in soil and play a vital role in ammonia oxidation.

The combined domains of archaea and bacteria make up the most diverse and abundant group of organisms on Earth and inhabit practically all environments where the temperature remains below the upper limit for life. They are found in water, soil, air, as the microbiome of an organism, in hot springs and even deep beneath the Earth's crust in rocks. The number of prokaryotes is estimated to be around five nonillion, or 5 × 10^30, accounting for at least half the biomass on Earth. The biodiversity of the prokaryotes is unknown, but may be very large. A May 2016 estimate, based on laws of scaling from known numbers of species against the size of organism, gives an estimate of perhaps 1 trillion species on the planet, of which most would be microorganisms. Currently, only one-thousandth of one percent of that total has been described. Archaeal cells of some species aggregate and transfer DNA from one cell to another through direct contact, particularly under stressful environmental conditions that cause DNA damage.

Bacteria
Like archaea, bacteria are prokaryotic: unicellular, and having no cell nucleus or other membrane-bound organelle. Bacteria are microscopic, with a few extremely rare exceptions, such as Thiomargarita namibiensis. Bacteria function and reproduce as individual cells, but they can often aggregate in multicellular colonies. Some species, such as myxobacteria, can aggregate into complex swarming structures, operating as multicellular groups as part of their life cycle, or form clusters in bacterial colonies, as E. coli does.

Their genome is usually a circular bacterial chromosome, a single loop of DNA, although they can also harbor small pieces of DNA called plasmids. These plasmids can be transferred between cells through bacterial conjugation. Bacteria have an enclosing cell wall, which provides strength and rigidity to their cells. They reproduce by binary fission or sometimes by budding, but do not undergo meiotic sexual reproduction. However, many bacterial species can transfer DNA between individual cells by a horizontal gene transfer process referred to as natural transformation. Some species form extraordinarily resilient spores, but for bacteria this is a mechanism for survival, not reproduction. Under optimal conditions bacteria can grow extremely rapidly, and their numbers can double as quickly as every 20 minutes.

Eukaryotes
Most living things that are visible to the naked eye in their adult form are eukaryotes, including humans. However, many eukaryotes are also microorganisms. Unlike bacteria and archaea, eukaryotes contain organelles such as the cell nucleus, the Golgi apparatus and mitochondria in their cells. The nucleus is an organelle that houses the DNA that makes up a cell's genome. DNA (deoxyribonucleic acid) itself is arranged in complex chromosomes. Mitochondria are organelles vital in metabolism, as they are the site of the citric acid cycle and oxidative phosphorylation.
They evolved from symbiotic bacteria and retain a remnant genome. Like bacteria, plant cells have cell walls, and contain organelles such as chloroplasts in addition to the organelles in other eukaryotes. Chloroplasts produce energy from light by photosynthesis, and were also originally symbiotic bacteria.

Unicellular eukaryotes consist of a single cell throughout their life cycle. This qualification is significant, since most multicellular eukaryotes consist of a single cell, called a zygote, only at the beginning of their life cycles. Microbial eukaryotes can be either haploid or diploid, and some organisms have multiple cell nuclei. Unicellular eukaryotes usually reproduce asexually by mitosis under favorable conditions. However, under stressful conditions such as nutrient limitation and other conditions associated with DNA damage, they tend to reproduce sexually by meiosis and syngamy.

Protists
Of the eukaryotic groups, the protists are most commonly unicellular and microscopic. This is a highly diverse group of organisms that are not easy to classify. Several algae species are multicellular protists, and slime molds have unique life cycles that involve switching between unicellular, colonial, and multicellular forms. The number of species of protists is unknown, since only a small proportion has been identified. Protist diversity is high in oceans, deep-sea vents, river sediment and an acidic river, suggesting that many eukaryotic microbial communities may yet be discovered.

Fungi
The fungi have several unicellular species, such as baker's yeast (Saccharomyces cerevisiae) and fission yeast (Schizosaccharomyces pombe). Some fungi, such as the pathogenic yeast Candida albicans, can undergo phenotypic switching and grow as single cells in some environments, and as filamentous hyphae in others.

Plants
The green algae are a large group of photosynthetic eukaryotes that include many microscopic organisms. Although some green algae are classified as protists, others such as charophyta are classified with embryophyte plants, which are the most familiar group of land plants. Algae can grow as single cells, or in long chains of cells. The green algae include unicellular and colonial flagellates, usually but not always with two flagella per cell, as well as various colonial, coccoid, and filamentous forms. In the Charales, which are the algae most closely related to higher plants, cells differentiate into several distinct tissues within the organism. There are about 6,000 species of green algae.

Ecology
Microorganisms are found in almost every habitat present in nature, including hostile environments such as the North and South poles, deserts, geysers, and rocks. They also include all the marine microorganisms of the oceans and deep sea. Some types of microorganisms have adapted to extreme environments and sustained colonies; these organisms are known as extremophiles. Extremophiles have been isolated from rocks as much as 7 kilometres below the Earth's surface, and it has been suggested that the amount of organisms living below the Earth's surface is comparable with the amount of life on or above the surface. Extremophiles have been known to survive for a prolonged time in a vacuum, and can be highly resistant to radiation, which may even allow them to survive in space. Many types of microorganisms have intimate symbiotic relationships with other, larger organisms, some of which are mutually beneficial (mutualism), while others can be damaging to the host organism (parasitism).
If microorganisms can cause disease in a host, they are known as pathogens (colloquially, germs). Microorganisms play critical roles in Earth's biogeochemical cycles, as they are responsible for decomposition and nitrogen fixation.

Bacteria use regulatory networks that allow them to adapt to almost every environmental niche on Earth. A network of interactions among diverse types of molecules, including DNA, RNA, proteins and metabolites, is utilised by the bacteria to achieve regulation of gene expression. In bacteria, the principal function of regulatory networks is to control the response to environmental changes, for example nutritional status and environmental stress. A complex organization of networks permits the microorganism to coordinate and integrate multiple environmental signals.

Extremophiles
Extremophiles are microorganisms that have adapted so that they can survive and even thrive in extreme environments that are normally fatal to most life-forms. Thermophiles and hyperthermophiles thrive in high temperatures; psychrophiles thrive in extremely low temperatures. Halophiles such as Halobacterium salinarum (an archaean) thrive in high-salt conditions, up to saturation. Alkaliphiles thrive in an alkaline pH of about 8.5–11, while acidophiles can thrive in a pH of 2.0 or less. Piezophiles thrive at very high pressures, up to 1,000–2,000 atm (and down to 0 atm, as in the vacuum of space). A few extremophiles such as Deinococcus radiodurans are radioresistant, resisting radiation exposures of up to 5 kGy.

Extremophiles are significant in different ways. They extend terrestrial life into much of the Earth's hydrosphere, crust and atmosphere; their specific evolutionary adaptation mechanisms to their extreme environments can be exploited in biotechnology; and their very existence under such extreme conditions increases the potential for extraterrestrial life.

Plants and soil
The nitrogen cycle in soils depends on the fixation of atmospheric nitrogen. This is achieved by a number of diazotrophs. One way this can occur is in the root nodules of legumes that contain symbiotic bacteria of the genera Rhizobium, Mesorhizobium, Sinorhizobium, Bradyrhizobium, and Azorhizobium.

The roots of plants create a narrow region known as the rhizosphere that supports many microorganisms known as the root microbiome. These microorganisms in the root microbiome are able to interact with each other and with surrounding plants through signals and cues. For example, mycorrhizal fungi are able to communicate with the root systems of many plants through chemical signals between both the plant and fungi. This results in a mutualistic symbiosis between the two. However, these signals can be eavesdropped on by other microorganisms, such as the soil bacterium Myxococcus xanthus, which preys on other bacteria. Eavesdropping, or the interception of signals by unintended receivers, such as plants and microorganisms, can lead to large-scale evolutionary consequences. For example, signaler-receiver pairs, like plant-microorganism pairs, may lose the ability to communicate with neighboring populations because of variability in eavesdroppers. In adapting to avoid local eavesdroppers, signal divergence could occur and thus lead to the isolation of plants and microorganisms through the inability to communicate with other populations.

Symbiosis
A lichen is a symbiosis of a macroscopic fungus with photosynthetic microbial algae or cyanobacteria.
Applications
Microorganisms are useful in producing foods, treating waste water, and creating biofuels and a wide range of chemicals and enzymes. They are invaluable in research as model organisms. They have been weaponised and sometimes used in warfare and bioterrorism. They are vital to agriculture through their roles in maintaining soil fertility and in decomposing organic matter. They also have applications in aquaculture, such as in biofloc technology.

Food production
Microorganisms are used in a fermentation process to make yoghurt, cheese, curd, kefir, ayran, xynogala, and other types of food. Fermentation cultures provide flavour and aroma, and inhibit undesirable organisms. They are used to leaven bread, and to convert sugars to alcohol in wine and beer. Microorganisms are used in brewing, wine making, baking, pickling and other food-making processes.

Water treatment
Biological water treatment depends on microorganisms that can respire dissolved substances to clean up water contaminated with organic material. Respiration may be aerobic, with a well-oxygenated filter bed such as a slow sand filter. Anaerobic digestion by methanogens generates useful methane gas as a by-product.

Energy
Microorganisms are used in fermentation to produce ethanol, and in biogas reactors to produce methane. Scientists are researching the use of algae to produce liquid fuels, and of bacteria to convert various forms of agricultural and urban waste into usable fuels.

Chemicals, enzymes
Microorganisms are used to produce many commercial and industrial chemicals, enzymes and other bioactive molecules. Organic acids produced on a large industrial scale by microbial fermentation include acetic acid, produced by acetic acid bacteria such as Acetobacter aceti; butyric acid, made by the bacterium Clostridium butyricum; lactic acid, made by Lactobacillus and other lactic acid bacteria; and citric acid, produced by the mould fungus Aspergillus niger. Microorganisms are used to prepare bioactive molecules such as streptokinase from the bacterium Streptococcus, cyclosporin A from the ascomycete fungus Tolypocladium inflatum, and statins produced by the yeast Monascus purpureus.

Science
Microorganisms are essential tools in biotechnology, biochemistry, genetics, and molecular biology. The yeasts Saccharomyces cerevisiae and Schizosaccharomyces pombe are important model organisms in science, since they are simple eukaryotes that can be grown rapidly in large numbers and are easily manipulated. They are particularly valuable in genetics, genomics and proteomics. Microorganisms can be harnessed for uses such as creating steroids and treating skin diseases. Scientists are also considering using microorganisms for living fuel cells, and as a solution for pollution.

Warfare
In the Middle Ages, as an early example of biological warfare, diseased corpses were thrown into castles during sieges using catapults or other siege engines. Individuals near the corpses were exposed to the pathogen and were likely to spread that pathogen to others. In modern times, bioterrorism has included the 1984 Rajneeshee bioterror attack and the 1993 release of anthrax by Aum Shinrikyo in Tokyo.

Soil
Microbes can make nutrients and minerals in the soil available to plants, produce hormones that spur growth, stimulate the plant immune system, and trigger or dampen stress responses. In general, a more diverse set of soil microbes results in fewer plant diseases and higher yield.
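The ethanol fermentation mentioned under Food production and Energy above follows a simple, well-established overall stoichiometry:

C6H12O6 → 2 C2H5OH + 2 CO2

One glucose molecule yields two molecules each of ethanol and carbon dioxide; the same reaction leavens bread (via the CO2) and ferments wine and beer (via the ethanol).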
Human health Human gut flora Microorganisms can form an endosymbiotic relationship with other, larger organisms. For example, microbial symbiosis plays a crucial role in the immune system. The microorganisms that make up the gut flora in the gastrointestinal tract contribute to gut immunity, synthesize vitamins such as folic acid and biotin, and ferment complex indigestible carbohydrates. Some microorganisms considered beneficial to health are termed probiotics and are available as dietary supplements or food additives. Disease Microorganisms are the causative agents (pathogens) in many infectious diseases. The organisms involved include pathogenic bacteria, causing diseases such as plague, tuberculosis and anthrax; protozoan parasites, causing diseases such as malaria, sleeping sickness, dysentery and toxoplasmosis; and also fungi causing diseases such as ringworm, candidiasis or histoplasmosis. However, other diseases such as influenza, yellow fever or AIDS are caused by pathogenic viruses, which are not usually classified as living organisms and are not, therefore, microorganisms by the strict definition. No clear examples of archaean pathogens are known, although a relationship has been proposed between the presence of some archaean methanogens and human periodontal disease. Numerous microbial pathogens are capable of sexual processes that appear to facilitate their survival in their infected host. Hygiene Hygiene is a set of practices to avoid infection or food spoilage by eliminating microorganisms from the surroundings. As microorganisms, in particular bacteria, are found virtually everywhere, harmful microorganisms may be reduced to acceptable levels rather than actually eliminated. In food preparation, microorganisms are reduced by preservation methods such as cooking, cleanliness of utensils, short storage periods, or by low temperatures. If complete sterility is needed, as with surgical equipment, an autoclave is used to kill microorganisms with heat and pressure. In fiction Osmosis Jones, a 2001 film, and its spin-off television series Ozzy & Drix, both set in a stylized version of the human body, featured anthropomorphic microorganisms. In War of the Worlds (2005), alien lifeforms attempting to conquer Earth are ultimately defeated by a common microbe to which humans are immune.
Biology and health sciences
Biology basics
Biology
20406
https://en.wikipedia.org/wiki/M-theory
M-theory
M-theory is a theory in physics that unifies all consistent versions of superstring theory. Edward Witten first conjectured the existence of such a theory at a string theory conference at the University of Southern California in 1995. Witten's announcement initiated a flurry of research activity known as the second superstring revolution. Prior to Witten's announcement, string theorists had identified five versions of superstring theory. Although these theories initially appeared to be very different, work by many physicists showed that the theories were related in intricate and nontrivial ways. Physicists found that apparently distinct theories could be unified by mathematical transformations called S-duality and T-duality. Witten's conjecture was based in part on the existence of these dualities and in part on the relationship of the string theories to a field theory called eleven-dimensional supergravity. Although a complete formulation of M-theory is not known, such a formulation should describe two- and five-dimensional objects called branes and should be approximated by eleven-dimensional supergravity at low energies. Modern attempts to formulate M-theory are typically based on matrix theory or the AdS/CFT correspondence. According to Witten, M should stand for "magic", "mystery" or "membrane" according to taste, and the true meaning of the title should be decided when a more fundamental formulation of the theory is known. Investigations of the mathematical structure of M-theory have spawned important theoretical results in physics and mathematics. More speculatively, M-theory may provide a framework for developing a unified theory of all of the fundamental forces of nature. Attempts to connect M-theory to experiment typically focus on compactifying its extra dimensions to construct candidate models of the four-dimensional world, although so far none have been verified to give rise to physics as observed in high-energy physics experiments. Background Quantum gravity and strings One of the deepest problems in modern physics is the problem of quantum gravity. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which is formulated within the framework of classical physics. However, nongravitational forces are described within the framework of quantum mechanics, a radically different formalism for describing physical phenomena based on probability. A quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, but difficulties arise when one attempts to apply the usual prescriptions of quantum theory to the force of gravity. String theory is a theoretical framework that attempts to reconcile gravity and quantum mechanics. In string theory, the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In this way, all of the different elementary particles may be viewed as vibrating strings. 
One of the vibrational states of a string gives rise to the graviton, a quantum mechanical particle that carries gravitational force. There are several versions of string theory: type I, type IIA, type IIB, and two flavors of heterotic string theory (SO(32) and E8 × E8). The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings (which are segments with endpoints) and closed strings (which form closed loops), while types IIA and IIB include only closed strings. Each of these five string theories arises as a special limiting case of M-theory. This theory, like its string theory predecessors, is an example of a quantum theory of gravity. It describes a force just like the familiar gravitational force subject to the rules of quantum mechanics. Number of dimensions In everyday life, there are three familiar dimensions of space: height, width and depth. Einstein's general theory of relativity treats time as a dimension on par with the three spatial dimensions; in general relativity, space and time are not modeled as separate entities but are instead unified into a four-dimensional spacetime with three spatial dimensions and one time dimension. In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of spacetime. In spite of the fact that the universe is well described by four-dimensional spacetime, there are several reasons why physicists consider theories in other dimensions. In some cases, by modeling spacetime in a different number of dimensions, a theory becomes more mathematically tractable, and one can perform calculations and gain general insights more easily. There are also situations where theories in two or three spacetime dimensions are useful for describing phenomena in condensed matter physics. Finally, there exist scenarios in which there could actually be more than four dimensions of spacetime which have nonetheless managed to escape detection. One notable feature of string theory and M-theory is that these theories require extra dimensions of spacetime for their mathematical consistency. In string theory, spacetime is ten-dimensional (nine spatial dimensions, and one time dimension), while in M-theory it is eleven-dimensional (ten spatial dimensions, and one time dimension). In order to describe real physical phenomena using these theories, one must therefore imagine scenarios in which these extra dimensions would not be observed in experiments. Compactification is one way of modifying the number of dimensions in a physical theory. In compactification, some of the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled-up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions. Dualities Theories that arise as different limits of M-theory turn out to be related in highly nontrivial ways. One of the relationships that can exist between these different physical theories is called S-duality.
This is a relationship which says that a collection of strongly interacting particles in one theory can, in some cases, be viewed as a collection of weakly interacting particles in a completely different theory. Roughly speaking, a collection of particles is said to be strongly interacting if they combine and decay often and weakly interacting if they do so infrequently. Type I string theory turns out to be equivalent by S-duality to the SO(32) heterotic string theory. Similarly, type IIB string theory is related to itself in a nontrivial way by S-duality. Another relationship between different string theories is T-duality. Here one considers strings propagating around a circular extra dimension. T-duality states that a string propagating around a circle of radius R is equivalent to a string propagating around a circle of radius 1/R in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times. The number of times the string winds around a circle is called the winding number. If a string has momentum p and winding number n in one description, it will have momentum n and winding number p in the dual description (a standard form of the mass formula underlying this exchange is sketched at the end of this passage). For example, type IIA string theory is equivalent to type IIB string theory via T-duality, and the two versions of heterotic string theory are also related by T-duality. In general, the term duality refers to a situation where two seemingly different physical systems turn out to be equivalent in a nontrivial way. If two theories are related by a duality, it means that one theory can be transformed in some way so that it ends up looking just like the other theory. The two theories are then said to be dual to one another under the transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena. Supersymmetry Another important theoretical idea that plays a role in M-theory is supersymmetry. This is a mathematical relation that exists in certain physical theories between a class of particles called bosons and a class of particles called fermions. Roughly speaking, fermions are the constituents of matter, while bosons mediate interactions between particles. In theories with supersymmetry, each boson has a counterpart which is a fermion, and vice versa. When supersymmetry is imposed as a local symmetry, one automatically obtains a quantum mechanical theory that includes gravity. Such a theory is called a supergravity theory. A theory of strings that incorporates the idea of supersymmetry is called a superstring theory. There are several different versions of superstring theory which are all subsumed within the M-theory framework. At low energies, superstring theories are approximated by one of the three supergravities in ten dimensions, known as type I, type IIA, and type IIB supergravity. Similarly, M-theory is approximated at low energies by supergravity in eleven dimensions. Branes In string theory and related theories such as supergravity theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For example, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. In dimension p, these are called p-branes.
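The momentum–winding exchange in T-duality can be made quantitative. As an illustrative aside (a standard textbook formula rather than anything stated in this article; conventions vary), the mass spectrum of a closed bosonic string on a circle of radius R, in units where the string length is set to one, takes the form

\[ M^2 = \left(\frac{n}{R}\right)^2 + (wR)^2 + 2\left(N + \tilde{N} - 2\right), \]

where n is the momentum number, w the winding number, and N and \(\tilde{N}\) are the oscillator levels. The spectrum is unchanged under R → 1/R with n and w exchanged, which is precisely the T-duality described above.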
Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They can have mass and other attributes such as charge. A p-brane sweeps out a (p + 1)-dimensional volume in spacetime called its worldvolume (a schematic form of the corresponding action is sketched at the end of this passage). Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane. The word brane comes from the word "membrane" which refers to a two-dimensional brane. In string theory, the fundamental objects that give rise to elementary particles are the one-dimensional strings. Although the physical phenomena described by M-theory are still poorly understood, physicists know that the theory describes two- and five-dimensional branes. Much of the current research in M-theory attempts to better understand the properties of these branes. History and development Kaluza–Klein theory In the early 20th century, physicists and mathematicians including Albert Einstein and Hermann Minkowski pioneered the use of four-dimensional geometry for describing the physical world. These efforts culminated in the formulation of Einstein's general theory of relativity, which relates gravity to the geometry of four-dimensional spacetime. The success of general relativity led to efforts to apply higher dimensional geometry to explain other forces. In 1919, work by Theodor Kaluza showed that by passing to five-dimensional spacetime, one can unify gravity and electromagnetism into a single force. This idea was improved by physicist Oskar Klein, who suggested that the additional dimension proposed by Kaluza could take the form of a circle of extremely small radius. The Kaluza–Klein theory and subsequent attempts by Einstein to develop a unified field theory were never completely successful. In part this was because Kaluza–Klein theory predicted a particle (the radion) that has never been shown to exist, and in part because it was unable to correctly predict the ratio of an electron's mass to its charge. In addition, these theories were being developed just as other physicists were beginning to discover quantum mechanics, which would ultimately prove successful in describing known forces such as electromagnetism, as well as new nuclear forces that were being discovered throughout the middle part of the century. Thus it would take almost fifty years for the idea of new dimensions to be taken seriously again. Early work on supergravity New concepts and mathematical tools provided fresh insights into general relativity, giving rise to a period in the 1960s–1970s now known as the golden age of general relativity. In the mid-1970s, physicists began studying higher-dimensional theories combining general relativity with supersymmetry, the so-called supergravity theories. General relativity does not place any limits on the possible dimensions of spacetime. Although the theory is typically formulated in four dimensions, one can write down the same equations for the gravitational field in any number of dimensions. Supergravity is more restrictive because it places an upper limit on the number of dimensions. In 1978, work by Werner Nahm showed that the maximum spacetime dimension in which one can formulate a consistent supersymmetric theory is eleven. In the same year, Eugène Cremmer, Bernard Julia, and Joël Scherk of the École Normale Supérieure showed that supergravity not only permits up to eleven dimensions but is in fact most elegant in this maximal number of dimensions.
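As a hedged illustration of the worldvolume idea mentioned above (a standard Dirac–Nambu–Goto-type expression from the textbook literature, not a formula quoted in this article), the simplest action for a p-brane is proportional to the worldvolume it sweeps out:

\[ S_p = -T_p \int d^{p+1}\sigma \, \sqrt{-\det g_{ab}}, \]

where T_p is the brane tension and g_{ab} is the metric induced on the (p + 1)-dimensional worldvolume. For p = 0 this reduces to the worldline action of a point particle, and for p = 1 to the Nambu–Goto action of a string.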
Initially, many physicists hoped that by compactifying eleven-dimensional supergravity, it might be possible to construct realistic models of our four-dimensional world. The hope was that such models would provide a unified description of the four fundamental forces of nature: electromagnetism, the strong and weak nuclear forces, and gravity. Interest in eleven-dimensional supergravity soon waned as various flaws in this scheme were discovered. One of the problems was that the laws of physics appear to distinguish between clockwise and counterclockwise, a phenomenon known as chirality. Edward Witten and others observed that this chirality property cannot be readily derived by compactifying from eleven dimensions. In the first superstring revolution in 1984, many physicists turned to string theory as a unified theory of particle physics and quantum gravity. Unlike supergravity theory, string theory was able to accommodate the chirality of the standard model, and it provided a theory of gravity consistent with quantum effects. Another feature of string theory that many physicists were drawn to in the 1980s and 1990s was its high degree of uniqueness. In ordinary particle theories, one can consider any collection of elementary particles whose classical behavior is described by an arbitrary Lagrangian. In string theory, the possibilities are much more constrained: by the 1990s, physicists had argued that there were only five consistent supersymmetric versions of the theory. Relationships between string theories Although there were only a handful of consistent superstring theories, it remained a mystery why there was not just one consistent formulation. However, as physicists began to examine string theory more closely, they realized that these theories are related in intricate and nontrivial ways. In the late 1970s, Claus Montonen and David Olive had conjectured a special property of certain physical theories. A sharpened version of their conjecture concerns a theory called N = 4 supersymmetric Yang–Mills theory, which describes theoretical particles formally similar to the quarks and gluons that make up atomic nuclei. The strength with which the particles of this theory interact is measured by a number called the coupling constant. The result of Montonen and Olive, now known as Montonen–Olive duality, states that N = 4 supersymmetric Yang–Mills theory with coupling constant g is equivalent to the same theory with coupling constant 1/g. In other words, a system of strongly interacting particles (large coupling constant) has an equivalent description as a system of weakly interacting particles (small coupling constant), and vice versa. In the 1990s, several theorists generalized Montonen–Olive duality to the S-duality relationship, which connects different string theories. Ashoke Sen studied S-duality in the context of heterotic strings in four dimensions. Chris Hull and Paul Townsend showed that type IIB string theory with a large coupling constant is equivalent via S-duality to the same theory with small coupling constant. Theorists also found that different string theories may be related by T-duality. This duality implies that strings propagating on completely different spacetime geometries may be physically equivalent. Membranes and fivebranes String theory extends ordinary particle physics by replacing zero-dimensional point particles by one-dimensional objects called strings.
In the late 1980s, it was natural for theorists to attempt to formulate other extensions in which particles are replaced by two-dimensional supermembranes or by higher-dimensional objects called branes. Such objects had been considered as early as 1962 by Paul Dirac, and they were reconsidered by a small but enthusiastic group of physicists in the 1980s. Supersymmetry severely restricts the possible number of dimensions of a brane. In 1987, Eric Bergshoeff, Ergin Sezgin, and Paul Townsend showed that eleven-dimensional supergravity includes two-dimensional branes. Intuitively, these objects look like sheets or membranes propagating through the eleven-dimensional spacetime. Shortly after this discovery, Michael Duff, Paul Howe, Takeo Inami, and Kellogg Stelle considered a particular compactification of eleven-dimensional supergravity with one of the dimensions curled up into a circle. In this setting, one can imagine the membrane wrapping around the circular dimension. If the radius of the circle is sufficiently small, then this membrane looks just like a string in ten-dimensional spacetime. In fact, Duff and his collaborators showed that this construction reproduces exactly the strings appearing in type IIA superstring theory. In 1990, Andrew Strominger published a similar result which suggested that strongly interacting strings in ten dimensions might have an equivalent description in terms of weakly interacting five-dimensional branes. Initially, physicists were unable to prove this relationship for two important reasons. On the one hand, the Montonen–Olive duality was still unproven, and so Strominger's conjecture was even more tenuous. On the other hand, there were many technical issues related to the quantum properties of five-dimensional branes. The first of these problems was solved in 1993 when Ashoke Sen established that certain physical theories require the existence of objects with both electric and magnetic charge which were predicted by the work of Montonen and Olive. In spite of this progress, the relationship between strings and five-dimensional branes remained conjectural because theorists were unable to quantize the branes. Starting in 1991, a team of researchers including Michael Duff, Ramzi Khuri, Jianxin Lu, and Ruben Minasian considered a special compactification of string theory in which four of the ten dimensions curl up. If one considers a five-dimensional brane wrapped around these extra dimensions, then the brane looks just like a one-dimensional string. In this way, the conjectured relationship between strings and branes was reduced to a relationship between strings and strings, and the latter could be tested using already established theoretical techniques. Second superstring revolution Speaking at the string theory conference at the University of Southern California in 1995, Edward Witten of the Institute for Advanced Study made the surprising suggestion that all five superstring theories were in fact just different limiting cases of a single theory in eleven spacetime dimensions. Witten's announcement drew together all of the previous results on S- and T-duality and the appearance of two- and five-dimensional branes in string theory. In the months following Witten's announcement, hundreds of new papers appeared on the Internet confirming that the new theory involved membranes in an important way. Today this flurry of work is known as the second superstring revolution. 
One of the important developments following Witten's announcement was Witten's work in 1996 with string theorist Petr Hořava. Witten and Hořava studied M-theory on a special spacetime geometry with two ten-dimensional boundary components. Their work shed light on the mathematical structure of M-theory and suggested possible ways of connecting M-theory to real world physics. Origin of the term Initially, some physicists suggested that the new theory was a fundamental theory of membranes, but Witten was skeptical of the role of membranes in the theory. In the absence of an understanding of the true meaning and structure of M-theory, Witten suggested that the M should stand for "magic", "mystery", or "membrane" according to taste, with the true meaning of the title to be decided when a more fundamental formulation of the theory is known. Years later, he would state, "I thought my colleagues would understand that it really stood for membrane. Unfortunately, it got people confused." Matrix theory BFSS matrix model In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics. One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting. Noncommutative geometry In geometry, it is often useful to introduce coordinates. For example, in order to study the geometry of the Euclidean plane, one defines the coordinates x and y of a point as its distances from a pair of axes. In ordinary geometry, the coordinates of a point are numbers, so they can be multiplied, and the product of two coordinates does not depend on the order of multiplication. That is, xy = yx. This property of multiplication is known as the commutative law, and this relationship between geometry and the commutative algebra of coordinates is the starting point for much of modern geometry. Noncommutative geometry is a branch of mathematics that attempts to generalize this situation. Rather than working with ordinary numbers, one considers some similar objects, such as matrices, whose multiplication does not satisfy the commutative law (that is, objects for which xy is not necessarily equal to yx). One imagines that these noncommuting objects are coordinates on some more general notion of "space" and proves theorems about these generalized spaces by exploiting the analogy with ordinary geometry. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which the coordinates on spacetime do not satisfy the commutativity property.
This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories. AdS/CFT correspondence Overview The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields. Quantum field theories are also used throughout condensed matter physics to model particle-like objects called quasiparticles. One approach to formulating M-theory and studying its properties is provided by the anti-de Sitter/conformal field theory (AdS/CFT) correspondence. Proposed by Juan Maldacena in late 1997, the AdS/CFT correspondence is a theoretical result which implies that M-theory is in some cases equivalent to a quantum field theory. In addition to providing insights into the mathematical structure of string and M-theory, the AdS/CFT correspondence has shed light on many aspects of quantum field theory in regimes where traditional calculational techniques are ineffective. In the AdS/CFT correspondence, the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk tessellated by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior. Now imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction of the cylinder. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface. This construction describes a hypothetical universe with only two space dimensions and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space. An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, within a small region on the surface around any given point, it looks just like Minkowski space, the model of spacetime used in nongravitational physics. One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space.
This observation is the starting point for the AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a quantum field theory. The claim is that this quantum field theory is equivalent to the gravitational theory on the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating entities and calculations in one theory into their counterparts in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical, so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding. 6D (2,0) superconformal field theory One particular realization of the AdS/CFT correspondence states that M-theory on the product space AdS7 × S4 is equivalent to the so-called (2,0)-theory on the six-dimensional boundary. Here "(2,0)" refers to the particular type of supersymmetry that appears in the theory. In this example, the spacetime of the gravitational theory is effectively seven-dimensional (hence the notation AdS7), and there are four additional "compact" dimensions (encoded by the S4 factor). In the real world, spacetime is four-dimensional, at least macroscopically, so this version of the correspondence does not provide a realistic model of gravity. Likewise, the dual theory is not a viable model of any real-world system since it describes a world with six spacetime dimensions. Nevertheless, the (2,0)-theory has proven to be important for studying the general properties of quantum field theories. Indeed, this theory subsumes many mathematically interesting effective quantum field theories and points to new dualities relating these theories. For example, Luis Alday, Davide Gaiotto, and Yuji Tachikawa showed that by compactifying this theory on a surface, one obtains a four-dimensional quantum field theory, and there is a duality known as the AGT correspondence which relates the physics of this theory to certain physical concepts associated with the surface itself. More recently, theorists have extended these ideas to study the theories obtained by compactifying down to three dimensions. In addition to its applications in quantum field theory, the (2,0)-theory has spawned important results in pure mathematics. For example, the existence of the (2,0)-theory was used by Witten to give a "physical" explanation for a conjectural relationship in mathematics called the geometric Langlands correspondence. In subsequent work, Witten showed that the (2,0)-theory could be used to understand a concept in mathematics called Khovanov homology. Developed by Mikhail Khovanov around 2000, Khovanov homology provides a tool in knot theory, the branch of mathematics that studies and classifies the different shapes of knots. Another application of the (2,0)-theory in mathematics is the work of Davide Gaiotto, Greg Moore, and Andrew Neitzke, which used physical ideas to derive new results in hyperkähler geometry. ABJM superconformal field theory Another realization of the AdS/CFT correspondence states that M-theory on AdS4 × S7/Zk is equivalent to a quantum field theory called the ABJM theory in three dimensions. In this version of the correspondence, seven of the dimensions of M-theory are curled up, leaving four non-compact dimensions.
Since the spacetime of our universe is four-dimensional, this version of the correspondence provides a somewhat more realistic description of gravity. The ABJM theory appearing in this version of the correspondence is also interesting for a variety of reasons. Introduced by Aharony, Bergman, Jafferis, and Maldacena, it is closely related to another quantum field theory called Chern–Simons theory. The latter theory was popularized by Witten in the late 1980s because of its applications to knot theory. In addition, the ABJM theory serves as a semi-realistic simplified model for solving problems that arise in condensed matter physics. Phenomenology Overview In addition to being an idea of considerable theoretical interest, M-theory provides a framework for constructing models of real world physics that combine general relativity with the standard model of particle physics. Phenomenology is the branch of theoretical physics in which physicists construct realistic models of nature from more abstract theoretical ideas. String phenomenology is the part of string theory that attempts to construct realistic models of particle physics based on string and M-theory. Typically, such models are based on the idea of compactification. Starting with the ten- or eleven-dimensional spacetime of string or M-theory, physicists postulate a shape for the extra dimensions. By choosing this shape appropriately, they can construct models roughly similar to the standard model of particle physics, together with additional undiscovered particles, usually supersymmetric partners to analogues of known particles. One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. This is a special kind of geometric object named after mathematicians Eugenio Calabi and Shing-Tung Yau. Calabi–Yau manifolds offer many ways of extracting realistic physics from string theory. Other similar methods can be used to construct models with physics resembling to some extent that of our four-dimensional world based on M-theory. Partly because of theoretical and mathematical difficulties and partly because of the extremely high energies (beyond what is technologically possible for the foreseeable future) needed to test these theories experimentally, there is so far no experimental evidence that would unambiguously point to any of these models being a correct fundamental description of nature. This has led some in the community to criticize these approaches to unification and question the value of continued research on these problems. Compactification on G2 manifolds In one approach to M-theory phenomenology, theorists assume that the seven extra dimensions of M-theory are shaped like a G2 manifold. This is a special kind of seven-dimensional shape constructed by mathematician Dominic Joyce of the University of Oxford. These G2 manifolds are still poorly understood mathematically, and this fact has made it difficult for physicists to fully develop this approach to phenomenology. For example, physicists and mathematicians often assume that space has a mathematical property called smoothness, but this property cannot be assumed in the case of a G2 manifold if one wishes to recover the physics of our four-dimensional world. Another problem is that G2 manifolds are not complex manifolds, so theorists are unable to use tools from the branch of mathematics known as complex analysis.
Finally, there are many open questions about the existence, uniqueness, and other mathematical properties of G2 manifolds, and mathematicians lack a systematic way of searching for these manifolds. Heterotic M-theory Because of the difficulties with G2 manifolds, most attempts to construct realistic theories of physics based on M-theory have taken a more indirect approach to compactifying eleven-dimensional spacetime. One approach, pioneered by Witten, Hořava, Burt Ovrut, and others, is known as heterotic M-theory. In this approach, one imagines that one of the eleven dimensions of M-theory is shaped like a circle. If this circle is very small, then the spacetime becomes effectively ten-dimensional. One then assumes that six of the ten dimensions form a Calabi–Yau manifold. If this Calabi–Yau manifold is also taken to be small, one is left with a theory in four dimensions. Heterotic M-theory has been used to construct models of brane cosmology in which the observable universe is thought to exist on a brane in a higher dimensional ambient space. It has also spawned alternative theories of the early universe that do not rely on the theory of cosmic inflation.
Physical sciences
Particle physics: General
Physics
20412
https://en.wikipedia.org/wiki/MATLAB
MATLAB
MATLAB (an abbreviation of "MATrix LABoratory") is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages. Although MATLAB is intended primarily for numeric computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. MATLAB has more than four million users worldwide, who come from various backgrounds in engineering, science, and economics. More than 5,000 colleges and universities around the world use MATLAB to support instruction and research. History Origins MATLAB was invented by mathematician and computer programmer Cleve Moler. The idea for MATLAB was based on his 1960s PhD thesis. Moler became a math professor at the University of New Mexico and started developing MATLAB for his students as a hobby. He developed MATLAB's initial linear algebra programming in 1967 with his one-time thesis advisor, George Forsythe. This was followed by Fortran code for linear equations in 1971. Before version 1.0, MATLAB "was not a programming language; it was a simple interactive matrix calculator. There were no programs, no toolboxes, no graphics. And no ODEs or FFTs." The first early version of MATLAB was completed in the late 1970s. The software was disclosed to the public for the first time in February 1979 at the Naval Postgraduate School in California. Early versions of MATLAB were simple matrix calculators with 71 pre-built functions. At the time, MATLAB was distributed for free to universities. Moler would leave copies at universities he visited, and the software developed a strong following in the math departments of university campuses. In the 1980s, Cleve Moler met John N. Little. They decided to reprogram MATLAB in C and market it for the IBM desktops that were replacing mainframe computers at the time. John Little and programmer Steve Bangert re-programmed MATLAB in C, created the MATLAB programming language, and developed features for toolboxes. Since 1993, open-source alternatives have been available: GNU Octave (mostly compatible with MATLAB) and Scilab (similar to MATLAB). Commercial development MATLAB was first released as a commercial product in 1984 at the Automatic Control Conference in Las Vegas. MathWorks, Inc. was founded to develop the software, and the MATLAB programming language was released. The first MATLAB sale was the following year, when Nick Trefethen from the Massachusetts Institute of Technology bought ten copies. By the end of the 1980s, several hundred copies of MATLAB had been sold to universities for student use. The software was popularized largely thanks to toolboxes created by experts in various fields for performing specialized mathematical tasks. Many of the toolboxes were developed as a result of Stanford students who used MATLAB in academia bringing the software with them to the private sector. Over time, MATLAB was re-written for early operating systems created by Digital Equipment Corporation (including the VAX line), Sun Microsystems, and for Unix PCs. Version 3 was released in 1987. The first MATLAB compiler was developed by Stephen C. Johnson in the 1990s.
In 2000, MathWorks added a Fortran-based library for linear algebra (LAPACK) in MATLAB 6, replacing the software's original LINPACK and EISPACK subroutines, which were in C. MATLAB's Parallel Computing Toolbox was released at the 2004 Supercomputing Conference, and support for graphics processing units (GPUs) was added to it in 2010. Recent history Some especially large changes to the software were made with version 8 in 2012. The user interface was reworked and Simulink's functionality was expanded. By 2016, MATLAB had introduced several technical and user interface improvements, including the MATLAB Live Editor notebook, and other features. Release history For a complete list of changes to both MATLAB and its official toolboxes, see the MATLAB release notes. Syntax The MATLAB application is built around the MATLAB programming language. Common usage of the MATLAB application involves using the "Command Window" as an interactive mathematical shell or executing text files containing MATLAB code. "Hello, world!" example A "Hello, world!" program in MATLAB:
disp('Hello, world!')
It displays like so:
Hello, world!
Variables Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language because types are implicitly converted. It is an inferred typed language because variables can be assigned without declaring their type, except if they are to be treated as symbolic objects, and their type can change. Values can come from constants, from computation involving values of other variables, or from the output of a function. For example:
>> x = 17
x = 17
>> x = 'hat'
x = hat
>> x = [3*4, pi/2]
x = 12.0000 1.5708
>> y = 3*sin(x)
y = -1.6097 3.0000
Vectors and matrices A simple array is defined using the colon syntax: initial:increment:terminator. For instance:
>> array = 1:2:9
array = 1 3 5 7 9
defines a variable named array (or assigns a new value to an existing variable with the name array) which is an array consisting of the values 1, 3, 5, 7, and 9. That is, the array starts at 1 (the initial value), increments with each step from the previous value by 2 (the increment value), and stops once it reaches (or is about to exceed) 9 (the terminator value). The increment value can be left out of this syntax (along with one of the colons) to use a default value of 1:
>> ari = 1:5
ari = 1 2 3 4 5
assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since the default value of 1 is used as the increment. Indexing is one-based, which is the usual convention for matrices in mathematics, unlike zero-based indexing commonly used in other programming languages such as C, C++, and Java. Matrices can be defined by separating the elements of a row with blank space or comma and using a semicolon to separate the rows. The list of elements should be surrounded by square brackets []. Parentheses () are used to access elements and subarrays (they are also used to denote a function argument list).
>> A = [16, 3, 2, 13 ; 5, 10, 11, 8 ; 9, 6, 7, 12 ; 4, 15, 14, 1]
A =
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
>> A(2,3)
ans = 11
Sets of indices can be specified by expressions such as 2:4, which evaluates to [2, 3, 4]. For example, a submatrix taken from rows 2 through 4 and columns 3 through 4 can be written as:
>> A(2:4,3:4)
ans =
11 8
7 12
14 1
A square identity matrix of size n can be generated using the function eye, and matrices of any size with zeros or ones can be generated with the functions zeros and ones, respectively:
>> eye(3,3)
ans =
1 0 0
0 1 0
0 0 1
>> zeros(2,3)
ans =
0 0 0
0 0 0
>> ones(2,3)
ans =
1 1 1
1 1 1
Transposing a vector or a matrix is done either by the function transpose or by adding dot-prime after the matrix (without the dot, prime will perform conjugate transpose for complex arrays):
>> A = [1 ; 2], B = A.', C = transpose(A)
A =
1
2
B = 1 2
C = 1 2
>> D = [0, 3 ; 1, 5], D.'
D =
0 3
1 5
ans =
0 1
3 5
Most functions accept arrays as input and operate element-wise on each element. For example, mod(2*J,n) will multiply every element in J by 2, and then reduce each element modulo n. MATLAB does include standard for and while loops, but (as in other similar applications such as APL and R), using the vectorized notation is encouraged and is often faster to execute. The following code, excerpted from the function magic.m, creates a magic square M for odd values of n (the MATLAB function meshgrid is used here to generate square matrices I and J containing the integers 1 to n):
[J,I] = meshgrid(1:n);             % I and J hold the row and column indices 1..n
A = mod(I + J - (n + 3) / 2, n);   % first auxiliary index pattern
B = mod(I + 2 * J - 2, n);         % second auxiliary index pattern
M = n * A + B + 1;                 % combine the patterns into the magic square
Structures MATLAB supports structure data types. Since all variables in MATLAB are arrays, a more adequate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.). Functions When creating a MATLAB function, the name of the file should match the name of the first function in the file. Valid function names begin with an alphabetic character, and can contain letters, numbers, or underscores. Variables and functions are case sensitive. Function handles MATLAB supports elements of lambda calculus by introducing function handles, or function references, which are implemented either in .m files or anonymous/nested functions. Classes and object-oriented programming MATLAB supports object-oriented programming including classes, inheritance, virtual dispatch, packages, pass-by-value semantics, and pass-by-reference semantics. However, the syntax and calling conventions are significantly different from other languages. MATLAB has value classes and reference classes, depending on whether the class has handle as a super-class (for reference classes) or not (for value classes). Method call behavior is different between value and reference classes. For example, a call to a method:
object.method();
can alter any member of object only if object is an instance of a reference class; otherwise, a value class method must return a new instance if it needs to modify the object. An example of a simple class is provided below:
classdef Hello
    methods
        function greet(obj)
            disp('Hello!')
        end
    end
end
When put into a file named Hello.m (matching the class name), this can be executed with the following commands:
>> x = Hello();
>> x.greet();
Hello!
Graphics and graphical user interface programming MATLAB has tightly integrated graph-plotting features. For example, the function plot can be used to produce a graph from two vectors x and y. The code:
x = 0:pi/100:2*pi;
y = sin(x);
plot(x,y)
produces a plot of the sine function. MATLAB supports three-dimensional graphics as well. MATLAB supports developing graphical user interface (GUI) applications. UIs can be generated either programmatically or using visual design environments such as GUIDE and App Designer. MATLAB and other languages MATLAB can call functions and subroutines written in the programming languages C or Fortran. A wrapper function is created allowing MATLAB data types to be passed and returned.
MEX files (MATLAB executables) are the dynamically loadable object files created by compiling such functions. Since 2014, increasing two-way interfacing with Python has been added (a minimal example appears at the end of this article). Libraries written in Perl, Java, ActiveX or .NET can be directly called from MATLAB, and many MATLAB libraries (for example XML or SQL support) are implemented as wrappers around Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be done with a MATLAB toolbox sold separately by MathWorks, or using an undocumented mechanism called JMI (Java-to-MATLAB Interface), which should not be confused with the unrelated Java Metadata Interface that is also called JMI. An official MATLAB API for Java was added in 2016. As alternatives to the MuPAD-based Symbolic Math Toolbox available from MathWorks, MATLAB can be connected to Maple or Mathematica. Libraries also exist to import and export MathML. Relations to US sanctions In 2020, MathWorks withdrew MATLAB services from two Chinese universities as a result of US sanctions. The universities said they would respond by increasing their use of open-source alternatives and by developing domestic alternatives.
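For illustration of the Python interfacing mentioned above, here is a minimal, hedged sketch. It assumes a MATLAB release with the py. interface (R2014b or later) and a Python installation that MATLAB has been configured to find:
% Call into Python's standard library from MATLAB via the py. prefix.
v = py.math.sqrt(2);         % invoke Python's math.sqrt from MATLAB
lst = py.list({1, 2, 3});    % build a Python list from a MATLAB cell array
last = double(lst.pop());    % call a Python method; convert the result to double
disp(double(v))              % numeric results convert back to MATLAB doubles
Return values that are Python objects can generally be converted with functions such as double and char; the details vary by type and release.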
Technology
Science and Engineering
null
20423
https://en.wikipedia.org/wiki/Malaria
Malaria
Malaria is a mosquito-borne infectious disease that affects vertebrates and Anopheles mosquitoes. Human malaria causes symptoms that typically include fever, fatigue, vomiting, and headaches. In severe cases, it can cause jaundice, seizures, coma, or death. Symptoms usually begin 10 to 15 days after being bitten by an infected Anopheles mosquito. If not properly treated, people may have recurrences of the disease months later. In those who have recently survived an infection, reinfection usually causes milder symptoms. This partial resistance disappears over months to years if the person has no continuing exposure to malaria. The mosquito vector is itself harmed by Plasmodium infection, which reduces its lifespan. Human malaria is caused by single-celled microorganisms of the Plasmodium group. It is spread exclusively through bites of infected female Anopheles mosquitoes. The mosquito bite introduces the parasites from the mosquito's saliva into a person's blood. The parasites travel to the liver, where they mature and reproduce. Five species of Plasmodium commonly infect humans. The three species associated with more severe cases are P. falciparum (which is responsible for the vast majority of malaria deaths), P. vivax, and P. knowlesi (a simian malaria that spills over into thousands of people a year). P. ovale and P. malariae generally cause a milder form of malaria. Malaria is typically diagnosed by the microscopic examination of blood using blood films, or with antigen-based rapid diagnostic tests. Methods that use the polymerase chain reaction to detect the parasite's DNA have been developed, but they are not widely used in areas where malaria is common, due to their cost and complexity. The risk of disease can be reduced by preventing mosquito bites through the use of mosquito nets and insect repellents or with mosquito-control measures such as spraying insecticides and draining standing water. Several medications are available to prevent malaria for travellers in areas where the disease is common. Occasional doses of the combination medication sulfadoxine/pyrimethamine are recommended in infants and after the first trimester of pregnancy in areas with high rates of malaria. As of 2023, two malaria vaccines have been endorsed by the World Health Organization. The recommended treatment for malaria is a combination of antimalarial medications that includes artemisinin. The second medication may be either mefloquine (noting first its potential toxicity and the possibility of death), lumefantrine, or sulfadoxine/pyrimethamine. Quinine, along with doxycycline, may be used if artemisinin is not available. In areas where the disease is common, malaria should be confirmed if possible before treatment is started, due to concerns of increasing drug resistance. Resistance among the parasites has developed to several antimalarial medications; for example, chloroquine-resistant P. falciparum has spread to most malarial areas, and resistance to artemisinin has become a problem in some parts of Southeast Asia. The disease is widespread in the tropical and subtropical regions that exist in a broad band around the equator. This includes much of sub-Saharan Africa, Asia, and Latin America. In 2022, some 249 million cases of malaria worldwide resulted in an estimated 608,000 deaths, with 80 percent of the deaths occurring in children aged five or under. Around 95% of the cases and deaths occurred in sub-Saharan Africa. Rates of disease decreased from 2010 to 2014, but increased from 2015 to 2021.
According to UNICEF, nearly every minute a child under five died of malaria in 2021, and "many of these deaths are preventable and treatable". Malaria is commonly associated with poverty and has a significant negative effect on economic development. In Africa, it is estimated to result in losses of US$12 billion a year due to increased healthcare costs, lost ability to work, and adverse effects on tourism. The malaria caseload in India fell by 69 per cent, from 6.4 million (64 lakh) cases in 2017 to two million (20 lakh) in 2023. Similarly, the estimated malaria deaths decreased from 11,100 to 3,500 (a 68 per cent decrease) over the same period. Etymology The term malaria originates from Medieval Italian mala aria, 'bad air', a part of miasma theory; the disease was formerly called ague or marsh fever due to its association with swamps and marshland. The term appeared in English at least as early as 1768. Malaria was once common in most of Europe and North America, where it is no longer endemic, though imported cases do occur. The scientific study of malaria is called malariology. Signs and symptoms Adults with malaria tend to experience chills and fever, classically in periodic intense bouts lasting around six hours followed by a period of sweating and fever relief, as well as headache, fatigue, abdominal discomfort, and muscle pain. Children tend to have more general symptoms: fever, cough, vomiting, and diarrhea. Initial manifestations of the disease, common to all malaria species, are similar to flu-like symptoms and can resemble other conditions such as sepsis, gastroenteritis, and viral diseases. The presentation may include headache, fever, shivering, joint pain, vomiting, hemolytic anemia, jaundice, hemoglobin in the urine, retinal damage, and convulsions. The classic symptom of malaria is paroxysm, a cyclical occurrence of sudden coldness followed by shivering and then fever and sweating, occurring every two days (tertian fever) in P. vivax and P. ovale infections, and every three days (quartan fever) for P. malariae. P. falciparum infection can cause recurrent fever every 36–48 hours, or a less pronounced and almost continuous fever. Symptoms typically begin 10–15 days after the initial mosquito bite, but can occur as late as several months after infection with some P. vivax strains. Travellers taking preventative malaria medications may develop symptoms once they stop taking the drugs. Severe malaria is usually caused by P. falciparum (often referred to as falciparum malaria). Symptoms of falciparum malaria arise 9–30 days after infection. Individuals with cerebral malaria frequently exhibit neurological symptoms, including abnormal posturing, nystagmus, conjugate gaze palsy (failure of the eyes to turn together in the same direction), opisthotonus, seizures, or coma. Diagnosis based on skin odor profiles Humans emanate a large range of smells. Studies have been conducted on how to detect human malaria infections through volatile compounds from the skin, suggesting that volatile biomarkers may be a reliable means of detecting infection, including asymptomatic cases. Skin odor profiles could offer an efficient way to diagnose, screen, and monitor infection across global populations in support of efforts to eradicate malaria. Research has relied predominantly on chemical explanations for the differences in attractiveness among humans with distinct odor profiles.
The presence of volatile compounds such as fatty acids and lactic acid helps explain why some individuals are more appealing to mosquitoes than others.
Volatile compounds
Kanika Khanna, a postdoctoral scholar at the University of California, Berkeley, studying the structural basis of membrane manipulation and cell-cell fusion by bacterial pathogens, discusses studies that determine how odor profiles can be used to diagnose the disease. In one study, samples of volatile compounds were collected from around 400 schoolchildren in Western Kenya to identify asymptomatic infections. These biomarkers have been established as a non-invasive way to detect malarial infections. In addition, these volatile compounds were strongly detected by mosquito antennae as an attractant, making the children more vulnerable to mosquito bites.
Fatty acids
Fatty acids, typically found in volatile emissions from the skin, have been identified as attractive compounds for mosquitoes. The fatty acids that produce body odor profiles originate from the metabolism of glycerol, lactic acid, amino acids, and lipids through the action of bacteria found within the skin. They create a "chemical signature" that allows mosquitoes to locate a potential host, humans in particular.
Lactic acid
Lactic acid, a naturally produced levorotatory isomer, has long been recognized as an attractant of mosquitoes. It is predominantly produced by eccrine sweat glands, which release large amounts of sweat onto the surface of the skin. Because of the high levels of lactic acid released from the human body, it has been hypothesized to represent a specific human host-recognition cue for anthropophilic (attracted to humans) mosquitoes.
Pungent foot odor
Most studies use human odors as stimuli to attract host-seeking mosquitoes, and have reported a strong and significant attractive effect. Foot odors have been demonstrated to be the most attractive to anthropophilic mosquitoes. Some of these studies used traps baited with nylon socks previously worn by human participants, which proved efficient in catching adult mosquitoes. Foot odors contain high numbers of volatile compounds, which in turn elicit an olfactory response from mosquitoes.
Complications
Malaria has several serious complications, including the development of respiratory distress, which occurs in up to 25% of adults and 40% of children with severe P. falciparum malaria. Possible causes include respiratory compensation of metabolic acidosis, noncardiogenic pulmonary oedema, concomitant pneumonia, and severe anaemia. Although rare in young children with severe malaria, acute respiratory distress syndrome occurs in 5–25% of adults and up to 29% of pregnant women. Coinfection of HIV with malaria increases mortality. Kidney failure is a feature of blackwater fever, where haemoglobin from lysed red blood cells leaks into the urine. Infection with P. falciparum may result in cerebral malaria, a form of severe malaria that involves encephalopathy. It is associated with retinal whitening, which may be a useful clinical sign in distinguishing malaria from other causes of fever. An enlarged spleen, enlarged liver or both of these, severe headache, low blood sugar, and haemoglobin in the urine with kidney failure may occur. Complications may include spontaneous bleeding, coagulopathy, and shock.
Cerebral malaria can bring about death within forty-eight hours of the first symptoms of the infection becoming evident. Malaria during pregnancy can cause stillbirths, infant mortality, miscarriage, and low birth weight, particularly in P. falciparum infection, but also with P. vivax.
Cause
Malaria is caused by infection with parasites in the genus Plasmodium. In humans, malaria is caused by six Plasmodium species: P. falciparum, P. malariae, P. ovale curtisi, P. ovale wallikeri, P. vivax and P. knowlesi. Among those infected, P. falciparum is the most common species identified (~75%) followed by P. vivax (~20%). Although P. falciparum traditionally accounts for the majority of deaths, recent evidence suggests that P. vivax malaria is associated with potentially life-threatening conditions about as often as P. falciparum infection. P. vivax is proportionally more common outside Africa. Some cases of human infection with several species of Plasmodium from higher apes have been documented, but except for P. knowlesi—a zoonotic species that causes malaria in macaques—these are mostly of limited public health importance. Anopheles mosquitoes become infected with Plasmodium by taking a blood meal from a previously infected person or animal. Parasites are then typically introduced into a new host by the bite of an infected Anopheles mosquito. Some of these inoculated parasites, called "sporozoites", probably remain in the skin, but others travel in the bloodstream to the liver, where they invade hepatocytes. They grow and divide in the liver for 2–10 days, with each infected hepatocyte eventually harboring up to 40,000 parasites. The infected hepatocytes break down, releasing these invasive Plasmodium cells, called "merozoites", into the bloodstream. In the blood, the merozoites rapidly invade individual red blood cells, replicating over 24–72 hours to form 16–32 new merozoites. The infected red blood cell lyses, and the new merozoites infect new red blood cells, resulting in a cycle that continuously amplifies the number of parasites in an infected person. Over rounds of this infection cycle, a small portion of parasites do not replicate, but instead develop into early sexual-stage parasites called male and female "gametocytes". These gametocytes develop in the bone marrow for 11 days, then return to the blood circulation to await uptake by the bite of another mosquito. Once inside a mosquito, the gametocytes undergo sexual reproduction, and eventually form daughter sporozoites that migrate to the mosquito's salivary glands to be injected into a new host when the mosquito bites. The liver infection causes no symptoms; all symptoms of malaria result from the infection of red blood cells. Symptoms develop once there are more than around 100,000 parasites per milliliter of blood. Many of the symptoms associated with severe malaria are caused by the tendency of P. falciparum to bind to blood vessel walls, resulting in damage to the affected vessels and surrounding tissue. Parasites sequestered in the blood vessels of the lung contribute to respiratory failure. In the brain, they contribute to coma. In the placenta they contribute to low birthweight and preterm labor, and increase the risk of abortion and stillbirth. The destruction of red blood cells during infection often results in anemia, exacerbated by reduced production of new red blood cells during infection. Only female mosquitoes feed on blood; male mosquitoes feed on plant nectar and do not transmit the disease.
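The figures above imply roughly geometric growth of the blood-stage parasite population. The following minimal sketch, in Python, illustrates that amplification under idealised assumptions (every merozoite successfully invades a red blood cell, a fixed brood size per cycle, and an assumed adult blood volume of about 5 litres; real infections are checked by immunity and red-cell availability, so this illustrates the arithmetic only):

# Idealised blood-stage amplification. Figures from the text:
# one infected hepatocyte can release up to ~40,000 merozoites,
# each replication cycle yields 16-32 merozoites per infected cell
# over 24-72 h, and symptoms appear above ~100,000 parasites/mL.
BLOOD_VOLUME_ML = 5_000            # assumed adult blood volume (~5 L)
SYMPTOM_THRESHOLD_PER_ML = 100_000

def cycles_until_symptomatic(initial_parasites, brood_size=16):
    """Count replication cycles until parasite density passes the
    symptomatic threshold, assuming perfect geometric growth."""
    parasites, cycles = initial_parasites, 0
    while parasites / BLOOD_VOLUME_ML < SYMPTOM_THRESHOLD_PER_ML:
        parasites *= brood_size
        cycles += 1
    return cycles, parasites

cycles, parasites = cycles_until_symptomatic(40_000)
print(f"{cycles} cycles (~{cycles * 2} days at 48 h/cycle): "
      f"{parasites / BLOOD_VOLUME_ML:,.0f} parasites/mL")

Even at the low end of the brood size, a handful of 48-hour cycles suffices to cross the symptomatic threshold, which is consistent with symptoms beginning within roughly two weeks of infection once the liver stage is added.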
Females of the mosquito genus Anopheles prefer to feed at night. They usually start searching for a meal at dusk, and continue through the night until they succeed. However, in Africa, due to the extensive use of bed nets, mosquitoes have begun to bite earlier in the evening, before people are protected by their nets. Malaria parasites can also be transmitted by blood transfusions, although this is rare.
Recurrent malaria
Symptoms of malaria can recur after varying symptom-free periods. Depending upon the cause, recurrence can be classified as either recrudescence, relapse, or reinfection. Recrudescence is when symptoms return after a symptom-free period because treatment failed to remove the blood-stage parasites. Relapse is when symptoms reappear after the parasites have been eliminated from the blood but have persisted as dormant hypnozoites in liver cells. Relapse commonly occurs between 8 and 24 weeks after the initial symptoms and is often seen in P. vivax and P. ovale infections. P. vivax malaria cases in temperate areas often involve overwintering by hypnozoites, with relapses beginning the year after the mosquito bite. Reinfection means that parasites were eliminated from the entire body but new parasites were then introduced. Reinfection cannot readily be distinguished from relapse and recrudescence, although recurrence of infection within two weeks of the end of treatment is typically attributed to treatment failure. People may develop some immunity when exposed to frequent infections.
Pathophysiology
Malaria infection develops via two phases: one that involves the liver (exoerythrocytic phase), and one that involves red blood cells, or erythrocytes (erythrocytic phase). When an infected mosquito pierces a person's skin to take a blood meal, sporozoites in the mosquito's saliva enter the bloodstream and migrate to the liver, where they infect hepatocytes, multiplying asexually and asymptomatically for a period of 8–30 days. After a potential dormant period in the liver, these organisms differentiate to yield thousands of merozoites, which, following rupture of their host cells, escape into the blood and infect red blood cells to begin the erythrocytic stage of the life cycle. The parasite escapes from the liver undetected by wrapping itself in the cell membrane of the infected host liver cell. The parasites multiply asexually within red blood cells, periodically breaking out to infect new ones. This repeated cycle results in synchronized waves of merozoites escaping and invading red blood cells, which cause the characteristic fever patterns. Some P. vivax sporozoites do not immediately develop into exoerythrocytic-phase merozoites, but instead produce hypnozoites that remain dormant for periods ranging from several months (7–10 months is typical) to several years. After a period of dormancy, they reactivate and produce merozoites. Hypnozoites are responsible for long incubation and late relapses in P. vivax infections, although their existence in P. ovale is uncertain. The parasite is relatively protected from attack by the body's immune system because for most of its human life cycle it resides within the liver and blood cells and is relatively invisible to immune surveillance. However, circulating infected blood cells are destroyed in the spleen. To avoid this fate, the P.
falciparum parasite displays adhesive proteins on the surface of the infected blood cells, causing the blood cells to stick to the walls of small blood vessels, thereby sequestering the parasite from passage through the general circulation and the spleen. The blockage of the microvasculature causes symptoms such as those in placental malaria. Sequestered red blood cells can breach the blood–brain barrier and cause cerebral malaria.
Genetic resistance
Because of the high levels of mortality and morbidity it causes—especially in the case of P. falciparum—malaria has placed the greatest selective pressure on the human genome in recent history. Several genetic factors provide some resistance to it, including sickle cell trait, thalassaemia traits, glucose-6-phosphate dehydrogenase deficiency, and the absence of Duffy antigens on red blood cells. The impact of sickle cell trait on malaria immunity illustrates some evolutionary trade-offs that have occurred because of endemic malaria. Sickle cell trait causes a change in the haemoglobin molecule in the blood. Normally, red blood cells have a very flexible, biconcave shape that allows them to move through narrow capillaries; however, when the modified haemoglobin S molecules are exposed to low amounts of oxygen, or crowd together due to dehydration, they can stick together forming strands that cause the cell to distort into a curved sickle shape. In these strands, the molecule is not as effective in taking up or releasing oxygen, and the cell is not flexible enough to circulate freely. In the early stages of malaria, the parasite can cause infected red cells to sickle, and so they are removed from circulation sooner. This reduces the frequency with which malaria parasites complete their life cycle in the cell. Individuals who are homozygous (with two copies of the abnormal haemoglobin beta allele) have sickle-cell anaemia, while those who are heterozygous (with one abnormal allele and one normal allele) experience resistance to malaria without severe anaemia. Although the shorter life expectancy for those with the homozygous condition would tend to disfavour the trait's survival, the trait is preserved in malaria-prone regions because of the benefits provided by the heterozygous form.
Liver dysfunction
Liver dysfunction as a result of malaria is uncommon and usually only occurs in those with another liver condition such as viral hepatitis or chronic liver disease. The syndrome is sometimes called malarial hepatitis. While it has been considered a rare occurrence, malarial hepatopathy has seen an increase, particularly in Southeast Asia and India. Liver compromise in people with malaria correlates with a greater likelihood of complications and death.
Effects on vaccine response
Malaria infection affects the immune responses following vaccination for various diseases. For example, malaria suppresses immune responses to polysaccharide vaccines. A potential solution is to give curative treatment before vaccination in areas where malaria is present.
Diagnosis
Due to the non-specific nature of malaria symptoms, diagnosis is typically suspected based on symptoms and travel history, then confirmed with a laboratory test to detect the presence of the parasite in the blood (parasitological test). In areas where malaria is common, the World Health Organization (WHO) recommends that clinicians suspect malaria in any person who reports having fevers, or who has a current temperature above 37.5 °C without any other obvious cause.
Malaria should be suspected in children with signs of anemia: pale palms or a laboratory test showing hemoglobin levels below 8 grams per deciliter of blood. In areas of the world with little to no malaria, the WHO recommends only testing people with possible exposure to malaria (typically travel to a malaria-endemic area) and unexplained fever. In sub-Saharan Africa, testing rates are low, with only about one in four (28%) children with a fever receiving medical advice or a rapid diagnostic test in 2021. There was a 10-percentage-point gap in testing between the richest and the poorest children (33% vs 23%). Additionally, a greater proportion of children in Eastern and Southern Africa (36%) were tested than in West and Central Africa (21%). According to UNICEF, 61% of children with a fever were taken for advice or treatment from a health facility or provider in 2021. Disparities are also observed by wealth, with an 18-percentage-point difference in care-seeking behaviour between children in the richest (71%) and the poorest (53%) households. Malaria is usually confirmed by the microscopic examination of blood films or by antigen-based rapid diagnostic tests (RDTs). Microscopy—i.e. examining Giemsa-stained blood with a light microscope—is the gold standard for malaria diagnosis. Microscopists typically examine both a "thick film" of blood, allowing them to scan many blood cells in a short time, and a "thin film" of blood, allowing them to clearly see individual parasites and identify the infecting Plasmodium species. Under typical field laboratory conditions, a microscopist can detect parasites when there are at least 100 parasites per microliter of blood, which is around the lower range of symptomatic infection. Microscopic diagnosis is relatively resource intensive, requiring trained personnel, specific equipment, electricity, and a consistent supply of microscopy slides and stains. In places where microscopy is unavailable, malaria is diagnosed with RDTs, rapid antigen tests that detect parasite proteins in a fingerstick blood sample. A variety of RDTs are commercially available, targeting the parasite proteins histidine-rich protein 2 (HRP2, which detects P. falciparum only), lactate dehydrogenase, or aldolase. The HRP2 test is widely used in Africa, where P. falciparum predominates. However, since HRP2 persists in the blood for up to five weeks after an infection is treated, an HRP2 test sometimes cannot distinguish whether someone currently has malaria or previously had it. Additionally, some P. falciparum parasites in the Amazon region lack the HRP2 gene, complicating detection. RDTs are fast and easily deployed to places without full diagnostic laboratories. However, they give considerably less information than microscopy, and sometimes vary in quality from producer to producer and lot to lot. Serological tests to detect antibodies against Plasmodium from the blood have been developed, but are not used for malaria diagnosis due to their relatively poor sensitivity and specificity. Highly sensitive nucleic acid amplification tests have been developed, but are not used clinically due to their relatively high cost, and poor specificity for active infections.
Classification
Malaria is classified as either "severe" or "uncomplicated" by the World Health Organization (WHO). It is deemed severe when any of the following criteria are present; otherwise it is considered uncomplicated.
Decreased consciousness
Significant weakness such that the person is unable to walk
Inability to feed
Two or more convulsions
Low blood pressure (less than 70 mmHg in adults and 50 mmHg in children)
Breathing problems
Circulatory shock
Kidney failure or hemoglobin in the urine
Bleeding problems, or hemoglobin less than 50 g/L (5 g/dL)
Pulmonary oedema
Blood glucose less than 2.2 mmol/L (40 mg/dL)
Acidosis or lactate levels of greater than 5 mmol/L
A parasite level in the blood of greater than 100,000 per microlitre (μL) in low-intensity transmission areas, or 250,000 per μL in high-intensity transmission areas
Cerebral malaria is defined as severe P. falciparum malaria presenting with neurological symptoms, including coma (with a Glasgow coma scale less than 11, or a Blantyre coma scale less than 3), or with a coma that lasts longer than 30 minutes after a seizure.
Prevention
Methods used to prevent malaria include medications, mosquito elimination and the prevention of bites. As of 2023, there are two malaria vaccines approved for use in children by the WHO: RTS,S and R21. The presence of malaria in an area requires a combination of high human population density, high Anopheles mosquito population density and high rates of transmission from humans to mosquitoes and from mosquitoes to humans. If any of these is lowered sufficiently, the parasite eventually disappears from that area, as happened in North America, Europe, and parts of the Middle East. However, unless the parasite is eliminated from the whole world, it could re-establish if conditions revert to a combination that favors the parasite's reproduction. Furthermore, the cost per person of eliminating Anopheles mosquitoes rises with decreasing population density, making it economically unfeasible in some areas. Prevention of malaria may be more cost-effective than treatment of the disease in the long run, but the initial costs required are out of reach of many of the world's poorest people. There is a wide difference in the costs of control (i.e. maintenance of low endemicity) and elimination programs between countries. For example, in China—whose government in 2010 announced a strategy to pursue malaria elimination in the Chinese provinces—the required investment is a small proportion of public expenditure on health. In contrast, a similar programme in Tanzania would cost an estimated one-fifth of the public health budget. In 2021, the World Health Organization confirmed that China had eliminated malaria. In 2023, it confirmed that Azerbaijan, Tajikistan, and Belize had eliminated malaria. In areas where malaria is common, children under five years old often have anaemia, which is sometimes due to malaria. Giving children with anaemia in these areas preventive antimalarial medication improves red blood cell levels slightly but does not affect the risk of death or need for hospitalisation.
Mosquito control
Vector control refers to methods used to decrease malaria by reducing the levels of transmission by mosquitoes. For individual protection, the most effective insect repellents are based on DEET or picaridin. However, there is insufficient evidence that mosquito repellents can prevent malaria infection. Insecticide-treated nets (ITNs) and indoor residual spraying (IRS) are effective, have been commonly used to prevent malaria, and their use has contributed significantly to the decrease in malaria in the 21st century.
ITNs and IRS may not be sufficient to eliminate the disease, as these interventions depend on how many people use nets, how many gaps in insecticide coverage there are (low-coverage areas), whether people are protected when outside the home, and whether mosquitoes resistant to the insecticides are increasing. Modifications to people's houses to prevent mosquito exposure may be an important long-term prevention measure.
Insecticide-treated nets
Mosquito nets help keep mosquitoes away from people and reduce infection rates and transmission of malaria. Nets are not a perfect barrier and are often treated with an insecticide designed to kill the mosquito before it has time to find a way past the net. Insecticide-treated nets (ITNs) are estimated to be twice as effective as untreated nets and offer greater than 70% protection compared with no net. Between 2000 and 2008, the use of ITNs saved the lives of an estimated 250,000 infants in Sub-Saharan Africa. According to UNICEF, only 36% of households had sufficient ITNs for all household members in 2019. In 2000, 1.7 million (1.8%) African children living in areas of the world where malaria is common were protected by an ITN. That number increased to 20.3 million (18.5%) African children using ITNs in 2007, leaving 89.6 million children unprotected, and to 68% of African children using mosquito nets in 2015. The percentage of children sleeping under ITNs in sub-Saharan Africa increased from less than 40% in 2011 to over 50% in 2021. Most nets are impregnated with pyrethroids, a class of insecticides with low toxicity. They are most effective when used from dusk to dawn. It is recommended to hang a large "bed net" above the center of a bed and either tuck the edges under the mattress or make sure it is large enough that it touches the ground. ITNs improve pregnancy outcomes in malaria-endemic regions in Africa, but more data are needed in Asia and Latin America. In areas of high malaria resistance, piperonyl butoxide (PBO) combined with pyrethroids in mosquito netting is effective in reducing malaria infection rates. Questions remain concerning the durability of PBO on nets, as the impact on mosquito mortality was not sustained after twenty washes in experimental trials. UNICEF notes that the use of insecticide-treated nets has been increased since 2000 through accelerated production, procurement and delivery, stating that "over 2.5 billion ITNs have been distributed globally since 2004, with 87% (2.2 billion) distributed in sub-Saharan Africa. In 2021, manufacturers delivered about 220 million ITNs to malaria endemic countries, a decrease of 9 million ITNs compared with 2020 and 33 million less than were delivered in 2019". As of 2021, 66% of households in sub-Saharan Africa had ITNs, with figures "ranging from 31 per cent in Angola in 2016 to approximately 97 per cent in Guinea-Bissau in 2019". Slightly more than half of the households with an ITN had enough of them to protect all members of the household, however.
Indoor residual spraying
Indoor residual spraying is the spraying of insecticides on the walls inside a home. After feeding, many mosquitoes rest on a nearby surface while digesting the bloodmeal, so if the walls of houses have been coated with insecticides, the resting mosquitoes can be killed before they can bite another person and transfer the malaria parasite. As of 2006, the World Health Organization recommends 12 insecticides in IRS operations, including DDT and the pyrethroids cyfluthrin and deltamethrin.
This public health use of small amounts of DDT is permitted under the Stockholm Convention, which prohibits its agricultural use. One problem with all forms of IRS is insecticide resistance. Mosquitoes affected by IRS tend to rest and live indoors, and due to the irritation caused by spraying, their descendants tend to rest and live outdoors, meaning that they are less affected by the IRS. In communities using insecticide-treated nets, adding indoor residual spraying with 'non-pyrethroid-like' insecticides was associated with reductions in malaria. In contrast, adding indoor residual spraying with 'pyrethroid-like' insecticides did not result in a detectable additional benefit in communities using insecticide-treated nets.
Housing modifications
Housing is a risk factor for malaria, and modifying the house as a prevention measure may be a sustainable strategy that does not rely on the effectiveness of insecticides such as pyrethroids. Features of the physical environment inside and outside the home that may affect the density of mosquitoes are important considerations. Examples include how close the home is to mosquito breeding sites, drainage and water supply near the home, availability of mosquito resting sites (vegetation around the home), proximity to livestock and domestic animals, and physical improvements or modifications to the design of the home to prevent mosquitoes from entering, such as window screens. In addition to installing window screens, house screening measures include screening ceilings, doors, and eaves. In 2021, the World Health Organization's (WHO) Guideline Development Group conditionally recommended screening houses in this manner to reduce malaria transmission. However, the WHO does point out that there are local considerations that need to be addressed when incorporating these techniques. These considerations include the delivery method, maintenance, house design, feasibility, resource needs, and scalability. Several studies have suggested that screening houses can have a significant impact on malaria transmission. Beyond the protective barrier screening provides, it also does not call for daily behavioral changes in the household. Screening eaves can also have a community-level protective effect, ultimately reducing mosquito-biting densities in neighboring houses that do not have this intervention in place. In some cases, studies have used insecticide-treated (e.g., transfluthrin) or untreated netting to deter mosquito entry. One notable intervention is the In2Care BV EaveTube. In 2021, In2Care BV received funding from the United States Agency for International Development to develop a ventilation tube that would be installed in housing walls. When mosquitoes approach households, the goal is for them to encounter these EaveTubes instead. Inside these EaveTubes is insecticide-treated netting that is lethal to insecticide-resistant mosquitoes. This approach to mosquito control is called the Lethal House Lure method. The WHO is currently evaluating the efficacy of this product for widespread use.
Mass drug administration
Mass drug administration (MDA) involves the administration of drugs to the entire population of an area regardless of disease status. A 2021 Cochrane review of community administration of ivermectin found only low-quality evidence to date, showing no significant impact on the incidence of malaria transmission.
Mosquito-targeted drug delivery
One potential way to reduce the burden of malaria is to target the infection in mosquitoes, before it enters the mammalian host (during sporogony). Drugs whose toxicity profiles are unacceptable in humans may still be usable for this purpose. For example, aminoquinoline derivatives show toxicity in humans, but this has not been shown in mosquitoes. Primaquine is particularly effective against Plasmodium gametocytes. Likewise, pyrroloquinazolinediamines show unacceptable toxicity in mammals, but it is unknown whether this is the case in mosquitoes. Pyronaridine, thiostrepton, and pyrimethamine have been shown to dramatically reduce ookinete formation in P. berghei, while artefenomel, NPC-1161B, and tert-butyl isoquine reduce exflagellation in P. falciparum.
Other mosquito control methods
People have tried a number of other methods to reduce mosquito bites and slow the spread of malaria. Efforts to decrease mosquito larvae by decreasing the availability of open water where they develop, or by adding substances to decrease their development, are effective in some locations. Electronic mosquito repellent devices, which make very high-frequency sounds that are supposed to keep female mosquitoes away, have no supporting evidence of effectiveness. There is low-certainty evidence that fogging may have an effect on malaria transmission. Larviciding by hand delivery of chemical or microbial insecticides into water bodies containing low larval distribution may reduce malaria transmission. There is insufficient evidence to determine whether larvivorous fish can decrease mosquito density and transmission in the area.
Medications
There are a number of medications that can help prevent or interrupt malaria in travellers to places where infection is common. Many of these medications are also used in treatment. In places where Plasmodium is resistant to one or more medications, three medications—mefloquine, doxycycline, or the combination of atovaquone/proguanil (Malarone)—are frequently used for prevention. Doxycycline and atovaquone/proguanil are better tolerated, while mefloquine is taken only once a week. Areas of the world with chloroquine-sensitive malaria are uncommon. Antimalarial mass drug administration to an entire population at the same time may reduce the risk of contracting malaria in the population; however, the effectiveness of mass drug administration may vary depending on the prevalence of malaria in the area. Other factors, such as drug administration combined with other protective measures such as mosquito control, the proportion of people treated in the area, and the risk of reinfection with malaria, may play a role in the effectiveness of mass drug treatment approaches. The protective effect does not begin immediately, and people visiting areas where malaria exists usually start taking the drugs one to two weeks before they arrive, and continue taking them for four weeks after leaving (except for atovaquone/proguanil, which only needs to be started two days before and continued for seven days afterward). The use of preventive drugs is often not practical for those who live in areas where malaria exists, and they are usually given only to pregnant women and short-term visitors. This is due to the cost of the drugs, side effects from long-term use, and the difficulty in obtaining antimalarial drugs outside of wealthy nations.
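As a rough illustration of the dosing windows just described, the Python sketch below computes start and stop dates for a course of preventive medication around a trip. Only the atovaquone/proguanil lead and tail times (2 days before, 7 days after) are stated exactly in the text; the windows assigned to the other drugs, and the function and table names, are assumptions made for illustration. This is a scheduling sketch, not medical guidance:

from datetime import date, timedelta

# (days to start before arrival, days to continue after departure).
# Only the atovaquone/proguanil figures come from the text; the others
# apply the generic 1-2 week lead and 4-week tail as assumptions.
DOSING_WINDOWS = {
    "mefloquine": (14, 28),
    "doxycycline": (7, 28),
    "atovaquone/proguanil": (2, 7),
}

def prophylaxis_schedule(drug, arrival, departure):
    """Return (first dose date, last dose date) for a given trip."""
    lead, tail = DOSING_WINDOWS[drug]
    return arrival - timedelta(days=lead), departure + timedelta(days=tail)

start, stop = prophylaxis_schedule("atovaquone/proguanil",
                                   date(2024, 6, 1), date(2024, 6, 14))
print(f"first dose {start}, last dose {stop}")  # 2024-05-30 to 2024-06-21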
During pregnancy, medication to prevent malaria has been found to improve the weight of the baby at birth and decrease the risk of anaemia in the mother. The use of preventive drugs where malaria-bearing mosquitoes are present may encourage the development of partial resistance. Giving antimalarial drugs to infants through intermittent preventive therapy can reduce the risk of malaria infection, hospital admission, and anaemia. Mefloquine is more effective than sulfadoxine-pyrimethamine in preventing malaria for HIV-negative pregnant women. Cotrimoxazole is effective in preventing malaria infection and reduces the risk of anaemia in HIV-positive women. Giving dihydroartemisinin/piperaquine and mefloquine in addition to daily cotrimoxazole to HIV-positive pregnant women seems to be more effective in preventing malaria infection than cotrimoxazole alone. Prompt treatment of confirmed cases with artemisinin-based combination therapies (ACTs) may also reduce transmission.
Research on malaria vaccines
Malaria vaccines have been another goal of research. The first promising studies demonstrating the potential for a malaria vaccine were performed in 1967 by immunising mice with live, radiation-attenuated sporozoites, which provided significant protection to the mice upon subsequent injection with normal, viable sporozoites. Since the 1970s, there has been considerable progress in developing similar vaccination strategies for humans. In 2013, the WHO and the malaria vaccine funders group set a goal to develop vaccines designed to interrupt malaria transmission, with malaria eradication as the long-term goal. The first vaccine, called RTS,S, was approved by European regulators in 2015. As of 2023, two malaria vaccines have been licensed for use. Other approaches to combat malaria may require investing more in research and greater primary health care. Continuing surveillance will also be important to prevent the return of malaria in countries where the disease has been eliminated. As of 2019, RTS,S is undergoing pilot trials in three sub-Saharan African countries—Ghana, Kenya and Malawi—as part of the WHO's Malaria Vaccine Implementation Programme (MVIP). Immunity (or, more accurately, tolerance) to P. falciparum malaria does occur naturally, but only in response to years of repeated infection. An individual can be protected from a P. falciparum infection if they receive about a thousand bites from mosquitoes that carry a version of the parasite rendered non-infective by a dose of X-ray irradiation. The highly polymorphic nature of many P. falciparum proteins results in significant challenges to vaccine design. Vaccine candidates that target antigens on gametes, zygotes, or ookinetes in the mosquito midgut aim to block the transmission of malaria. These transmission-blocking vaccines induce antibodies in the human blood; when a mosquito takes a blood meal from a protected individual, these antibodies prevent the parasite from completing its development in the mosquito. Other vaccine candidates, targeting the blood-stage of the parasite's life cycle, have been inadequate on their own. For example, SPf66 was tested extensively in areas where the disease was common in the 1990s, but trials showed it to be insufficiently effective. As of 2020, the RTS,S vaccine has been shown to reduce the risk of malaria by about 40% in children in Africa. A preprint study of the R21 vaccine has shown 77% vaccine efficacy.
In 2021, researchers from the University of Oxford reported findings from a Phase IIb trial of a candidate malaria vaccine, R21/Matrix-M, which demonstrated efficacy of 77% over 12 months of follow-up. This vaccine is the first to meet the World Health Organization's Malaria Vaccine Technology Roadmap goal of a vaccine with at least 75% efficacy. Germany-based BioNTech SE is developing an mRNA-based malaria vaccine, BNT165, which initiated a Phase 1 study (clinicaltrials.gov identifier: NCT05581641) in December 2022. The vaccine, based on the circumsporozoite protein (CSP), is being tested in adults aged 18–55 years at three dose levels to select a safe and tolerable dose for a three-dose schedule. Unlike GSK's RTS,S (AS01) and the Serum Institute of India's R21/Matrix-M, BNT165 is being studied in adult age groups, meaning it could be developed for Western travelers as well as those living in endemic countries. For the traveler indication, a recent commercial assessment forecast potential probability-of-success-adjusted gross revenues for BNT165 of $479 million in 2030, five years after launch.
Others
Community participation and health education strategies promoting awareness of malaria and the importance of control measures have been successfully used to reduce the incidence of malaria in some areas of the developing world. Recognising the disease in the early stages can prevent it from becoming fatal. Education can also inform people to cover over areas of stagnant, still water, such as water tanks, which are ideal breeding grounds for the mosquitoes that transmit the parasite, thus cutting down the risk of transmission between people. This is generally done in urban areas where there are large centers of population in a confined space and transmission would be most likely. Intermittent preventive therapy is another intervention that has been used successfully to control malaria in pregnant women and infants, and in preschool children where transmission is seasonal.
Treatment
Malaria is treated with antimalarial medications; the ones used depend on the type and severity of the disease. While medications against fever are commonly used, their effects on outcomes are not clear. Providing free antimalarial drugs to households may reduce childhood deaths when used appropriately. Programmes which presumptively treat all causes of fever with antimalarial drugs may lead to overuse of antimalarials and undertreat other causes of fever. Nevertheless, the use of malaria rapid-diagnostic kits can help to reduce the over-usage of antimalarials.
Uncomplicated malaria
Simple or uncomplicated malaria may be treated with oral medications. Artemisinin drugs are effective and safe in treating uncomplicated malaria. Artemisinin in combination with other antimalarials (known as artemisinin-combination therapy, or ACT) is about 90% effective when used to treat uncomplicated malaria. The most effective treatment for P. falciparum infection is the use of ACT, which decreases resistance to any single drug component. Artemether-lumefantrine in a six-dose regimen is more effective than a four-dose regimen, or than regimens not containing artemisinin derivatives, in treating falciparum malaria. Another recommended combination is dihydroartemisinin and piperaquine. Artemisinin-naphthoquine combination therapy showed promising results in treating falciparum malaria, but more research is needed to establish its efficacy as a reliable treatment.
Artesunate plus mefloquine performs better than mefloquine alone in treating uncomplicated falciparum malaria in low-transmission settings. Atovaquone-proguanil is effective against uncomplicated falciparum malaria with a possible failure rate of 5% to 10%; the addition of artesunate may reduce the failure rate. Azithromycin monotherapy or combination therapy has not shown effectiveness in treating Plasmodium falciparum or Plasmodium vivax malaria. Amodiaquine plus sulfadoxine-pyrimethamine may achieve fewer treatment failures than sulfadoxine-pyrimethamine alone in uncomplicated falciparum malaria. There are insufficient data on chlorproguanil-dapsone in treating uncomplicated falciparum malaria. The addition of primaquine to artemisinin-based combination therapy for falciparum malaria reduces its transmission at day 3–4 and day 8 of infection. Sulfadoxine-pyrimethamine plus artesunate is better than sulfadoxine-pyrimethamine plus amodiaquine in controlling treatment failure at day 28. However, the latter is better than the former in reducing gametocytes in blood at day 7. Infection with P. vivax, P. ovale or P. malariae usually does not require hospitalisation. Treatment of P. vivax malaria requires both elimination of the parasite in the blood, with chloroquine or with artemisinin-based combination therapy, and clearance of parasites from the liver with an 8-aminoquinoline agent such as primaquine or tafenoquine. These two drugs act against blood stages as well, although the extent to which they do so is still under investigation. To treat malaria during pregnancy, the WHO recommends the use of quinine plus clindamycin early in the pregnancy (1st trimester), and ACT in later stages (2nd and 3rd trimesters). There are limited safety data on the antimalarial drugs in pregnancy.
Severe and complicated malaria
Cases of severe and complicated malaria are almost always caused by infection with P. falciparum. The other species usually cause only febrile disease. Severe and complicated malaria cases are medical emergencies, since mortality rates are high (10% to 50%). Recommended treatment for severe malaria is the intravenous use of antimalarial drugs. For severe malaria, parenteral artesunate was superior to quinine in both children and adults. In another systematic review, artemisinin derivatives (artemether and arteether) were as efficacious as quinine in the treatment of cerebral malaria in children. Treatment of severe malaria involves supportive measures that are best done in a critical care unit. This includes the management of high fevers and the seizures that may result from them. It also includes monitoring for poor breathing effort, low blood sugar, and low blood potassium. Artemisinin derivatives have the same or better efficacy than quinine in preventing deaths in severe or complicated malaria. A quinine loading dose helps to shorten the duration of fever and increases parasite clearance from the body. There is no difference in effectiveness when using intrarectal quinine compared to intravenous or intramuscular quinine in treating uncomplicated/complicated falciparum malaria. There is insufficient evidence for intramuscular arteether to treat severe malaria. The provision of rectal artesunate before transfer to hospital may reduce the rate of death for children with severe malaria.
In children with malaria and concomitant hypoglycaemia, sublingual administration of glucose appears to result in better increases in blood sugar after 20 minutes when compared to oral administration, based on very limited data. Cerebral malaria is the form of severe and complicated malaria with the worst neurological symptoms. There are insufficient data on whether osmotic agents such as mannitol or urea are effective in treating cerebral malaria. Routine phenobarbitone in cerebral malaria is associated with fewer convulsions but possibly more deaths. There is no evidence that steroids would bring treatment benefits for cerebral malaria.
Managing cerebral malaria
Cerebral malaria usually renders a patient comatose. If the cause of the coma is in doubt, testing for other locally prevalent causes of encephalopathy (bacterial, viral or fungal infection) should be carried out. In areas where there is a high prevalence of malaria infection (e.g. tropical regions), treatment can start without testing first. When cerebral malaria is confirmed, the following can be done:
People who are in a coma should be given meticulous nursing care (monitoring vital signs, turning the patient every 2 hours, avoiding letting the patient lie in a wet bed, and so on).
A sterile urethral catheter should be inserted to help with urination.
A sterile nasogastric tube should be inserted to aspirate stomach contents.
In the event of convulsions, a slow intravenous injection of benzodiazepine is administered.
There is insufficient evidence to show that blood transfusion is useful in either reducing deaths for children with severe anaemia or in improving their haematocrit in one month. There is insufficient evidence that iron-chelating agents such as deferoxamine and deferiprone improve outcomes of those with falciparum malaria infection.
Monoclonal antibodies
A 2022 clinical trial showed that the monoclonal antibody L9LS offers protection against malaria. It binds the Plasmodium falciparum circumsporozoite protein (CSP-1), which is essential to the disease, and renders it ineffective.
Resistance
Drug resistance poses a growing problem in 21st-century malaria treatment. In the 2000s, malaria with partial resistance to artemisinins emerged in Southeast Asia. Resistance is now common against all classes of antimalarial drugs apart from artemisinins. Treatment of resistant strains has become increasingly dependent on this class of drugs. The cost of artemisinins limits their use in the developing world. Malaria strains found on the Cambodia–Thailand border are resistant to combination therapies that include artemisinins, and may therefore be untreatable. Exposure of the parasite population to artemisinin monotherapies in subtherapeutic doses for over 30 years, and the availability of substandard artemisinins, likely drove the selection of the resistant phenotype. Resistance to artemisinin has been detected in Cambodia, Myanmar, Thailand, and Vietnam, and there has been emerging resistance in Laos. Resistance to the combination of artemisinin and piperaquine was first detected in 2013 in Cambodia, and by 2019 had spread across Cambodia and into Laos, Thailand and Vietnam (with up to 80 percent of malaria parasites resistant in some regions). There is insufficient evidence that unit-packaged antimalarial drugs prevent treatment failures of malaria infection. However, if supported by training of healthcare providers and patient information, unit packaging improves compliance among those receiving treatment.
Prognosis
When properly treated, people with malaria can usually expect a complete recovery. However, severe malaria can progress extremely rapidly and cause death within hours or days. In the most severe cases of the disease, fatality rates can reach 20%, even with intensive care and treatment. Over the longer term, developmental impairments have been documented in children who have had episodes of severe malaria. Chronic infection without severe disease can occur in an immune-deficiency syndrome associated with a decreased responsiveness to Salmonella bacteria and the Epstein–Barr virus. During childhood, malaria causes anaemia during a period of rapid brain development, and can also cause direct brain damage through cerebral malaria. Some survivors of cerebral malaria have an increased risk of neurological and cognitive deficits, behavioural disorders, and epilepsy. Malaria prophylaxis was shown to improve cognitive function and school performance in clinical trials when compared to placebo groups.
Epidemiology
The WHO estimates that in 2021 there were 247 million total cases of malaria resulting in 619,000 deaths. Children under five years old are the most affected, accounting for 67% of malaria deaths worldwide in 2019. About 125 million pregnant women are at risk of infection each year; in Sub-Saharan Africa, maternal malaria is associated with up to 200,000 estimated infant deaths yearly. Since 2015, the WHO European Region has been free of malaria; the last country in the region to report an indigenous malaria case was Tajikistan, in 2014. There are about 1300–1500 malaria cases per year in the United States. The United States eradicated malaria as a major public health concern in 1951, though small outbreaks persist. Locally acquired mosquito-borne malaria occurred in the United States in 2003, when eight cases of locally acquired P. vivax malaria were identified in Florida, and again in 2023, with four cases in Florida in May, one in Texas, and one in Maryland in August. About 900 people died from the disease in Europe between 1993 and 2003. Both the global incidence of disease and the resulting mortality have declined in recent years. According to the WHO and UNICEF, deaths attributable to malaria in 2015 were reduced by 60% from a 2000 estimate of 985,000, largely due to the widespread use of insecticide-treated nets and artemisinin-based combination therapies. Between 2000 and 2019, malaria mortality rates among all ages halved from about 30 to 13 per 100,000 population at risk. During this period, malaria deaths among children under five also declined by nearly half (47%), from 781,000 in 2000 to 416,000 in 2019. Malaria is presently endemic in a broad band around the equator, in areas of the Americas, many parts of Asia, and much of Africa; 85–90% of malaria fatalities occur in Sub-Saharan Africa. An estimate for 2009 reported that the countries with the highest death rate per 100,000 of population were Ivory Coast (86.15), Angola (56.93) and Burkina Faso (50.66). A 2010 estimate indicated the deadliest countries per population were Burkina Faso, Mozambique and Mali. The Malaria Atlas Project aims to map global levels of malaria, providing a way to determine the global spatial limits of the disease and to assess disease burden. This effort led to the publication of a map of P. falciparum endemicity in 2010 and an update in 2019. As of 2021, 84 countries have endemic malaria.
The geographic distribution of malaria within large regions is complex, and malaria-afflicted and malaria-free areas are often found close to each other. Malaria is prevalent in tropical and subtropical regions because of rainfall, consistently high temperatures and high humidity, along with stagnant waters in which mosquito larvae readily mature, providing them with the environment they need for continuous breeding. In drier areas, outbreaks of malaria have been predicted with reasonable accuracy by mapping rainfall. Malaria is more common in rural areas than in cities. For example, several cities in the Greater Mekong Subregion of Southeast Asia are essentially malaria-free, but the disease is prevalent in many rural regions, including along international borders and forest fringes. In contrast, malaria in Africa is present in both rural and urban areas, though the risk is lower in the larger cities.
Climate change
Climate change is likely to affect malaria transmission, but the degree of effect and the areas affected are uncertain. Greater rainfall in certain areas of India, and following an El Niño event, is associated with increased mosquito numbers. Since 1900 there has been substantial change in temperature and rainfall over Africa. However, the factors that determine how rainfall translates into water for mosquito breeding are complex, incorporating, for example, the extent to which it is absorbed into soil and vegetation, and rates of runoff and evaporation. Recent research has provided a more in-depth picture of conditions across Africa, combining a malaria climatic suitability model with a continental-scale model representing real-world hydrological processes.
History
Although the parasite responsible for P. falciparum malaria has been in existence for 50,000–100,000 years, the population size of the parasite did not increase until about 10,000 years ago, concurrently with advances in agriculture and the development of human settlements. Close relatives of the human malaria parasites remain common in chimpanzees. Some evidence suggests that P. falciparum malaria may have originated in gorillas.
Lunar phase
A lunar phase or Moon phase is the apparent shape of the Moon's directly sunlit portion as viewed from the Earth. Because the Moon is tidally locked with the Earth, the same hemisphere is always facing the Earth. In common usage, the four major phases are the new moon, the first quarter, the full moon and the last quarter; the four minor phases are waxing crescent, waxing gibbous, waning gibbous, and waning crescent. A lunar month is the time between successive recurrences of the same phase: due to the eccentricity of the Moon's orbit, this duration is not perfectly constant but averages about 29.5 days. The appearance of the Moon (its phase) gradually changes over a lunar month as the relative orbital positions of the Moon around Earth, and Earth around the Sun, shift. The visible side of the Moon is sunlit to varying extents, depending on the position of the Moon in its orbit, with the sunlit portion varying from 0% (at new moon) to nearly 100% (at full moon).
Phases of the Moon
There are four principal (primary, or major) lunar phases: the new moon, first quarter, full moon, and last quarter (also known as third or final quarter), when the Moon's ecliptic longitude is at an angle to the Sun (as viewed from the center of the Earth) of 0°, 90°, 180°, and 270° respectively. Each of these phases appears at slightly different times at different locations on Earth, and tabulated times are therefore always geocentric (calculated for the Earth's center). Between the principal phases are intermediate phases, during which the apparent shape of the illuminated Moon is either crescent or gibbous. On average, the intermediate phases last one-quarter of a synodic month, or 7.38 days. The term waxing is used for an intermediate phase when the Moon's apparent shape is thickening, from new to full moon, and waning when the shape is thinning. The duration from full moon to new moon (or new moon to full moon) is not constant, varying around an average of half a synodic month (about 14.77 days). Due to lunar motion relative to the meridian and the ecliptic, in Earth's northern hemisphere:
A new moon appears highest at the summer solstice and lowest at the winter solstice.
A first-quarter moon appears highest at the spring equinox and lowest at the autumn equinox.
A full moon appears highest at the winter solstice and lowest at the summer solstice.
A last-quarter moon appears highest at the autumn equinox and lowest at the spring equinox.
Non-Western cultures may use a different number of lunar phases; for example, traditional Hawaiian culture has a total of 30 phases (one per day).
Lunar libration
As seen from Earth, the Moon's eccentric orbit makes it both change slightly in apparent size and be seen from slightly different angles. The effect is subtle to the naked eye from night to night, yet somewhat obvious in time-lapse photography. Lunar libration causes part of the far side of the Moon to be visible to a terrestrial observer some of the time. Because of this, around 59% of the Moon's surface has been imaged from the ground.
Principal and intermediate phases of the Moon
Waxing and waning
When the Sun and Moon are aligned on the same side of the Earth (in conjunction), the Moon is "new", and the side of the Moon facing Earth is not illuminated by the Sun. As the Moon waxes (the amount of illuminated surface as seen from Earth increases), the lunar phases progress through the new moon, crescent moon, first-quarter moon, gibbous moon, and full moon phases.
The Moon then wanes as it passes through the gibbous moon, third-quarter moon, and crescent moon phases, before returning to new moon. The terms old moon and new moon are not interchangeable. The "old moon" is a waning sliver (which eventually becomes undetectable to the naked eye) until the moment it aligns with the Sun and begins to wax, at which point it becomes new again. Half moon is often used to mean the first- and third-quarter moons, while the term quarter refers to the extent of the Moon's cycle around the Earth, not its shape. When an illuminated hemisphere is viewed from a certain angle, the portion of the illuminated area that is visible will have a two-dimensional shape as defined by the intersection of an ellipse and circle (in which the ellipse's major axis coincides with the circle's diameter). If the half-ellipse is convex with respect to the half-circle, then the shape will be gibbous (bulging outwards), whereas if the half-ellipse is concave with respect to the half-circle, then the shape will be a crescent. When a crescent moon occurs, the phenomenon of earthshine may be apparent, where the night side of the Moon dimly reflects indirect sunlight reflected from Earth.
Orientation by latitude
In the Northern Hemisphere, if the left side of the Moon is dark, then the bright part is thickening, and the Moon is described as waxing (shifting toward full moon). If the right side of the Moon is dark, then the bright part is thinning, and the Moon is described as waning (past full and shifting toward new moon). Assuming that the viewer is in the Northern Hemisphere, the right side of the Moon is the part that is always waxing. (That is, if the right side is dark, the Moon is becoming darker; if the right side is lit, the Moon is getting brighter.) In the Southern Hemisphere, the Moon is observed from a perspective inverted, or rotated 180°, to that of the Northern and to all of the images in this article, so that the opposite sides appear to wax or wane. Closer to the Equator, the lunar terminator will appear horizontal during the morning and evening. Since the above descriptions of the lunar phases only apply at middle or high latitudes, observers moving towards the tropics from northern or southern latitudes will see the Moon rotated anti-clockwise or clockwise with respect to the images in this article. The lunar crescent can open upward or downward, with the "horns" of the crescent pointing up or down, respectively. When the Sun appears above the Moon in the sky, the crescent opens downward; when the Moon is above the Sun, the crescent opens upward. The crescent Moon is most clearly and brightly visible when the Sun is below the horizon, which implies that the Moon must be above the Sun, and the crescent must open upward. This is therefore the orientation in which the crescent Moon is most often seen from the tropics. The waxing and waning crescents look very similar. The waxing crescent appears in the western sky in the evening, and the waning crescent in the eastern sky in the morning.
Earthshine
When the Moon (seen from Earth) is a thin crescent, Earth (as viewed from the Moon) is almost fully lit by the Sun. Often, the dark side of the Moon is dimly illuminated by indirect sunlight reflected from Earth, but is bright enough to be easily visible from Earth. This phenomenon is called earthshine, sometimes picturesquely described as "the old moon in the new moon's arms" or "the new moon in the old moon's arms".
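The crescent-and-gibbous geometry described above also fixes how much of the disc appears lit. A standard result of this geometry (stated here as an outside assumption, since the article does not derive it) is that the illuminated fraction is k = (1 + cos i) / 2, where i is the phase angle, i.e. the Sun–Moon–Earth angle. A minimal Python sketch:

import math

def illuminated_fraction(phase_angle_deg):
    """Fraction of the Moon's disc that appears lit:
    k = (1 + cos i) / 2, with i the Sun-Moon-Earth phase angle.
    i = 0 deg is full moon (k = 1); i = 180 deg is new moon (k = 0)."""
    return (1 + math.cos(math.radians(phase_angle_deg))) / 2

for i in (0, 90, 135, 180):
    print(f"phase angle {i:3d} deg -> {illuminated_fraction(i):.2f} of disc lit")

At i = 90° the formula gives exactly one half, matching the first- and last-quarter phases.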
Timekeeping Archaeologists have reconstructed methods of timekeeping that go back to prehistoric times, at least as old as the Neolithic. The natural units for timekeeping used by most historical societies are the day, the solar year and the lunation. The first crescent of the new moon provides a clear and regular marker in time, and pure lunar calendars (such as the Islamic Hijri calendar) rely entirely on this marker. The fact, however, that a year of twelve lunar months is ten or eleven days shorter than the solar year means that a lunar calendar drifts out of step with the seasons. Lunisolar calendars resolve this issue by including a year of thirteen lunar months every few years, or by restarting the count at the first new (or full) moon after the winter solstice. The Sumerian calendar is the first recorded to have used the former method; the Chinese calendar uses the latter, though it delays the start of its year until the second or even third new moon after the solstice. The Hindu calendar, also a lunisolar calendar, further divides the month into two fourteen-day periods that mark the waxing moon and the waning moon. The ancient Roman calendar was broadly a lunisolar one; by decree of Julius Caesar in the first century BCE, Rome changed to a solar calendar of twelve months, each of a fixed number of days except in a leap year. This Julian calendar, as slightly revised in 1582 to correct the leap-year rule, became the Gregorian calendar, which is now almost exclusively the civil calendar in use worldwide. Calculating phase Each of the four intermediate phases lasts approximately seven days (7.38 days on average), but varies ±11.25% due to lunar apogee and perigee. The number of days counted from the time of the new moon is the Moon's "age". Each complete cycle of phases is called a "lunation". The approximate age of the Moon, and hence the approximate phase, can be calculated for any date by calculating the number of days since a known new moon (such as 1 January 1900 or 11 August 1999) and reducing this modulo 29.53059 days (the mean length of a synodic month). The difference between two dates can be calculated by subtracting the Julian day number of one from that of the other, or there are simpler formulae giving (for instance) the number of days since 31 December 1899. However, this calculation assumes a perfectly circular orbit and makes no allowance for the time of day at which the new moon occurred, and therefore may be incorrect by several hours. (It also becomes less accurate the larger the difference between the required date and the reference date.) It is accurate enough to use in a novelty clock application showing lunar phase, but specialist usage taking account of lunar apogee and perigee requires a more elaborate calculation. Also, because of lunar libration, an observer can over time see slightly more than half of the lunar surface, including small portions of the far side. Effect of parallax The Earth subtends an angle of about two degrees when seen from the Moon. This means that an observer on Earth who sees the Moon when it is close to the eastern horizon sees it from an angle that is about 2 degrees different from the line of sight of an observer who sees the Moon on the western horizon. The Moon moves about 12 degrees around its orbit per day, so, if these observers were stationary, they would see the phases of the Moon at times that differ by about one-sixth of a day, or 4 hours.
But in reality, the observers are on the surface of the rotating Earth, so someone who sees the Moon on the eastern horizon at one moment sees it on the western horizon about 12 hours later. This adds an oscillation to the apparent progression of the lunar phases. The phases appear to occur more slowly when the Moon is high in the sky than when it is below the horizon. The Moon appears to move jerkily, and the phases do the same. The amplitude of this oscillation is never more than about four hours, which is a small fraction of a month. It does not have any obvious effect on the appearance of the Moon. It does, however, affect accurate calculations of the times of lunar phases. Misconceptions Orbital period It can be confusing that the Moon's orbital sidereal period is 27.3 days while the phases complete a cycle once every 29.5 days (synodic period). This is due to the Earth's orbit around the Sun. The Moon orbits the Earth 13.4 times a year, but only passes between the Earth and Sun 12.4 times. Eclipses It might be expected that once every month, when the Moon passes between Earth and the Sun during a new moon, its shadow would fall on Earth causing a solar eclipse, but this does not happen every month. Nor is it true that during every full moon, the Earth's shadow falls on the Moon, causing a lunar eclipse. Solar and lunar eclipses are not observed every month because the plane of the Moon's orbit around the Earth is tilted by about 5° with respect to the plane of Earth's orbit around the Sun (the plane of the ecliptic). Thus, when new and full moons occur, the Moon usually lies to the north or south of a direct line through the Earth and Sun. Although an eclipse can only occur when the Moon is either new (solar) or full (lunar), it must also be positioned very near the intersection of Earth's orbital plane about the Sun and the Moon's orbital plane about the Earth (that is, at one of its nodes). This happens about twice per year, and so there are between four and seven eclipses in a calendar year. Most of these eclipses are partial; total eclipses of the Moon or Sun are less frequent. Mechanism The phases are not caused by the Earth's shadow falling on the Moon, as some people believe; rather, they result from the changing geometry of the Sun, Earth and Moon, which determines how much of the Moon's sunlit half faces the Earth.
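The "Calculating phase" and "Orbital period" figures above can both be checked with a short sketch; the reference epoch's exact hour and the period constants are standard values assumed here, not taken from the article:

```python
from datetime import datetime, timezone

SYNODIC_MONTH = 29.53059   # mean synodic month, days

# "Age" of the Moon: days since a known new moon, modulo the synodic
# month. 11 August 1999 is the reference mentioned in the text; the
# exact hour is an assumption, so results can be off by several hours.
REFERENCE_NEW_MOON = datetime(1999, 8, 11, tzinfo=timezone.utc)

def moon_age(when: datetime) -> float:
    days = (when - REFERENCE_NEW_MOON).total_seconds() / 86400
    return days % SYNODIC_MONTH

# 11 January 2024 fell near a new moon, so the age should be small:
print(f"age: {moon_age(datetime(2024, 1, 11, tzinfo=timezone.utc)):.1f} days")

# The synodic/sidereal relationship: the Moon must also make up the
# angle Earth has moved around the Sun, so 1/syn = 1/sid - 1/year.
sidereal_month, year = 27.321661, 365.24219   # days
print(f"synodic: {1 / (1 / sidereal_month - 1 / year):.2f} days")  # ~29.53
print(f"orbits/yr: {year / sidereal_month:.1f}, "
      f"new moons/yr: {year / SYNODIC_MONTH:.1f}")  # 13.4, 12.4
```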
Physical sciences
Celestial mechanics
Astronomy
20431
https://en.wikipedia.org/wiki/Momentum
Momentum
In Newtonian mechanics, momentum (plural: momenta or momentums; more specifically linear momentum or translational momentum) is the product of the mass and velocity of an object. It is a vector quantity, possessing a magnitude and a direction. If m is an object's mass and v is its velocity (also a vector quantity), then the object's momentum p (from Latin pellere "push, drive") is: p = mv. In the International System of Units (SI), the unit of measurement of momentum is the kilogram metre per second (kg⋅m/s), which is dimensionally equivalent to the newton-second. Newton's second law of motion states that the rate of change of a body's momentum is equal to the net force acting on it. Momentum depends on the frame of reference, but in any inertial frame it is a conserved quantity, meaning that if a closed system is not affected by external forces, its total momentum does not change. Momentum is also conserved in special relativity (with a modified formula) and, in a modified form, in electrodynamics, quantum mechanics, quantum field theory, and general relativity. It is an expression of one of the fundamental symmetries of space and time: translational symmetry. Advanced formulations of classical mechanics, Lagrangian and Hamiltonian mechanics, allow one to choose coordinate systems that incorporate symmetries and constraints. In these systems the conserved quantity is generalized momentum, and in general this is different from the kinetic momentum defined above. The concept of generalized momentum is carried over into quantum mechanics, where it becomes an operator on a wave function. The momentum and position operators are related by the Heisenberg uncertainty principle. In continuous systems such as electromagnetic fields, fluid dynamics and deformable bodies, a momentum density can be defined as momentum per volume (a volume-specific quantity). A continuum version of the conservation of momentum leads to equations such as the Navier–Stokes equations for fluids or the Cauchy momentum equation for deformable solids or fluids. Classical Momentum is a vector quantity: it has both magnitude and direction. Since momentum has a direction, it can be used to predict the resulting direction and speed of motion of objects after they collide. Below, the basic properties of momentum are described in one dimension. The vector equations are almost identical to the scalar equations (see multiple dimensions). Single particle The momentum of a particle is conventionally represented by the letter p. It is the product of two quantities, the particle's mass (represented by the letter m) and its velocity (v): p = mv. The unit of momentum is the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity is in meters per second then the momentum is in kilogram meters per second (kg⋅m/s). In cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters per second (g⋅cm/s). Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg⋅m/s due north measured with reference to the ground. Many particles The momentum of a system of particles is the vector sum of their momenta.
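A minimal sketch of these two definitions (p = mv, and the vector sum for a system), reusing the 1 kg airplane from the text; the second object and the component convention are made up for illustration:

```python
# Momentum p = m v as a vector, and the total momentum of a system
# as the vector sum of the particles' momenta. Components are
# (east, north) in m/s.
def momentum(mass, velocity):
    return tuple(mass * v for v in velocity)

p1 = momentum(1.0, (0.0, 1.0))   # the 1 kg airplane, due north
p2 = momentum(2.0, (3.0, 0.0))   # a second object, due east

total = tuple(a + b for a, b in zip(p1, p2))
print(p1, p2, total)             # (0.0, 1.0) (6.0, 0.0) (6.0, 1.0)
```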
If two particles have respective masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1v1 + m2v2. The momenta of more than two particles can be added more generally with the following: p = Σi mivi. A system of particles has a center of mass, a point determined by the weighted sum of their positions: rcm = (m1r1 + m2r2 + ...) / (m1 + m2 + ...). If one or more of the particles is moving, the center of mass of the system will generally be moving as well (unless the system is in pure rotation around it). If the total mass of the particles is m, and the center of mass is moving at velocity vcm, the momentum of the system is: p = m vcm. This is known as Euler's first law. Relation to force If the net force F applied to a particle is constant, and is applied for a time interval Δt, the momentum of the particle changes by an amount Δp = F Δt. In differential form, this is Newton's second law; the rate of change of the momentum of a particle is equal to the instantaneous force F acting on it: F = dp/dt. If the net force experienced by a particle changes as a function of time, F(t), the change in momentum (or impulse J) between times t1 and t2 is J = Δp = ∫(t1 to t2) F(t) dt. Impulse is measured in the derived units of the newton second (1 N⋅s = 1 kg⋅m/s) or dyne second (1 dyne⋅s = 1 g⋅cm/s). Under the assumption of constant mass m, it is equivalent to write F = d(mv)/dt = m dv/dt = ma; hence the net force is equal to the mass of the particle times its acceleration. Example: A model airplane of mass 1 kg accelerates from rest to a velocity of 6 m/s due north in 2 s. The net force required to produce this acceleration is 3 newtons due north. The change in momentum is 6 kg⋅m/s due north. The rate of change of momentum is 3 (kg⋅m/s)/s due north, which is numerically equivalent to 3 newtons. Conservation In a closed system (one that does not exchange any matter with its surroundings and is not acted on by external forces) the total momentum remains constant. This fact, known as the law of conservation of momentum, is implied by Newton's laws of motion. Suppose, for example, that two particles interact. As explained by the third law, the forces between them are equal in magnitude but opposite in direction. If the particles are numbered 1 and 2, the second law states that F1 = dp1/dt and F2 = dp2/dt. Therefore, dp1/dt = −dp2/dt, with the negative sign indicating that the forces oppose. Equivalently, d(p1 + p2)/dt = 0. If the velocities of the particles are u1 and u2 before the interaction, and afterwards they are v1 and v2, then m1u1 + m2u2 = m1v1 + m2v2. This law holds no matter how complicated the force is between particles. Similarly, if there are several particles, the momentum exchanged between each pair of particles adds to zero, so the total change in momentum is zero. The conservation of the total momentum of a number of interacting particles can be expressed as m1v1 + m2v2 + ... = constant. This conservation law applies to all interactions, including collisions (both elastic and inelastic) and separations caused by explosive forces. It can also be generalized to situations where Newton's laws do not hold, for example in the theory of relativity and in electrodynamics. Dependence on reference frame Momentum is a measurable quantity, and the measurement depends on the frame of reference. For example: if an aircraft of mass 1000 kg is flying through the air at a speed of 50 m/s its momentum can be calculated to be 50,000 kg⋅m/s. If the aircraft is flying into a headwind of 5 m/s its speed relative to the surface of the Earth is only 45 m/s and its momentum can be calculated to be 45,000 kg⋅m/s. Both calculations are equally correct. In both frames of reference, any change in momentum will be found to be consistent with the relevant laws of physics. Suppose x is a position in an inertial frame of reference.
From the point of view of another frame of reference, moving at a constant speed relative to the other, the position (represented by a primed coordinate) changes with time as This is called a Galilean transformation. If a particle is moving at speed in the first frame of reference, in the second, it is moving at speed Since does not change, the second reference frame is also an inertial frame and the accelerations are the same: Thus, momentum is conserved in both reference frames. Moreover, as long as the force has the same form, in both frames, Newton's second law is unchanged. Forces such as Newtonian gravity, which depend only on the scalar distance between objects, satisfy this criterion. This independence of reference frame is called Newtonian relativity or Galilean invariance. A change of reference frame can often simplify calculations of motion. For example, in a collision of two particles, a reference frame can be chosen where one particle begins at rest. Another commonly used reference frame is the center of mass frame – one that is moving with the center of mass. In this frame, the total momentum is zero. Application to collisions If two particles, each of known momentum, collide and coalesce, the law of conservation of momentum can be used to determine the momentum of the coalesced body. If the outcome of the collision is that the two particles separate, the law is not sufficient to determine the momentum of each particle. If the momentum of one particle after the collision is known, the law can be used to determine the momentum of the other particle. Alternatively if the combined kinetic energy after the collision is known, the law can be used to determine the momentum of each particle after the collision. Kinetic energy is usually not conserved. If it is conserved, the collision is called an elastic collision; if not, it is an inelastic collision. Elastic collisions An elastic collision is one in which no kinetic energy is transformed into heat or some other form of energy. Perfectly elastic collisions can occur when the objects do not touch each other, as for example in atomic or nuclear scattering where electric repulsion keeps the objects apart. A slingshot maneuver of a satellite around a planet can also be viewed as a perfectly elastic collision. A collision between two pool balls is a good example of an almost totally elastic collision, due to their high rigidity, but when bodies come in contact there is always some dissipation. A head-on elastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are and before the collision and and after, the equations expressing conservation of momentum and kinetic energy are: A change of reference frame can simplify analysis of a collision. For example, suppose there are two bodies of equal mass , one stationary and one approaching the other at a speed (as in the figure). The center of mass is moving at speed and both bodies are moving towards it at speed . Because of the symmetry, after the collision both must be moving away from the center of mass at the same speed. Adding the speed of the center of mass to both, we find that the body that was moving is now stopped and the other is moving away at speed . The bodies have exchanged their velocities. Regardless of the velocities of the bodies, a switch to the center of mass frame leads us to the same conclusion. 
Therefore, the final velocities are given by In general, when the initial velocities are known, the final velocities are given by If one body has much greater mass than the other, its velocity will be little affected by a collision while the other body will experience a large change. Inelastic collisions In an inelastic collision, some of the kinetic energy of the colliding bodies is converted into other forms of energy (such as heat or sound). Examples include traffic collisions, in which the effect of loss of kinetic energy can be seen in the damage to the vehicles; electrons losing some of their energy to atoms (as in the Franck–Hertz experiment); and particle accelerators in which the kinetic energy is converted into mass in the form of new particles. In a perfectly inelastic collision (such as a bug hitting a windshield), both bodies have the same motion afterwards. A head-on inelastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are and before the collision then in a perfectly inelastic collision both bodies will be travelling with velocity after the collision. The equation expressing conservation of momentum is: If one body is motionless to begin with (e.g. ), the equation for conservation of momentum is so In a different situation, if the frame of reference is moving at the final velocity such that , the objects would be brought to rest by a perfectly inelastic collision and 100% of the kinetic energy is converted to other forms of energy. In this instance the initial velocities of the bodies would be non-zero, or the bodies would have to be massless. One measure of the inelasticity of the collision is the coefficient of restitution , defined as the ratio of relative velocity of separation to relative velocity of approach. In applying this measure to a ball bouncing from a solid surface, this can be easily measured using the following formula: The momentum and energy equations also apply to the motions of objects that begin together and then move apart. For example, an explosion is the result of a chain reaction that transforms potential energy stored in chemical, mechanical, or nuclear form into kinetic energy, acoustic energy, and electromagnetic radiation. Rockets also make use of conservation of momentum: propellant is thrust outward, gaining momentum, and an equal and opposite momentum is imparted to the rocket. Multiple dimensions Real motion has both direction and velocity and must be represented by a vector. In a coordinate system with axes, velocity has components in the -direction, in the -direction, in the -direction. The vector is represented by a boldface symbol: Similarly, the momentum is a vector quantity and is represented by a boldface symbol: The equations in the previous sections, work in vector form if the scalars and are replaced by vectors and . Each vector equation represents three scalar equations. For example, represents three equations: The kinetic energy equations are exceptions to the above replacement rule. The equations are still one-dimensional, but each scalar represents the magnitude of the vector, for example, Each vector equation represents three scalar equations. Often coordinates can be chosen so that only two components are needed, as in the figure. Each component can be obtained separately and the results combined to produce a vector result. 
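Returning to the one-dimensional collision results above, the sketch below implements the standard closed forms for the final velocities and also checks that momentum conservation is frame-independent under a Galilean boost; the masses and velocities are arbitrary example values:

```python
def elastic(m1, v1, m2, v2):
    """Final velocities for a 1-D perfectly elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def perfectly_inelastic(m1, v1, m2, v2):
    """Common final velocity when the bodies coalesce."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Equal masses, one at rest: they exchange velocities, as the
# center-of-mass argument above predicts.
print(elastic(1.0, 2.0, 1.0, 0.0))             # (0.0, 2.0)
print(perfectly_inelastic(1.0, 2.0, 1.0, 0.0)) # 1.0

# Galilean frame-independence: boost every velocity by -u and
# total momentum is still conserved in the new frame.
u = 5.0
m1, m2, v1, v2 = 1.0, 1.0, 2.0, 0.0
w1, w2 = elastic(m1, v1, m2, v2)
before = m1 * (v1 - u) + m2 * (v2 - u)
after = m1 * (w1 - u) + m2 * (w2 - u)
print(before == after)                          # True
```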
A simple construction involving the center of mass frame can be used to show that if a stationary elastic sphere is struck by a moving sphere, the two will head off at right angles after the collision (as in the figure). Objects of variable mass The concept of momentum plays a fundamental role in explaining the behavior of variable-mass objects such as a rocket ejecting fuel or a star accreting gas. In analyzing such an object, one treats the object's mass as a function that varies with time: . The momentum of the object at time is therefore . One might then try to invoke Newton's second law of motion by saying that the external force on the object is related to its momentum by , but this is incorrect, as is the related expression found by applying the product rule to : This equation does not correctly describe the motion of variable-mass objects. The correct equation is where is the velocity of the ejected/accreted mass as seen in the object's rest frame. This is distinct from , which is the velocity of the object itself as seen in an inertial frame. This equation is derived by keeping track of both the momentum of the object as well as the momentum of the ejected/accreted mass (). When considered together, the object and the mass () constitute a closed system in which total momentum is conserved. Generalized Newton's laws can be difficult to apply to many kinds of motion because the motion is limited by constraints. For example, a bead on an abacus is constrained to move along its wire and a pendulum bob is constrained to swing at a fixed distance from the pivot. Many such constraints can be incorporated by changing the normal Cartesian coordinates to a set of generalized coordinates that may be fewer in number. Refined mathematical methods have been developed for solving mechanics problems in generalized coordinates. They introduce a generalized momentum, also known as the canonical momentum or conjugate momentum, that extends the concepts of both linear momentum and angular momentum. To distinguish it from generalized momentum, the product of mass and velocity is also referred to as mechanical momentum, kinetic momentum or kinematic momentum. The two main methods are described below. Lagrangian mechanics In Lagrangian mechanics, a Lagrangian is defined as the difference between the kinetic energy and the potential energy : If the generalized coordinates are represented as a vector and time differentiation is represented by a dot over the variable, then the equations of motion (known as the Lagrange or Euler–Lagrange equations) are a set of equations: If a coordinate is not a Cartesian coordinate, the associated generalized momentum component does not necessarily have the dimensions of linear momentum. Even if is a Cartesian coordinate, will not be the same as the mechanical momentum if the potential depends on velocity. Some sources represent the kinematic momentum by the symbol . In this mathematical framework, a generalized momentum is associated with the generalized coordinates. Its components are defined as Each component is said to be the conjugate momentum for the coordinate . Now if a given coordinate does not appear in the Lagrangian (although its time derivative might appear), then is constant. This is the generalization of the conservation of momentum. Even if the generalized coordinates are just the ordinary spatial coordinates, the conjugate momenta are not necessarily the ordinary momentum coordinates. An example is found in the section on electromagnetism. 
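Returning to the variable-mass equation above, it can be integrated numerically for a rocket ejecting mass at a constant rate and compared against the closed-form Tsiolkovsky result u·ln(m0/mf); all parameter values below are made up for illustration:

```python
import math

# Integrate m dv/dt = -u dm/dt for a rocket ejecting mass at a
# constant rate, with no external force.
m0, mf = 1000.0, 400.0   # initial and final mass, kg
u = 2500.0               # exhaust speed relative to the rocket, m/s
mdot = 5.0               # mass ejection rate, kg/s
dt = 0.001               # time step, s

m, v = m0, 0.0
while m > mf:
    v += u * (mdot * dt) / m   # dv = -u dm / m, with dm = -mdot*dt
    m -= mdot * dt

print(f"numerical dv = {v:.1f} m/s")
print(f"Tsiolkovsky  = {u * math.log(m0 / mf):.1f} m/s")  # u*ln(m0/mf)
```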
Hamiltonian mechanics In Hamiltonian mechanics, the Lagrangian (a function of generalized coordinates and their derivatives) is replaced by a Hamiltonian that is a function of generalized coordinates and momentum. The Hamiltonian is defined as where the momentum is obtained by differentiating the Lagrangian as above. The Hamiltonian equations of motion are As in Lagrangian mechanics, if a generalized coordinate does not appear in the Hamiltonian, its conjugate momentum component is conserved. Symmetry and conservation Conservation of momentum is a mathematical consequence of the homogeneity (shift symmetry) of space (position in space is the canonical conjugate quantity to momentum). That is, conservation of momentum is a consequence of the fact that the laws of physics do not depend on position; this is a special case of Noether's theorem. For systems that do not have this symmetry, it may not be possible to define conservation of momentum. Examples where conservation of momentum does not apply include curved spacetimes in general relativity or time crystals in condensed matter physics. Momentum density In deformable bodies and fluids Conservation in a continuum In fields such as fluid dynamics and solid mechanics, it is not feasible to follow the motion of individual atoms or molecules. Instead, the materials must be approximated by a continuum in which, at each point, there is a particle or fluid parcel that is assigned the average of the properties of atoms in a small region nearby. In particular, it has a density and velocity that depend on time and position . The momentum per unit volume is . Consider a column of water in hydrostatic equilibrium. All the forces on the water are in balance and the water is motionless. On any given drop of water, two forces are balanced. The first is gravity, which acts directly on each atom and molecule inside. The gravitational force per unit volume is , where is the gravitational acceleration. The second force is the sum of all the forces exerted on its surface by the surrounding water. The force from below is greater than the force from above by just the amount needed to balance gravity. The normal force per unit area is the pressure . The average force per unit volume inside the droplet is the gradient of the pressure, so the force balance equation is If the forces are not balanced, the droplet accelerates. This acceleration is not simply the partial derivative because the fluid in a given volume changes with time. Instead, the material derivative is needed: Applied to any physical quantity, the material derivative includes the rate of change at a point and the changes due to advection as fluid is carried past the point. Per unit volume, the rate of change in momentum is equal to . This is equal to the net force on the droplet. Forces that can change the momentum of a droplet include the gradient of the pressure and gravity, as above. In addition, surface forces can deform the droplet. In the simplest case, a shear stress , exerted by a force parallel to the surface of the droplet, is proportional to the rate of deformation or strain rate. Such a shear stress occurs if the fluid has a velocity gradient because the fluid is moving faster on one side than another. If the speed in the direction varies with , the tangential force in direction per unit area normal to the direction is where is the viscosity. This is also a flux, or flow per unit area, of -momentum through the surface. 
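As a concrete instance of the shear-stress relation just described (τ = μ·du/dy for a linear velocity profile), here is a one-line numeric sketch; the fluid properties and geometry are assumed values, not from the article:

```python
# Shear stress in a fluid with a linear velocity gradient (Couette
# flow): tau = mu * du/dy. This is also the flux of x-momentum in
# the y-direction, as described above.
mu = 1.0e-3          # dynamic viscosity of water, Pa*s (approx.)
du = 0.5             # velocity difference between the plates, m/s
dy = 1.0e-3          # gap between the plates, m

tau = mu * du / dy
print(f"shear stress = {tau:.1f} Pa")   # 0.5 Pa
```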
Including the effect of viscosity, the momentum balance equations for the incompressible flow of a Newtonian fluid are These are known as the Navier–Stokes equations. The momentum balance equations can be extended to more general materials, including solids. For each surface with normal in direction and force in direction , there is a stress component . The nine components make up the Cauchy stress tensor , which includes both pressure and shear. The local conservation of momentum is expressed by the Cauchy momentum equation: where is the body force. The Cauchy momentum equation is broadly applicable to deformations of solids and liquids. The relationship between the stresses and the strain rate depends on the properties of the material (see Types of viscosity). Acoustic waves A disturbance in a medium gives rise to oscillations, or waves, that propagate away from their source. In a fluid, small changes in pressure can often be described by the acoustic wave equation: where is the speed of sound. In a solid, similar equations can be obtained for propagation of pressure (P-waves) and shear (S-waves). The flux, or transport per unit area, of a momentum component by a velocity is equal to . In the linear approximation that leads to the above acoustic equation, the time average of this flux is zero. However, nonlinear effects can give rise to a nonzero average. It is possible for momentum flux to occur even though the wave itself does not have a mean momentum. In electromagnetics Particle in a field In Maxwell's equations, the forces between particles are mediated by electric and magnetic fields. The electromagnetic force (Lorentz force) on a particle with charge due to a combination of electric field and magnetic field is (in SI units). It has an electric potential and magnetic vector potential . In the non-relativistic regime, its generalized momentum is while in relativistic mechanics this becomes The quantity is sometimes called the potential momentum. It is the momentum due to the interaction of the particle with the electromagnetic fields. The name is an analogy with the potential energy , which is the energy due to the interaction of the particle with the electromagnetic fields. These quantities form a four-vector, so the analogy is consistent; besides, the concept of potential momentum is important in explaining the so-called hidden momentum of the electromagnetic fields. Conservation In Newtonian mechanics, the law of conservation of momentum can be derived from the law of action and reaction, which states that every force has a reciprocating equal and opposite force. Under some circumstances, moving charged particles can exert forces on each other in non-opposite directions. Nevertheless, the combined momentum of the particles and the electromagnetic field is conserved. Vacuum The Lorentz force imparts a momentum to the particle, so by Newton's second law the particle must impart a momentum to the electromagnetic fields. In a vacuum, the momentum per unit volume is where is the vacuum permeability and is the speed of light. The momentum density is proportional to the Poynting vector which gives the directional rate of energy transfer per unit area: If momentum is to be conserved over the volume over a region , changes in the momentum of matter through the Lorentz force must be balanced by changes in the momentum of the electromagnetic field and outflow of momentum. 
If is the momentum of all the particles in , and the particles are treated as a continuum, then Newton's second law gives The electromagnetic momentum is and the equation for conservation of each component of the momentum is The term on the right is an integral over the surface area of the surface representing momentum flow into and out of the volume, and is a component of the surface normal of . The quantity is called the Maxwell stress tensor, defined as Media The above results are for the microscopic Maxwell equations, applicable to electromagnetic forces in a vacuum (or on a very small scale in media). It is more difficult to define momentum density in media because the division into electromagnetic and mechanical is arbitrary. The definition of electromagnetic momentum density is modified to where the H-field is related to the B-field and the magnetization by The electromagnetic stress tensor depends on the properties of the media. Non-classical Quantum mechanical In quantum mechanics, momentum is defined as a self-adjoint operator on the wave function. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables. For a single particle described in the position basis the momentum operator can be written as where is the gradient operator, is the reduced Planck constant, and is the imaginary unit. This is a commonly encountered form of the momentum operator, though the momentum operator in other bases can take other forms. For example, in momentum space the momentum operator is represented by the eigenvalue equation where the operator acting on a wave eigenfunction yields that wave function multiplied by the eigenvalue , in an analogous fashion to the way that the position operator acting on a wave function yields that wave function multiplied by the eigenvalue . For both massive and massless objects, relativistic momentum is related to the phase constant by Electromagnetic radiation (including visible light, ultraviolet light, and radio waves) is carried by photons. Even though photons (the particle aspect of light) have no mass, they still carry momentum. This leads to applications such as the solar sail. The calculation of the momentum of light within dielectric media is somewhat controversial (see Abraham–Minkowski controversy). Relativistic Lorentz invariance Newtonian physics assumes that absolute time and space exist outside of any observer; this gives rise to Galilean invariance. It also results in a prediction that the speed of light can vary from one reference frame to another. This is contrary to what has been observed. In the special theory of relativity, Einstein keeps the postulate that the equations of motion do not depend on the reference frame, but assumes that the speed of light is invariant. As a result, position and time in two reference frames are related by the Lorentz transformation instead of the Galilean transformation. Consider, for example, one reference frame moving relative to another at velocity in the direction. The Galilean transformation gives the coordinates of the moving frame as while the Lorentz transformation gives where is the Lorentz factor: Newton's second law, with mass fixed, is not invariant under a Lorentz transformation. However, it can be made invariant by making the inertial mass of an object a function of velocity: is the object's invariant mass. 
The modified momentum, obeys Newton's second law: Within the domain of classical mechanics, relativistic momentum closely approximates Newtonian momentum: at low velocity, is approximately equal to , the Newtonian expression for momentum. Four-vector formulation In the theory of special relativity, physical quantities are expressed in terms of four-vectors that include time as a fourth coordinate along with the three space coordinates. These vectors are generally represented by capital letters, for example for position. The expression for the four-momentum depends on how the coordinates are expressed. Time may be given in its normal units or multiplied by the speed of light so that all the components of the four-vector have dimensions of length. If the latter scaling is used, an interval of proper time, , defined by is invariant under Lorentz transformations (in this expression and in what follows the metric signature has been used, different authors use different conventions). Mathematically this invariance can be ensured in one of two ways: by treating the four-vectors as Euclidean vectors and multiplying time by ; or by keeping time a real quantity and embedding the vectors in a Minkowski space. In a Minkowski space, the scalar product of two four-vectors and is defined as In all the coordinate systems, the (contravariant) relativistic four-velocity is defined by and the (contravariant) four-momentum is where is the invariant mass. If (in Minkowski space), then Using Einstein's mass–energy equivalence, , this can be rewritten as Thus, conservation of four-momentum is Lorentz-invariant and implies conservation of both mass and energy. The magnitude of the momentum four-vector is equal to : and is invariant across all reference frames. The relativistic energy–momentum relationship holds even for massless particles such as photons; by setting it follows that In a game of relativistic "billiards", if a stationary particle is hit by a moving particle in an elastic collision, the paths formed by the two afterwards will form an acute angle. This is unlike the non-relativistic case where they travel at right angles. The four-momentum of a planar wave can be related to a wave four-vector For a particle, the relationship between temporal components, , is the Planck–Einstein relation, and the relation between spatial components, , describes a de Broglie matter wave. History of the concept Impetus John Philoponus In about 530 AD, John Philoponus developed a concept of momentum in On Physics, a commentary to Aristotle's Physics. Aristotle claimed that everything that is moving must be kept moving by something. For example, a thrown ball must be kept moving by motions of the air. Philoponus pointed out the absurdity in Aristotle's claim that motion of an object is promoted by the same air that is resisting its passage. He proposed instead that an impetus was imparted to the object in the act of throwing it. Ibn Sīnā In 1020, Ibn Sīnā (also known by his Latinized name Avicenna) read Philoponus and published his own theory of motion in The Book of Healing. He agreed that an impetus is imparted to a projectile by the thrower; but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, he viewed it as a persistent, requiring external forces such as air resistance to dissipate it. Peter Olivi, Jean Buridan In the 13th and 14th century, Peter Olivi and Jean Buridan read and refined the work of Philoponus, and possibly that of Ibn Sīnā. 
Buridan, who in about 1350 was made rector of the University of Paris, referred to impetus as being proportional to the weight times the speed. Moreover, Buridan's theory was different from that of his predecessors in that he did not consider impetus to be self-dissipating, asserting that a body would be arrested by the forces of air resistance and gravity which might be opposing its impetus. Quantity of motion René Descartes In Principles of Philosophy (Principia Philosophiae) from 1644, the French philosopher René Descartes defined "quantity of motion" (Latin: quantitas motus) as the product of size and speed, and claimed that the total quantity of motion in the universe is conserved. This should not be read as a statement of the modern law of conservation of momentum, since Descartes had no concept of mass as distinct from weight and size. (The concept of mass, as distinct from weight, was introduced by Newton in 1686.) More importantly, he believed that it is speed rather than velocity that is conserved. So for Descartes, if a moving object were to bounce off a surface, changing its direction but not its speed, there would be no change in its quantity of motion. Galileo, in his Two New Sciences (published in 1638), used the Italian word impeto to similarly describe Descartes's quantity of motion. Christiaan Huygens In the 1600s, Christiaan Huygens concluded quite early that Descartes's laws for the elastic collision of two bodies must be wrong, and he formulated the correct laws. An important step was his recognition of the Galilean invariance of the problems. His views then took many years to be circulated. He passed them on in person to William Brouncker and Christopher Wren in London, in 1661. What Spinoza wrote to Henry Oldenburg about them, in 1666 during the Second Anglo-Dutch War, was guarded. Huygens had actually worked them out in a manuscript in the period 1652–1656. The war ended in 1667, and Huygens announced his results to the Royal Society in 1668. He published them in the Journal des Sçavans in 1669. Momentum John Wallis In 1670, John Wallis, in Mechanica sive De Motu, Tractatus Geometricus, stated the law of conservation of momentum: "the initial state of the body, either of rest or of motion, will persist" and "If the force is greater than the resistance, motion will result". Wallis used momentum for quantity of motion, and vis for force. Gottfried Leibniz In 1686, Gottfried Wilhelm Leibniz, in Discourse on Metaphysics, gave an argument against Descartes' construction of the conservation of the "quantity of motion" using an example of dropping blocks of different sizes over different distances. He points out that force is conserved but quantity of motion, construed as the product of size and speed of an object, is not conserved. Isaac Newton In 1687, Isaac Newton, in Philosophiæ Naturalis Principia Mathematica, showed, just like Wallis, a similar casting around for words to use for the mathematical momentum. His Definition II defines quantitas motus, "quantity of motion", as "arising from the velocity and quantity of matter conjointly", which identifies it as momentum. Thus when in Law II he refers to mutatio motus, "change of motion", being proportional to the force impressed, he is generally taken to mean momentum and not motion. John Jennings In 1721, John Jennings published Miscellanea, where the momentum in its current mathematical sense is attested, five years before the final edition of Newton's Principia Mathematica. Momentum or "quantity of motion" was being defined for students as "a rectangle", the product of Q and V, where Q is "quantity of material" and V is "velocity". In 1728, the Cyclopedia states:
Physical sciences
Physics
null
20432
https://en.wikipedia.org/wiki/Mood%20stabilizer
Mood stabilizer
A mood stabilizer is a psychiatric medication used to treat mood disorders characterized by intense and sustained mood shifts, such as bipolar disorder and the bipolar type of schizoaffective disorder. Uses Mood stabilizers are best known for the treatment of bipolar disorder, preventing mood shifts to mania (or hypomania) and depression. Mood stabilizers are also used in schizoaffective disorder when it is the bipolar type. Examples The term "mood stabilizer" does not describe a mechanism, but rather an effect. More precise terminology based on pharmacology is used to further classify these agents. Drugs commonly classed as mood stabilizers include: Mineral Lithium – Lithium is the "classic" mood stabilizer, the first to be approved by the US FDA, and still popular in treatment. Therapeutic drug monitoring is required to ensure lithium levels remain in the therapeutic range: roughly 0.6–0.8 mEq/L (or millimolar) for maintenance treatment and 0.8–1.2 mEq/L for acute treatment. Signs and symptoms of toxicity include nausea, vomiting, diarrhea, and ataxia. The most common side effects are lethargy and weight gain. The less common side effects of using lithium are blurred vision, a slight tremble in the hands, and a feeling of being mildly ill. In general, these side effects occur in the first few weeks after commencing lithium treatment. These symptoms can often be improved by lowering the dose. Anticonvulsants Many agents described as "mood stabilizers" are also categorized as anticonvulsants. The term "anticonvulsant mood stabilizers" is sometimes used to describe these as a class. Although this group is also defined by effect rather than mechanism, there is at least a preliminary understanding of the mechanism of most of the anticonvulsants used in the treatment of mood disorders. Valproate – Available in extended release form. This drug can be very irritating to the stomach, especially when taken as a free acid. Liver function and CBC should be monitored. Lamotrigine (aka Lamictal) – FDA approved for bipolar disorder maintenance therapy, not for acute mood problems like depression or mania/hypomania. The usual target dose is 100–200 mg daily, titrated up in 25 mg increments every 2 weeks. Lamotrigine can cause Stevens–Johnson syndrome, a very rare but potentially fatal skin condition. Carbamazepine – FDA approved for the treatment of acute manic or mixed (i.e., both depressed and manic mood features) episodes in people with bipolar disorder type I. Carbamazepine can rarely cause a dangerous decrease in neutrophils, a type of white blood cell, called agranulocytosis. It interacts with many medications, including other mood stabilizers (e.g. lamotrigine) and antipsychotics (e.g. quetiapine). There is insufficient evidence to support the use of various other anticonvulsants, such as gabapentin and topiramate, as mood stabilizers. Antipsychotics Some atypical antipsychotics (aripiprazole, asenapine, cariprazine, lurasidone, olanzapine, paliperidone, quetiapine, risperidone, and ziprasidone) also have mood stabilizing effects and are thus commonly prescribed even when psychotic symptoms are absent. Other It is also conjectured that omega-3 fatty acids may have a mood stabilizing effect. Compared with placebo, omega-3 fatty acids appear better able to augment known mood stabilizers in reducing depressive (but perhaps not manic) symptoms of bipolar disorder; additional trials would be needed to establish the effects of omega-3 fatty acids alone.
It is known that even subclinical hypothyroidism can blunt a patient's response to both mood stabilizers and antidepressants. Furthermore, preliminary research into the use of thyroid augmentation in patients with refractory and rapid-cycling bipolar disorder has been positive, showing a slowing in cycle frequency and reduction in symptoms. Most studies have been conducted on an open-label basis. One large controlled study of a 300 mcg daily dose of levothyroxine (T4) found it superior to placebo for this purpose. In general, studies have shown T4 to be well tolerated and to show efficacy even in patients without overt hypothyroidism. Combination therapy In routine practice, monotherapy is often not sufficiently effective for acute and/or maintenance therapy, and thus most patients are given combination therapies. Combination therapy (an atypical antipsychotic with lithium or valproate) is more effective than monotherapy in the manic phase, both in treating acute symptoms and in preventing relapse. However, side effects are more frequent and discontinuation rates due to adverse events are higher with combination therapy than with monotherapy. Relationship to antidepressants Most mood stabilizers are primarily antimanic agents, meaning that they are effective at treating mania and mood cycling and shifting, but are not effective at treating acute depression. The principal exceptions to that rule, because they treat both manic and depressive symptoms, are lamotrigine, lithium carbonate, olanzapine and quetiapine. There is a need for caution when treating bipolar patients with antidepressant medication due to the risks that they pose. Nevertheless, antidepressants are still often prescribed in addition to mood stabilizers during depressive phases. This brings some risks, however, as antidepressants can induce mania, psychosis, and other disturbing problems in people with bipolar disorder—in particular, when taken alone. The risk of antidepressant-induced mania when given to patients concomitantly on antimanic agents is not known for certain but may still exist. The majority of antidepressants appear ineffective in treating bipolar depression. Antidepressants pose several risks when given to bipolar patients: they are ineffective in treating acute bipolar depression and preventing relapse, and they can cause rapid cycling. Such changes are often not easy to detect and require monitoring by family and mental health professionals. Studies have shown that antidepressants have no benefit versus a placebo or other treatment for bipolar depression. Antidepressants can also lead to a higher rate of non-lethal suicidal behavior. Relapse can also be related to treatment with antidepressants; this is less likely to occur if a mood stabilizer is combined with an antidepressant, rather than an antidepressant being used alone. Evidence from previous studies shows that rapid cycling is linked to use of antidepressants. Rapid cycling is defined as the presence of four or more mood episodes within a year's time. Evidence suggests that rapid cycling and mixed symptoms have become more common since antidepressant medication has come into widespread use. Pharmacodynamics The precise mechanism of action of lithium is still unknown, and it is suspected that it acts at various points of the neuron between the nucleus and the synapse. Lithium is known to inhibit the enzyme GSK-3β.
This improves the functioning of the circadian clock—which is thought to be often malfunctioning in people with bipolar disorder—and positively modulates gene transcription of brain-derived neurotrophic factor (BDNF). The resulting increase in neural plasticity may be central to lithium's therapeutic effects. How lithium works in the human body is not completely understood, but its benefits are most likely related to its effects on electrolytes such as potassium, sodium, calcium and magnesium. All of the anticonvulsants routinely used to treat bipolar disorder are blockers of voltage-gated sodium channels, affecting the brain's glutamate system. For valproic acid, carbamazepine and oxcarbazepine, however, their mood-stabilizing effects may be more related to effects on the GABAergic system. Lamotrigine is known to decrease the patient's cortisol response to stress. One possible downstream target of several mood stabilizers such as lithium, valproate, and carbamazepine is the arachidonic acid cascade.
Biology and health sciences
Psychiatric drugs
Health
20437
https://en.wikipedia.org/wiki/Mass%20transfer
Mass transfer
Mass transfer is the net movement of mass from one location (usually meaning stream, phase, fraction, or component) to another. Mass transfer occurs in many processes, such as absorption, evaporation, drying, precipitation, membrane filtration, and distillation. The term is used by different scientific disciplines for different processes and mechanisms. The phrase is commonly used in engineering for physical processes that involve diffusive and convective transport of chemical species within physical systems. Some common examples of mass transfer processes are the evaporation of water from a pond to the atmosphere, the purification of blood in the kidneys and liver, and the distillation of alcohol. In industrial processes, mass transfer operations include separation of chemical components in distillation columns, absorbers such as scrubbers and strippers, adsorbers such as activated carbon beds, and liquid-liquid extraction. Mass transfer is often coupled to additional transport processes, for instance in industrial cooling towers. These towers couple heat transfer to mass transfer by allowing hot water to flow in contact with air. The water is cooled by giving up some of its mass as water vapour. Astrophysics In astrophysics, mass transfer is the process by which matter gravitationally bound to a body, usually a star, fills its Roche lobe and becomes gravitationally bound to a second body, usually a compact object (white dwarf, neutron star or black hole), and is eventually accreted onto it. It is a common phenomenon in binary systems, and may play an important role in some types of supernovae and pulsars. Chemical engineering Mass transfer finds extensive application in chemical engineering problems. It is used in reaction engineering, separations engineering, heat transfer engineering, and many other sub-disciplines of chemical engineering like electrochemical engineering. The driving force for mass transfer is usually a difference in chemical potential, when it can be defined, though other thermodynamic gradients may couple to the flow of mass and drive it as well. A chemical species moves from areas of high chemical potential to areas of low chemical potential. Thus, the maximum theoretical extent of a given mass transfer is typically determined by the point at which the chemical potential is uniform. For single-phase systems, this usually translates to uniform concentration throughout the phase, while for multiphase systems chemical species will often prefer one phase over the others and reach a uniform chemical potential only when most of the chemical species has been absorbed into the preferred phase, as in liquid-liquid extraction. While thermodynamic equilibrium determines the theoretical extent of a given mass transfer operation, the actual rate of mass transfer will depend on additional factors including the flow patterns within the system and the diffusivities of the species in each phase. This rate can be quantified through the calculation and application of mass transfer coefficients for an overall process. These mass transfer coefficients are typically published in terms of dimensionless numbers, often including Péclet numbers, Reynolds numbers, Sherwood numbers, and Schmidt numbers, among others. Analogies between heat, mass, and momentum transfer There are notable similarities in the commonly used approximate differential equations for momentum, heat, and mass transfer.
The molecular transfer equations of Newton's law for fluid momentum at low Reynolds number (Stokes flow), Fourier's law for heat, and Fick's law for mass are very similar, since they are all linear approximations to transport of conserved quantities in a flow field. At higher Reynolds number, the analogy between mass and heat transfer and momentum transfer becomes less useful due to the nonlinearity of the Navier-Stokes equation (or more fundamentally, the general momentum conservation equation), but the analogy between heat and mass transfer remains good. A great deal of effort has been devoted to developing analogies among these three transport processes so as to allow prediction of one from any of the others.
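Written in their one-dimensional forms, the three linear gradient laws referred to above are (a standard presentation, not reproduced from the article):

```latex
% Momentum (Newton), heat (Fourier) and mass (Fick) fluxes are each
% proportional to the gradient of the transported quantity:
\tau = -\mu \frac{du}{dy}, \qquad
q = -k \frac{dT}{dy}, \qquad
J_A = -D_{AB} \frac{dc_A}{dy}
```

Returning to the mass transfer coefficients mentioned earlier, the sketch below estimates one from a correlation of the form Sh = 0.023 Re^0.8 Sc^(1/3), a commonly quoted form for turbulent pipe flow; the choice of correlation and all property values are illustrative assumptions:

```python
# Estimate a convective mass transfer coefficient k_c from
# dimensionless groups for turbulent flow in a pipe.
D_AB = 2.0e-9      # diffusivity of the species in water, m^2/s
rho = 1000.0       # fluid density, kg/m^3
mu = 1.0e-3        # dynamic viscosity, Pa*s
v = 1.0            # mean velocity, m/s
d = 0.05           # pipe diameter, m

Re = rho * v * d / mu              # Reynolds number
Sc = mu / (rho * D_AB)             # Schmidt number
Sh = 0.023 * Re**0.8 * Sc**(1/3)   # Sherwood number (correlation)
k_c = Sh * D_AB / d                # mass transfer coefficient, m/s

print(f"Re={Re:.0f}, Sc={Sc:.0f}, Sh={Sh:.0f}, k_c={k_c:.2e} m/s")
```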
Physical sciences
Chemical engineering
Chemistry
20474
https://en.wikipedia.org/wiki/Mohs%20scale
Mohs scale
The Mohs scale of mineral hardness is a qualitative ordinal scale, from 1 to 10, characterizing scratch resistance of minerals through the ability of harder material to scratch softer material. The scale was introduced in 1812 by the German geologist and mineralogist Friedrich Mohs, in his book Versuch einer Elementar-Methode zur naturhistorischen Bestimmung und Erkenntniss der Fossilien (English: Attempt at an elementary method for the natural-historical determination and recognition of fossils); it is one of several definitions of hardness in materials science, some of which are more quantitative. The method of comparing hardness by observing which minerals can scratch others is of great antiquity, having been mentioned by Theophrastus in his treatise On Stones, c. 300 BC, followed by Pliny the Elder in his Naturalis Historia, c. AD 77. The Mohs scale is useful for identification of minerals in the field, but is not an accurate predictor of how well materials endure in an industrial setting. Reference minerals The Mohs scale of mineral hardness is based on the ability of one natural sample of mineral to visibly scratch another mineral. Minerals are chemically pure solids found in nature. Rocks are mixtures of one or more minerals. [Chart: the Mohs scale along the horizontal axis matched with one of the absolute hardness scales along the vertical; diamond (Mohs 10), at 1500, is off scale.] Diamond was the hardest known naturally occurring mineral when the scale was designed, and defines the top of the scale, arbitrarily set at 10. The hardness of a material is measured against the scale by finding the hardest material that the given material can scratch, or the softest material that can scratch the given material. For example, if some material is scratched by apatite but not by fluorite, its hardness on the Mohs scale would be between 4 and 5. Technically, "scratching" a material for the purposes of the Mohs scale means creating non-elastic dislocations visible to the naked eye. Frequently, materials that are lower on the Mohs scale can create microscopic, non-elastic dislocations on materials that have a higher Mohs number. While these microscopic dislocations are permanent and sometimes detrimental to the harder material's structural integrity, they are not considered "scratches" for the determination of a Mohs scale number. Each of the ten hardness values in the Mohs scale is represented by a reference mineral, most of which are widespread in rocks. The Mohs scale is an ordinal scale: equal steps do not correspond to equal increases in absolute hardness. For example, in absolute terms corundum (9) is twice as hard as topaz (8), but diamond (10) is four times as hard as corundum. The table below shows the comparison with the absolute hardness measured by a sclerometer.

Mohs hardness | Reference mineral | Chemical formula | Absolute hardness
1 | Talc | Mg3Si4O10(OH)2 | 1
2 | Gypsum | CaSO4·2H2O | 2
3 | Calcite | CaCO3 | 14
4 | Fluorite | CaF2 | 21
5 | Apatite | Ca5(PO4)3(OH,Cl,F) | 48
6 | Orthoclase feldspar | KAlSi3O8 | 72
7 | Quartz | SiO2 | 100
8 | Topaz | Al2SiO4(OH,F)2 | 200
9 | Corundum | Al2O3 | 400
10 | Diamond | C | 1500

Examples Below is a table of more materials by Mohs scale. Some of them have a hardness between two of the Mohs scale reference minerals. Some solid substances that are not minerals have been assigned a hardness on the Mohs scale.
However, such assignments can be misleading or meaningless when the substance is a mixture of other substances. For example, some sources have assigned a Mohs hardness of 6 or 7 to granite, but it is a rock made of several minerals, each with its own Mohs hardness (e.g. topaz-rich granite contains: topaz — Mohs 8, quartz — Mohs 7, orthoclase — Mohs 6, plagioclase — Mohs 6–6.5, mica — Mohs 2–4).

Hardness | Substance
0.2–0.4 | Potassium
0.5–0.6 | Lithium
1 | Talc
1.5 | Lead
2 | Hardwood
2–2.5 | Plastic
2.5 | Zinc
2.5–3 | Copper
3 | Brass
3.5 | Adamite
3.5–4 | Sphalerite
4 | Iron
4–4.5 | Ordinary steel
4.5 | Colemanite
5 | Apatite
5–5.5 | Goethite
5.5 | Glass
5.5–6 | Opal
6 | Rhodium
6–6.5 | Rutile
6.5 | Silicon
6.5–7 | Jadeite
7 | Porcelain
7–7.5 | Garnet
7.5 | Tungsten
7.5–8 | Emerald
8 | Topaz
8.5 | Chromium
9 | Sapphire
9–9.5 | Moissanite
9.5–near 10 | Boron
10 | Diamond

Use Despite its lack of precision, the Mohs scale is relevant for field geologists, who use it to roughly identify minerals using scratch kits. The Mohs scale hardness of minerals can be commonly found in reference sheets. Mohs hardness is useful in milling. It allows the assessment of which type of mill and grinding medium will best reduce a given product whose hardness is known. Electronic manufacturers use the scale for testing the resilience of flat panel display components (such as cover glass for LCDs or encapsulation for OLEDs), as well as to evaluate the hardness of touch screens in consumer electronics. Comparison with Vickers scale Comparison between Mohs hardness and Vickers hardness:

Mineral name | Hardness (Mohs) | Hardness (Vickers, kg/mm²)
Tin | 1.5 | VHN = 7–9
Bismuth | 2–2.5 | VHN = 16–18
Gold | 2.5 | VHN = 30–34
Silver | 2.5 | VHN = 61–65
Chalcocite | 2.5–3 | VHN = 84–87
Copper | 2.5–3 | VHN = 77–99
Galena | 2.5 | VHN = 79–104
Sphalerite | 3.5–4 | VHN = 208–224
Heazlewoodite | 4 | VHN = 230–254
Goethite | 5–5.5 | VHN = 667
Chromite | 5.5 | VHN = 1,278–1,456
Anatase | 5.5–6 | VHN = 616–698
Rutile | 6–6.5 | VHN = 894–974
Pyrite | 6–6.5 | VHN = 1,505–1,520
Bowieite | 7 | VHN = 858–1,288
Euclase | 7.5 | VHN = 1,310
Chromium | 8.5 | VHN = 1,875–2,000
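As a small sketch of how a field scratch kit narrows down hardness, using the reference minerals from the table above (the function and its interface are illustrative, not a standard tool):

```python
# Bracket an unknown sample's Mohs hardness from scratch tests with
# the ten reference minerals, as done with a field scratch kit.
REFERENCES = {1: "talc", 2: "gypsum", 3: "calcite", 4: "fluorite",
              5: "apatite", 6: "orthoclase", 7: "quartz",
              8: "topaz", 9: "corundum", 10: "diamond"}

def mohs_interval(scratched_by: set[int]) -> tuple[int, int]:
    """Given the reference minerals that visibly scratch the sample,
    return the (low, high) bounds of its Mohs hardness."""
    low = max((h for h in REFERENCES if h not in scratched_by), default=0)
    high = min(scratched_by, default=11)
    return low, high

# Sample scratched by apatite (5) and everything harder, but not by
# fluorite (4): hardness between 4 and 5, as in the example above.
print(mohs_interval({5, 6, 7, 8, 9, 10}))   # (4, 5)
```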
Physical sciences
Geology: General
Earth science
20479
https://en.wikipedia.org/wiki/Magnetosphere
Magnetosphere
In astronomy and planetary science, a magnetosphere is a region of space surrounding an astronomical object in which charged particles are affected by that object's magnetic field. It is created by a celestial body with an active interior dynamo. In the space environment close to a planetary body with a dipole magnetic field such as Earth, the field lines resemble those of a simple magnetic dipole. Farther out, field lines can be significantly distorted by the flow of electrically conducting plasma, as emitted from the Sun (i.e., the solar wind) or a nearby star. Planets having active magnetospheres, like the Earth, are capable of mitigating or blocking the effects of solar radiation or cosmic radiation. Interactions of particles and atmospheres with magnetospheres are studied under the specialized scientific subjects of plasma physics, space physics, and aeronomy.

History

Study of Earth's magnetosphere began in 1600, when William Gilbert discovered that the magnetic field on the surface of Earth resembled that of a terrella, a small, magnetized sphere. In the 1940s, Walter M. Elsasser proposed the model of dynamo theory, which attributes Earth's magnetic field to the motion of Earth's iron outer core. Through the use of magnetometers, scientists were able to study the variations in Earth's magnetic field as functions of both time and latitude and longitude. Beginning in the late 1940s, rockets were used to study cosmic rays. In 1958, Explorer 1, the first of the Explorer series of space missions, was launched to study the intensity of cosmic rays above the atmosphere and measure the fluctuations in this activity. This mission observed the existence of the Van Allen radiation belt (located in the inner region of Earth's magnetosphere), with the follow-up Explorer 3 later that year definitively proving its existence. Also during 1958, Eugene Parker proposed the idea of the solar wind, and the term 'magnetosphere' was proposed by Thomas Gold in 1959 to explain how the solar wind interacted with the Earth's magnetic field. The Explorer 12 mission in 1961 led to the observation, by Cahill and Amazeen in 1963, of a sudden decrease in magnetic field strength near the noon-time meridian, which was later named the magnetopause. By 1983, the International Cometary Explorer had observed the magnetotail, the distant extension of the magnetic field.

Structure and behavior

The structure of a magnetosphere depends on several factors: the type of astronomical object, the nature of sources of plasma and momentum, the period of the object's spin, the nature of the axis about which the object spins, the axis of the magnetic dipole, and the magnitude and direction of the flow of solar wind.

The planetary distance at which the magnetosphere can withstand the solar wind pressure is called the Chapman–Ferraro distance. This is usefully modeled by the formula

$$R_{CF} = R_P \left( \frac{B_{surf}^2}{\mu_0 \rho v_{SW}^2} \right)^{1/6}$$

wherein $R_P$ represents the radius of the planet, $B_{surf}$ represents the magnetic field on the surface of the planet at the equator, $v_{SW}$ represents the velocity of the solar wind, $\rho$ is the particle density of the solar wind, and $\mu_0$ is the vacuum permeability constant.

A magnetosphere is classified as "intrinsic" when $R_{CF} \gg R_P$, or when the primary opposition to the flow of solar wind is the magnetic field of the object. Mercury, Earth, Jupiter, Ganymede, Saturn, Uranus, and Neptune, for example, exhibit intrinsic magnetospheres. A magnetosphere is classified as "induced" when $R_{CF} \ll R_P$, or when the solar wind is not opposed by the object's magnetic field. In this case, the solar wind interacts with the atmosphere or ionosphere of the planet (or surface of the planet, if the planet has no atmosphere). Venus has an induced magnetic field, which means that because Venus appears to have no internal dynamo effect, the only magnetic field present is that formed by the solar wind's wrapping around the physical obstacle of Venus (see also Venus' induced magnetosphere). When $R_{CF} \approx R_P$, the planet itself and its magnetic field both contribute. It is possible that Mars is of this type.
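As a worked example, the standoff distance can be evaluated numerically. The solar wind values in the sketch below are order-of-magnitude assumptions for quiet conditions near Earth (roughly 7 protons per cubic centimetre moving at 400 km/s), not figures from this article; with them, the formula yields on the order of ten Earth radii, consistent with an intrinsic magnetosphere.

```python
# Chapman-Ferraro standoff distance for Earth from the pressure-balance
# formula above: R_CF = R_P * (B_surf**2 / (mu_0 * rho * v_sw**2)) ** (1/6).
# Input values are illustrative assumptions, not measurements.
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

R_P    = 6.371e6    # Earth radius, m
B_SURF = 3.1e-5     # equatorial surface field, T (about 0.31 gauss)
N_SW   = 7e6        # assumed solar wind number density, m^-3 (7 per cm^3)
M_P    = 1.673e-27  # proton mass, kg
V_SW   = 4.0e5      # assumed solar wind speed, m/s (400 km/s)

rho = N_SW * M_P    # solar wind mass density, kg/m^3
r_cf = R_P * (B_SURF**2 / (MU_0 * rho * V_SW**2)) ** (1 / 6)

print(f"R_CF = {r_cf / 1e3:.0f} km = {r_cf / R_P:.1f} Earth radii")
# About 9 Earth radii with these inputs: R_CF >> R_P, so Earth's
# magnetosphere is classified as intrinsic.
```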
Structure

Bow shock

The bow shock forms the outermost layer of the magnetosphere; the boundary between the magnetosphere and the surrounding medium. For stars, this is usually the boundary between the stellar wind and the interstellar medium; for planets, it is where the speed of the solar wind decreases as it approaches the magnetopause. Due to interactions with the bow shock, the stellar wind plasma gains a substantial anisotropy, leading to various plasma instabilities upstream and downstream of the bow shock.

Magnetosheath

The magnetosheath is the region of the magnetosphere between the bow shock and the magnetopause. It is formed mainly from shocked solar wind, though it contains a small amount of plasma from the magnetosphere. It is an area exhibiting high particle energy flux, where the direction and magnitude of the magnetic field varies erratically. This is caused by the collection of solar wind gas that has effectively undergone thermalization. It acts as a cushion that transmits the pressure from the flow of the solar wind and the barrier of the magnetic field from the object.

Magnetopause

The magnetopause is the area of the magnetosphere wherein the pressure from the planetary magnetic field is balanced with the pressure from the solar wind. It is the convergence of the shocked solar wind from the magnetosheath with the magnetic field of the object and plasma from the magnetosphere. Because both sides of this convergence contain magnetized plasma, the interactions between them are complex. The structure of the magnetopause depends upon the Mach number and beta ratio of the plasma, as well as the magnetic field. The magnetopause changes size and shape as the pressure from the solar wind fluctuates.

Magnetotail

Opposite the compressed magnetic field is the magnetotail, where the magnetosphere extends far beyond the astronomical object. It contains two lobes, referred to as the northern and southern tail lobes. Magnetic field lines in the northern tail lobe point towards the object while those in the southern tail lobe point away. The tail lobes are almost empty, with few charged particles opposing the flow of the solar wind. The two lobes are separated by a plasma sheet, an area where the magnetic field is weaker, and the density of charged particles is higher.

Earth's magnetosphere

Over Earth's equator, the magnetic field lines become almost horizontal, then return to reconnect at high latitudes. However, at high altitudes, the magnetic field is significantly distorted by the solar wind and its solar magnetic field. On the dayside of Earth, the magnetic field is significantly compressed by the solar wind to a distance of approximately 65,000 km. Earth's bow shock is about 17 km thick and located about 90,000 km from Earth. The dayside magnetopause thus sits tens of thousands of kilometers above Earth's surface. Earth's magnetopause has been compared to a sieve because it allows solar wind particles to enter.
Kelvin–Helmholtz instabilities occur when large swirls of plasma travel along the edge of the magnetosphere at a velocity different from that of the magnetosphere, causing the plasma to slip past. This results in magnetic reconnection, and as the magnetic field lines break and reconnect, solar wind particles are able to enter the magnetosphere. On Earth's nightside, the magnetic field extends in the magnetotail, which lengthwise exceeds 6,300,000 km. Earth's magnetotail is the primary source of the polar aurora. Also, NASA scientists have suggested that Earth's magnetotail might cause "dust storms" on the Moon by creating a potential difference between the day side and the night side.

Other objects

Many astronomical objects generate and maintain magnetospheres. In the Solar System this includes the Sun, Mercury, Earth, Jupiter, Saturn, Uranus, Neptune, and Ganymede. The magnetosphere of Jupiter is the largest planetary magnetosphere in the Solar System, extending up to 7,000,000 km on the dayside and almost to the orbit of Saturn on the nightside. Jupiter's magnetosphere is stronger than Earth's by an order of magnitude, and its magnetic moment is approximately 18,000 times larger. Venus, Mars, and Pluto, on the other hand, have no intrinsic magnetic field. This may have had significant effects on their geological history. It is theorized that Venus and Mars may have lost their primordial water to photodissociation and the solar wind. A strong magnetosphere, were it present, would greatly slow down this process.

Magnetospheres generated by exoplanets are thought to be common, though the first discoveries did not come until the 2010s. In 2014, a magnetic field around HD 209458 b was inferred from the way hydrogen was evaporating from the planet. In 2019, the surface magnetic field strengths of four hot Jupiters were estimated, ranging between 20 and 120 gauss, compared to Jupiter's surface magnetic field of 4.3 gauss. In 2020, a radio emission in the 14–30 MHz band was detected from the Tau Boötis system, likely associated with cyclotron radiation from the poles of Tau Boötis b, which might be a signature of a planetary magnetic field. In 2021, a magnetic field generated by the hot Neptune HAT-P-11b became the first to be confirmed. The first tentative, unconfirmed detection of a magnetic field generated by a terrestrial exoplanet was reported in 2023 for YZ Ceti b.
Physical sciences
Planetary science
Astronomy
20501
https://en.wikipedia.org/wiki/Moose
Moose
The moose (plural 'moose'; used in North America) or elk (plural 'elk' or 'elks'; used in Eurasia) (Alces alces) is the world's tallest, largest and heaviest extant species of deer and the only species in the genus Alces. It is also the tallest, and the second-largest, land animal in North America, falling short only of the American bison in body mass. Most adult male moose have broad, palmate ("open-hand shaped") antlers; other members of the deer family have pointed antlers with a dendritic ("twig-like") configuration. Moose inhabit the circumpolar boreal forests or temperate broadleaf and mixed forests of the Northern Hemisphere, thriving in cooler, temperate areas as well as subarctic climates.

Hunting shaped the relationship between moose and humans, both in Eurasia and North America. Prior to the colonial era (around 1600–1700 CE), moose were one of many valuable sources of sustenance for certain tribal groups and First Nations. Hunting and habitat loss have reduced the moose's range; this fragmentation has led to sightings of "urban moose" in some areas. The moose has been reintroduced to some of its former habitats. Currently, the greatest populations occur in Canada, where they can be found in all provinces (excepting Nunavut and Prince Edward Island); additionally, substantial numbers of moose are found in Alaska, New England (with Maine having the most of the contiguous United States), the State of New York, Fennoscandia, the Baltic states, the Caucasus region, Belarus, Poland, Eastern Europe, Mongolia, Kazakhstan, and Russia. In the United States (outside of Alaska and New England), most moose are found further to the north, west and northeast (including Colorado, Idaho, Michigan, Minnesota, Montana, North Dakota, Utah, Vermont, Wisconsin and Wyoming), and individuals have been documented wandering as far south as western Oklahoma, northeastern Arizona and northwestern New Mexico.

The moose is predominantly a browser; its diet consists of both terrestrial and aquatic vegetation, depending on the season, with branches, twigs and dead wood making up a large portion of its winter diet. Predators of moose include wolves, bears, humans, wolverines (rarely, though they may take calves), and (rarely, if swimming in the ocean) orcas. Unlike most other deer species, moose do not form herds and are solitary animals, aside from calves, which remain with their mother until the cow begins estrus again (typically 18 months after the birth of a calf). At this point, the cow chases her calf away. Although generally slow-moving and sedentary, moose can become defensively aggressive, and move very quickly if angered or startled. Their mating season in the autumn features energetic fights between males competing for a female.

Taxonomy

Etymology

Alces alces is called a "moose" in North American English, but an "elk" in British English. The word "elk" in North American English refers to a completely different species of deer, Cervus canadensis, also called the wapiti (from Algonquin). A mature male moose is called a bull, a mature female a cow, and an immature moose of either sex a calf. In Classical Antiquity, the animal was known as alkē (ἄλκη) in Greek and alces in Latin, words probably borrowed from a Germanic language or another language of northern Europe. By the 8th century, during the Early Middle Ages, the species was known in Old English as elch, elh, or eolh, derived from the Proto-Germanic *elho-, *elhon-, and possibly connected with the Old Norse elgr.
Later, the species became known in Middle English as elk, elcke, or elke, appearing in the Latinized form alke, with the spelling alce borrowed directly from Latin alces. The word "elk" remained in usage because of English-speakers' familiarity with the species in Continental Europe; however, without any living animals around to serve as a reference, the meaning became rather vague, and by the 17th century "elk" had a meaning similar to "large deer". Dictionaries of the 18th century simply described "elk" as a deer that was "as large as a horse".

The word "moose" had first entered English by 1606 and is borrowed from the Algonquian languages (compare the Narragansett moos and Eastern Abenaki mos; according to early sources, these were likely derived from moosu, meaning 'he strips off'), and possibly involved forms from multiple languages mutually reinforcing one another. The Proto-Algonquian form was *mo·swa.

Description

On average, an adult moose stands 1.4–2.1 m high at the shoulder, which is more than 60 cm higher than the next-largest deer on average, the wapiti. The tail is short (6 cm to 8 cm in length) and vestigial in appearance; unlike other ungulates, the moose's tail is too short to swish away insects. Males (or "bulls") normally weigh from 380 to 700 kg and females (or "cows") typically weigh 200 to 490 kg, depending on racial or clinal as well as individual age or nutritional variations. The head-and-body length is 2.4–3.1 m, with the vestigial tail adding only a further 5–12 cm. The largest of all the races is the Alaskan subspecies (A. a. gigas), which can stand over 2.1 m at the shoulder, has a span across the antlers of 1.8 m and averages 634.5 kg in males and 478 kg in females. Typically, however, the antlers of a mature bull are between 1.2 m and 1.5 m. The largest confirmed size for this species was a bull shot at the Yukon River in September 1897 that weighed 820 kg and measured 2.33 m high at the shoulder. There have been reported cases of even larger moose, including a bull killed in 2004 that weighed 1,043 kg, and a bull that reportedly scaled 1,180 kg, but none are authenticated and some may not be considered reliable.

Antlers

Bull moose have antlers like other members of the deer family. The size and growth rate of antlers is determined by diet and age. Size and symmetry in the number of antler points signal bull moose health, and cows may select mates based on antler size and symmetry. Bull moose use their antlers to display dominance, to discourage competition, and to spar or fight rivals. The male's antlers grow as cylindrical beams projecting on each side of the head at right angles to the midline of the skull, and then fork. The lower prong of this fork may be either simple, or divided into two or three tines, with some flattening. Most moose have antlers that are broad and palmate (flat) with tines (points) along the outer edge. Within the ecologic range of the moose in Europe, northern populations display the palmate pattern of antlers, while the antlers of European moose residing in the southerly portion of the range are typically of the cervina dendritic pattern and comparatively small, perhaps due to evolutionary pressures of hunting by humans, who prize the large palmate antlers. European moose with antlers intermediate between the palmate and the dendritic form are found in the middle of the north–south range. Moose with antlers have more acute hearing than those without antlers; a study of trophy antlers using a microphone found that the palmate antler acts as a parabolic reflector, amplifying sound at the moose's ear. The antlers of mature Alaskan adult bull moose (5 to 12 years old) have a normal maximum spread greater than 200 cm.
By the age of 13, moose antlers decline in size and symmetry. The widest spread recorded was 2.05 m across. An Alaskan moose also holds the record for the heaviest antlers, at 36 kg. Antler beam diameter, not the number of tines, indicates age. In North America, moose (A. a. americanus) antlers are usually larger than those of Eurasian moose and have two lobes on each side, like a butterfly. Eurasian moose antlers resemble a seashell, with a single lobe on each side. In the North Siberian moose (A. a. bedfordiae), the posterior division of the main fork divides into three tines, with no distinct flattening. In the common moose (A. a. alces) this branch usually expands into a broad palmation, with one large tine at the base and a number of smaller snags on the free border. There is, however, a Scandinavian breed of the common moose in which the antlers are simpler and recall those of the East Siberian animals. The palmation appears to be more marked in North American moose than in the typical Scandinavian moose.

After the mating season, males drop their antlers to conserve energy for the winter. A new set of antlers will then regrow in the spring. Antlers take three to five months to fully develop, making them one of the fastest growing animal organs. Antler growth is "nourished by an extensive system of blood vessels in the skin covering, which contains numerous hair follicles that give it a 'velvet' texture." This requires intense grazing on a highly nutritious diet. By September the velvet is removed by rubbing and thrashing, which changes the colour of the antlers. Immature bulls may not shed their antlers for the winter, but retain them until the following spring. Birds, carnivores and rodents eat dropped antlers, as they are full of protein, and moose themselves will eat antler velvet for the nutrients.

If a bull moose is castrated, either by accident or by chemical means, he will shed his current set of antlers within two weeks and then immediately begin to grow a new set of misshapen and deformed antlers that he will wear the rest of his life without ever shedding again; similarly deformed antlers can result from a deficiency of testosterone caused by cryptorchidism or old age. These deformed antlers are composed of living bone which is still growing or able to grow, since testosterone is needed to stop antler growth; they may take one of two forms. "Cactus antlers", or velericorn antlers, usually retain the approximate shape of a normal moose's antlers but have numerous pearl-shaped exostoses on their surface; being made of living bone, they are easily broken but can grow back. Perukes are constantly growing, tumor-like antlers with a distinctive appearance similar to coral. Like roe deer, moose are more likely than the more developed cervine deer to develop perukes rather than cactus antlers, but unlike roe deer, moose do not suffer fatal decalcification of the skull as a result of peruke growth; rather, they can support their continued growth until the antlers become too large to be fully supplied with blood. The distinctive-looking perukes (often referred to as "devil's antlers") are the source of several myths and legends among many groups of Inuit as well as several other tribes of indigenous peoples of North America. In extremely rare circumstances, a cow moose may grow antlers. This is usually attributed to a hormone imbalance.
Proboscis and olfaction

The moose proboscis is distinctive among living cervids due to its large size; it also features nares that can be sealed shut when the moose is browsing aquatic vegetation. The moose proboscis likely evolved as an adaptation to aquatic browsing, with loss of the rhinarium, and development of a superior olfactory column separate from an inferior respiratory column. This separation contributes to the moose's keen sense of smell, which they employ to detect water sources, to find food under snow, and to detect mates or predators.

Hooves

As with all members of the order Artiodactyla (even-toed ungulates), moose feet have two large keratinized hooves corresponding to the third and fourth toe, with two small posterolateral dewclaws (vestigial digits) corresponding to the second and fifth toe. The hoof of the fourth digit is broader than that of the third digit, while the inner hoof of the third digit is longer than that of the fourth digit. This foot configuration may favor striding on soft ground. The moose hoof splays under load, increasing surface area, which limits sinking of the moose foot into soft ground or snow, and which increases efficiency when swimming. The body weight per footprint surface area of the moose foot is intermediate between that of the pronghorn foot (which is stiff, lacks dewclaws, and is optimized for high-speed running) and the caribou foot (which is more rounded with large dewclaws, optimized for walking in deep snow). The moose's body weight per surface area of footprint is about twice that of the caribou.

Skin and fur

Moose skin is typical of the deer family. Moose fur consists of four types of hair: eyelashes, whiskers, guard hairs and wool hairs. Hair length and hair density vary according to season, age, and body region. The coat has two layers: a top layer of long guard hairs and a soft wooly undercoat. The guard hairs are hollow and filled with air for better insulation, which also helps them stay afloat when swimming.

Dewlap

Both male and female moose have a dewlap or bell, which is a fold of skin under the chin. Its exact function is unknown, but some morphologic analyses suggest a cooling (thermoregulatory) function. Other theories include a fitness signal in mating, a visual and olfactory signal, or a dominance signal by males, as are the antlers.

Ecology and biology

Diet

The moose is a browsing herbivore and is capable of consuming many types of plant or fruit. The average adult moose needs to consume 9,770 kcal (40.9 MJ) per day to maintain its body weight. Much of a moose's energy is derived from terrestrial vegetation, mainly consisting of forbs and other non-grasses, and fresh shoots from trees such as willow and birch. As these terrestrial plants are rather low in sodium, as much as half of its diet usually consists of aquatic plants, including lilies and pondweed, which, while lower in energy content, provide the moose with its sodium requirements. In winter, moose are often drawn to roadways to lick the salt that is used as a snow and ice melter. A typical moose, weighing 360 kg, can eat up to 32 kg of food per day.

Moose lack upper front teeth, but have eight sharp incisors on the lower jaw. They also have a tough tongue, lips and gums, which aid in the eating of woody vegetation. Moose have six pairs of large, flat molars and, ahead of those, six pairs of premolars, to grind up their food. A moose's upper lip is very sensitive, to help distinguish between fresh shoots and harder twigs, and is prehensile, for grasping their food.
In the summer, moose may use this prehensile lip for grabbing branches and pulling, stripping the entire branch of leaves in a single mouthful, or for pulling forbs, like dandelions, or aquatic plants up by the base, roots and all. A moose's diet often depends on its location, but they seem to prefer the new growths from deciduous trees with a high sugar content, such as white birch, trembling aspen and striped maple, among many others. To reach high branches, a moose may bend small saplings down, using its prehensile lip, mouth or body. For larger trees a moose may stand erect and walk upright on its hind legs, allowing it to reach branches up to 4.26 m or higher above the ground. Moose may consume ferns from time to time.

Moose are excellent swimmers and are known to wade into water to eat aquatic plants. This trait serves a second purpose in cooling down the moose on summer days and in ridding it of black flies. Moose are thus attracted to marshes and river banks during warmer months, as both provide suitable vegetation to eat and water to wet themselves in. Moose have been known to dive over 5.5 m to reach plants on lake bottoms, and the complex snout may assist the moose in this type of feeding. Moose are the only deer that are capable of feeding underwater. As an adaptation for feeding on plants underwater, the nose is equipped with fatty pads and muscles that close the nostrils when exposed to water pressure, preventing water from entering the nose. Other species can pluck plants from the water too, but these need to raise their heads in order to swallow.

Moose are not grazing animals but browsers (concentrate selectors). Like giraffes, moose carefully select foods with less fiber and more concentrations of nutrients. Thus, the moose's digestive system has evolved to accommodate this relatively low-fiber diet. Unlike most hooved, domesticated animals (ruminants), moose cannot digest hay, and feeding it to a moose can be fatal. The moose's varied and complex diet is typically expensive for humans to provide, and free-range moose require a lot of forested hectarage for sustainable survival, which is one of the main reasons moose have never been widely domesticated.

Natural predators

A full-grown moose has few enemies except Siberian tigers (Panthera tigris tigris), which regularly prey on adult moose, but a pack of gray wolves (Canis lupus) can still pose a threat, especially to females with calves. Brown bears (Ursus arctos) are also known to prey on moose of various sizes and are the only predator besides the wolf to attack moose both in Eurasia and North America. In western Russia, moose provide about 15% of the annual estimated dietary energy content for brown bears and are the most important food source for these predators during spring. However, brown bears are more likely to scavenge a wolf kill or to take young moose than to hunt adult moose on their own. Black bears (Ursus americanus) and cougars (Puma concolor) can be significant predators of moose calves in May and June and can, in rare instances, prey on adults (mainly cows rather than the larger bulls). Wolverines (Gulo gulo) are most likely to eat moose as carrion but have killed moose, including adults, when the large ungulates are weakened by harsh winter conditions. Orcas (Orcinus orca) are the moose's only confirmed marine predator, as they have been known to prey on moose and other deer swimming between islands off North America's Northwest Coast.
However, such kills are rare and a matter of opportunity, as moose are not a regular part of the orca diet. There is at least one recorded instance of a moose being scavenged by a Greenland shark (Somniosus microcephalus).

In some areas, moose are the primary source of food for wolves. Moose usually flee upon detecting wolves. Wolves usually follow moose at a distance of , occasionally at a distance of . Attacks from wolves against young moose may last seconds, though sometimes they can be drawn out for days with adults. Sometimes, wolves will chase moose into shallow streams or onto frozen rivers, where their mobility is greatly impeded. Moose will sometimes stand their ground and defend themselves by charging at the wolves or lashing out at them with their powerful hooves. Wolves typically kill moose by tearing at their haunches and perineum, causing massive blood loss. Occasionally, a wolf may immobilize a moose by biting its sensitive nose, the pain of which can paralyze a moose. Wolf packs primarily target calves and elderly animals, but can and will take healthy adult moose. Moose between the ages of two and eight are seldom killed by wolves. Though moose are usually hunted by packs, there are cases in which single wolves have successfully killed healthy, fully grown moose.

Research into moose predation suggests that their response to perceived threats is learned rather than instinctual. In practical terms this means moose are more vulnerable in areas where wolf or bear populations were decimated in the past but are now rebounding. These same studies suggest, however, that moose learn quickly and adapt, fleeing an area if they hear or smell wolves, bears, or scavenger birds such as ravens. Moose are also subject to various diseases and forms of parasitism. In northern Europe, the moose botfly is a parasite whose range seems to be spreading.

Parasites

Moose typically carry a heavy burden of parasites, both externally and internally. Parasitosis is an important cause of moose morbidity and mortality and also contributes to vulnerability to predators. Ectoparasites of moose include the moose nose bot fly and winter ticks. Endoparasites of moose include dog tapeworm, meningeal worm, lungworm, and roundworm.

Social structure and reproduction

Moose are mostly diurnal. They are generally solitary, with the strongest bonds between mother and calf. Although moose rarely gather in groups, there may be several in close proximity during the mating season. Rutting and mating occur in September and October. During the rut, mature bulls will cease feeding completely for a period of approximately two weeks; this fasting behavior has been attributed to neurophysiological changes related to redeployment of olfaction for detection of moose urine and moose cows. The males are polygynous and will seek several females to breed with. During this time both sexes will call to each other. Males produce heavy grunting sounds that can be heard from up to 500 m away, while females produce wail-like sounds. Males will fight for access to females. Initially, the males assess which of them is dominant, and one bull may retreat; however, the interaction can escalate to a fight using their antlers.

Female moose have an eight-month gestation period, usually bearing one calf, or twins if food is plentiful, in May or June. Twinning can run as high as 30% to 40% with good nutrition. Newborn moose have fur with a reddish hue, in contrast to the brown appearance of an adult.
The young will stay with the mother until just before the next young are born. The life span of an average moose is about 15–25 years. Moose populations are stable at 25 calves for every 100 cows at 1 year of age. With availability of adequate nutrition, mild weather, and low predation, moose have a huge potential for population expansion. Aggression Moose are not typically aggressive towards humans, but will be aggressive when provoked or frightened. Moose attack more people than bears and wolves combined, but usually with only minor consequences. In the Americas, moose injure more people than any other wild mammal; worldwide, only hippopotamuses injure more. When harassed or startled by people or in the presence of a dog, moose may charge. Also, as with bears or most wild animals, moose accustomed to being fed by people may act aggressively when denied food. During the fall mating season, bulls may be aggressive toward humans. Cows are protective of young calves and will attack humans who come close, especially if they come between mother and calf. Moose are not territorial, do not view humans as food, and usually will not pursue humans who run away. Moose are unpredictable. They are most likely to attack if annoyed or harassed, or if approached too closely. A moose that has been harassed may vent its anger on anyone in the vicinity, and they often do not make distinctions between their tormentors and innocent passersby. Moose are very limber animals with highly flexible joints and sharp, pointed hooves, and are capable of kicking with both front and back legs. Unlike other large, hoofed mammals, such as horses, moose can kick in all directions, including sideways. Thus, there is no safe side from which to approach. Moose often give warning signs prior to attacking, displaying aggression by means of body language. Maintained eye contact is usually the first sign of aggression, while laid-back ears or a lowered head is a sign of agitation. When the hairs on the back of the moose's neck and shoulders (hackles) stand up, a charge is usually imminent. The Anchorage Visitor Centers warn tourists that "...a moose with its hackles raised is a thing to fear." Moose cows are more likely to emit protest moans when courted by small males. This attracts the attention of large males, promotes male-male competition and violence, reduces harassment of cows by small males, and increases mating opportunities with large males. This in turn means that the cow moose has at least a small degree of control over which bulls she mates with. Moose often show aggression to other animals as well, especially predators. Bears are common predators of moose calves and, rarely, adults. Alaskan moose have been reported to successfully fend off attacks from both black and brown bears. Moose have been known to stomp attacking wolves, which makes them less preferred as prey to the wolves. Moose are fully capable of killing bears and wolves. In one rare event, a female moose killed two adult male wolves. A moose of either sex that is confronted by danger may let out a loud roar, more resembling that of a predator than a prey animal. European moose are often more aggressive than North American moose, such as the moose in Sweden, which often become very agitated at the sight of a predator. However, like all ungulates known to attack predators, the more aggressive individuals are always darker in color, with the darkest coloring usually in areas facing the opponent, thus serving as a natural warning to other animals. 
Habitat, range, and distribution

Habitat

Moose require habitat with adequate edible plants (e.g., pond grasses, young trees and shrubs), cover from predators, and protection from extremely hot or cold weather. Moose travel among different habitats with the seasons to address these requirements. Moose are cold-adapted mammals with thickened skin, a dense, heat-retaining coat, and a low surface-to-volume ratio, which provides excellent cold tolerance but poor heat tolerance. Moose survive hot weather by accessing shade or cooling wind, or by immersion in cool water. In hot weather, moose are often found wading or swimming in lakes or ponds. When heat-stressed, moose may fail to forage adequately in summer and may not gain adequate body fat to survive the winter. Also, moose cows may not calve without adequate summer weight gain. Moose require access to both young forest for browsing and mature forest for shelter and cover. Forest disturbed by fire and logging promotes the growth of fodder for moose. Moose also require access to mineral licks, safe places for calving and aquatic feeding sites. Moose avoid areas with little or no snow, as this increases the risk of predation by wolves, and avoid areas with deep snow, as this impairs mobility. Thus, moose select habitat on the basis of trade-offs between risk of predation, food availability, and snow depth.

With reintroduction of bison into boreal forest, there was some concern that bison would compete with moose for winter habitat, and thereby worsen the population decline of moose. However, this does not appear to be a problem. Moose prefer sub-alpine shrublands in early winter, while bison prefer wet sedge valley meadowlands in early winter. In late winter, moose prefer river valleys with deciduous forest cover or alpine terrain above the tree line, while bison prefer wet sedge meadowlands or sunny southern grassy slopes.

North America

After expanding for most of the 20th century with improved habitat and protection, the moose population of North America has been in steep decline since the 1990s. This decline has been attributed to the opening of roads and landscapes into the moose's northern range, allowing deer to become populous in areas where they were not previously common. This encroachment by deer on moose habitat brought moose into contact with previously unfamiliar pathogens, including brainworm and liver fluke, and these parasites are believed to have contributed to the population decline of moose.

In North America, the moose range includes almost all of Canada (excluding the arctic and Vancouver Island), most of Alaska, northern and eastern North Dakota, northern New England, the Adirondack Mountain region and Taconic highlands of northeast New York State, the upper Rocky Mountains, northern Minnesota, northern Wisconsin, Michigan's Upper Peninsula, and Isle Royale in Lake Superior. This massive range, containing diverse habitats, contains four of the six North American subspecies. In the West, moose populations extend across Canada (British Columbia and Alberta). Isolated groups have been verified as far south as the mountains of Utah and Colorado and as far west as the Lake Wenatchee area of the Washington Cascades. In the northwestern US, the range includes Wyoming, Montana, Idaho, and smaller areas of Washington and Oregon.
Moose have extended their range southwards in the western Rocky Mountains, with initial sightings in Yellowstone National Park in 1868, and then to the northern slope of the Uinta Mountains in Utah in the first half of the twentieth century. This is the southernmost naturally established moose population in the United States. In 1978, a few breeding pairs were reintroduced in western Colorado, and the state's moose population is now more than 2,400.

In northeastern North America, the Eastern moose's history is very well documented: moose meat was a staple in the diet of indigenous peoples for centuries. The common name "moose" was brought into English from the word used by those who lived in present-day coastal Rhode Island. The indigenous people often used moose hides for leather and its meat as an ingredient in pemmican, a type of dried jerky used as a source of sustenance in winter or on long journeys. The historical range of the subspecies extended from well into Quebec, the Maritimes, and Eastern Ontario south to include all of New England, finally ending in the very northeastern tip of Pennsylvania in the west, cutting off somewhere near the mouth of the Hudson River in the south. The moose has been extinct in much of the eastern U.S. for as long as 150 years, due to colonial-era overhunting and destruction of its habitat: Dutch, French, and British colonial sources all attest to its presence in the mid-17th century from Maine south to areas within 160 km of present-day Manhattan. However, by the 1870s, only a handful of moose existed in this entire region in very remote pockets of forest; less than 20% of suitable habitat remained.

Since the 1980s, however, moose populations have rebounded, thanks to regrowth of plentiful food sources, abandonment of farmland, better land management, clean-up of pollution, and natural dispersal from the Canadian Maritimes and Quebec. South of the Canada–US border, Maine has most of the population, with a 2012 headcount of about 76,000 moose. Dispersals from Maine over the years have resulted in healthy, growing populations in both Vermont and New Hampshire, notably near bodies of water and as high up as above sea level in the mountains. In Massachusetts, moose had gone extinct by 1870, but re-colonized the state in the 1960s, with the population expanding from Vermont and New Hampshire; by 2010, the population was estimated at 850–950. Moose reestablished populations in eastern New York and Connecticut and appeared headed south towards the Catskill Mountains, a former habitat. In the Midwest U.S., moose are primarily limited to the upper Great Lakes region, but strays, primarily immature males, have been found as far south as eastern Iowa. For unknown reasons, the moose population is declining rapidly in the Midwest. Moose were successfully introduced on Newfoundland in 1878 and 1904, where they are now the dominant ungulate, and somewhat less successfully on Anticosti Island in the Gulf of Saint Lawrence.

Decline in population

Since the 1990s, moose populations have declined dramatically in much of temperate North America, although they remain stable in Arctic and subarctic regions. The exact causes of specific die-offs are not determined, but most documented mortality events were due to wolf predation, bacterial infection due to injuries sustained from predators, and parasites from white-tailed deer to which moose have not developed a natural defense, such as liver flukes, brain worms and winter tick infestations.
Predation of moose calves by brown bears is also significant. Landscape change from salvage logging of forest damage caused by the mountain pine beetle has resulted in greater foraging in logged areas by female moose, and this is the leading hypothesis as to why the moose population is declining in eastern North American forests, as this likely leads to increased predation. An alternative hypothesis among biologists for generalized, non-hunting declines in moose populations at the southern extent of their range is increasing heat stress brought on by the rapid seasonal temperature upswings as a result of human-induced climate change. Biologists studying moose populations typically use warm-season heat-stress thresholds of between 14 and 24 °C. However, the minor average temperature increase of 0.83–1.11 °C (1.5–2 °F) over the last 100 years has resulted in milder winters that induce favorable conditions for ticks, parasites and other invasive species to flourish within the southern range of moose habitat in North America. The moose population in New Hampshire fell from 7,500 in the early 2000s to a 2014 estimate of 4,000, and in Vermont the numbers were down to 2,200 from a high of 5,000 animals in 2005. Much of the decline has been attributed to the winter tick, which, between 2017 and 2019, accounted for 74% of all winter mortality and 91% of winter calf deaths in Vermont. Moose with heavy tick infections will rub their fur down to raw skin trying to get the ticks off, making them look white when their outer coat rubs off; locals call them "ghost moose". Loss of the insulating winter coat through attempts to rid the moose of winter ticks increases the risk of hypothermia in winter.

Europe and Asia

In Europe, moose are currently found in large numbers throughout Norway, Sweden, Finland, Latvia, Estonia and Poland, with more modest numbers in the southern Czech Republic, Belarus, and northern Ukraine. They are also widespread through Russia, from the borders with Finland south towards the borders with Estonia, Belarus and Ukraine, and stretching far eastwards to the Yenisei River in Siberia. The European moose was native to most temperate areas with suitable habitat on the continent, and even Scotland, from the end of the last Ice Age, as Europe had a mix of temperate boreal and deciduous forest. Up through Classical times, the species was certainly thriving in both Gaul and Magna Germania, as it appears in military and hunting accounts of the age. However, as the Roman era faded into medieval times, the beast slowly disappeared: soon after the reign of Charlemagne, the moose disappeared from France, where its range extended from Normandy in the north to the Pyrenees in the south. Farther east, it survived in Alsace and the Netherlands until the 9th century, as the marshlands in the latter were drained and the forests were cleared away for feudal lands in the former. It was gone from Switzerland by the year 1000, from the western Czech Republic by 1300, from Mecklenburg in Germany by c. 1600, and from Hungary and the Caucasus since the 18th and 19th century, respectively. By the early 20th century, the last strongholds of the European moose appeared to be in Fennoscandian areas and patchy tracts of Russia, with a few migrants found in what is now Estonia and Lithuania.
The USSR and Poland managed to restore portions of the range within their borders (such as the 1951 reintroduction into Kampinos National Park and the later 1958 reintroduction in Belarus), but political complications limited the ability to reintroduce it to other portions of its range. Attempts in 1930 and again in 1967 in marshland north of Berlin were unsuccessful. At present in Poland, populations are recorded in the Biebrza river valley, Kampinos, and in Białowieża Forest. It has migrated into other parts of Eastern Europe and has been spotted in eastern and southern Germany. Unsuccessful thus far in recolonizing these areas via natural dispersal from source populations in Poland, Belarus, Ukraine, the Czech Republic, and Slovakia, it appears to be having more success migrating south into the Caucasus. It is listed under Appendix III of the Bern Convention.

In 2008, two moose were reintroduced into the Scottish Highlands in Alladale Wilderness Reserve. The moose disappeared as a breeding species from Denmark about 4,500 years ago (in the last century, a very small number have lived for periods in Zealand without establishing a population after swimming across the Øresund from Sweden), but in 2016–17 ten were introduced to Lille Vildmose from Sweden. In 2020, this population had increased to about 25 animals.

The East Asian moose populations confine themselves mostly to the territory of Russia, with much smaller populations in Mongolia and Northeastern China. Moose populations are relatively stable in Siberia and increasing on the Kamchatka Peninsula. In Mongolia and China, where poaching took a great toll on moose, forcing them to near extinction, they are protected, but enforcement of the policy is weak and demand for traditional medicines derived from deer parts is high. In 1978, the Regional Hunting Department transported 45 young moose to the center of Kamchatka. These moose were brought from Chukotka, home to the largest moose on the planet. Kamchatka now regularly produces the largest trophy moose shot anywhere in the world each season. As it is a fertile environment for moose, with a milder climate, less snow, and an abundance of food, moose quickly bred and settled along the valley of the Kamchatka River and many surrounding regions. The population in the past 20 years has risen to over 2,900 animals. The size of the moose varies: following Bergmann's rule, populations in the south (A. a. cameloides) are usually smaller, while moose in the north and northeast (A. a. buturlini) can match the imposing sizes of the Alaskan moose (A. a. gigas) and are prized by trophy hunters.

New Zealand

In 1900, an attempt to introduce moose into the Hokitika area failed; then in 1910 ten moose (four bulls and six cows) were introduced into Fiordland. This area is considered a less than suitable habitat, and subsequent low numbers of sightings and kills have led to some presumption of this population's failure. The last proven sighting of a moose in New Zealand was in 1952. However, a moose antler was found in 1972, and DNA tests showed that hair collected in 2002 was from a moose. There has been extensive searching, and while automated cameras failed to capture photographs, evidence was seen of bedding spots, browsing, and antler marks.

Evolutionary history

Moose are members of the subfamily Capreolinae. Members of the moose lineage extend back into the Pliocene–Early Pleistocene.
Some scientists group the moose and all its extinct relatives into one genus, Alces, while others, such as Augusto Azzaroli, restrict Alces to the living species, placing the fossil species into the genera Cervalces (stag moose) and Libralces.

The earliest known species in the moose lineage is Libralces gallicus, which lived in the Pliocene–Early Pleistocene. Libralces gallicus came from the warm savannas of Pliocene Europe, with the best-preserved skeletons being found in southern France. L. gallicus was 1.25 times larger than the Alaskan moose in linear dimensions, making it nearly twice as massive. L. gallicus had many striking differences from its modern descendants. It had a longer, narrower snout and a less-developed nasal cavity, more resembling that of a modern deer, lacking any sign of the modern moose-snout. Its face resembled that of the modern wapiti. However, the rest of its skull structure, skeletal structure and teeth bore strong resemblance to those features that are unmistakable in modern moose, indicating a similar diet. Its antlers consisted of a horizontal bar 2.5 m long, with no tines, ending in small palmations. Its skull and neck structure suggest an animal that fought using high-speed impacts, much like the Dall sheep, rather than locking and twisting antlers in the way modern moose fight. Their long legs and bone structure suggest an animal that was adapted to running at high speeds over rough terrain.

Libralces gallicus was followed by Cervalces carnutorum during the first half of the Early Pleistocene. Cervalces carnutorum was soon followed by a much larger species called Cervalces latifrons (broad-fronted stag-moose), which first appeared during the late Early Pleistocene. Many fossils of Cervalces latifrons have been found across Eurasia. Like its descendants, it inhabited mostly northern latitudes, and was probably well-adapted to the cold. C. latifrons was the largest deer known to have ever existed, standing more than 2.1 m tall at the shoulders. This is bigger than even the Irish elk, which was 1.8 m tall at the shoulders. Its antlers were smaller than the Irish elk's, but comparable in size to those of L. gallicus. However, the antlers had a shorter horizontal bar and larger palmations, more resembling those of a modern moose. Probably sometime in the Middle Pleistocene, Cervalces latifrons migrated into North America, giving rise to the stag moose (Cervalces scotti). The modern moose is thought to have evolved from Cervalces latifrons at around the end of the Middle Pleistocene to the beginning of the Late Pleistocene, probably somewhere in East Asia, with the earliest fossils of the species in Europe dating to the early Late Pleistocene. The modern moose only arrived in North America around 15,000 years ago, at the end of the Late Pleistocene.

Populations

North America:
In Canada: There are an estimated 500,000 to 1,000,000 moose, with 150,000 in Newfoundland in 2007 descended from just four that were introduced in the 1900s.
In the United States: There are estimated to be around 300,000:
Alaska: The state's Department of Fish and Game estimated 200,000 in 2011.
Northeast: A wildlife ecologist estimated 50,000 in New York and New England in 2007, with expansion expected.
Rocky Mountain states: Wyoming is said to have the largest share in its six-state region, and its Fish and Game Commission estimated 7,692 in 2009.
Upper Midwest: Michigan: 2,000 on Isle Royale (2019) and an estimated 433 in its Upper Peninsula (2011); Wisconsin: 20–40 close to its border with Michigan (2003); Minnesota: 5,600 in its northeast (2010) and under 100 in its northwest (2009). North Dakota closed one of its moose-hunting geographic units in 2011 due to a low moose population, and issued 162 single-kill licenses to hunters, each restricted to one of the remaining nine units.

Europe and Asia:
Finland: In 2009, there was a summer population of 115,000.
Norway: In 2009, there was a winter population of around 120,000. In 2015, 31,131 moose were shot. In 1999, a record number of 39,422 moose were shot.
Latvia: In 2015, there were 21,000.
Estonia: 11,000–7,000
Lithuania: around 14,000 in 2016
Poland: 28,000
Czech Republic: maximum of 50
Russia: In 2007, there were approximately 600,000.
Sweden: The summer population is estimated to be 300,000–400,000. Around 100,000 are shot each fall. About 10,000 are killed in traffic accidents yearly.

Subspecies

Relationship with humans

History

European rock drawings and cave paintings reveal that moose have been hunted since the Stone Age. Excavations in Alby, Sweden, adjacent to the Stora Alvaret, have yielded moose antlers in wooden hut remains from 6000 BCE, indicating some of the earliest moose hunting in northern Europe. In northern Scandinavia one can still find remains of trapping pits used for hunting moose. These pits, which can be up to 4 × 7 m in area and 2 m deep, would have been camouflaged with branches and leaves. They would have had steep sides lined with planks, making it impossible for the moose to escape once it fell in. The pits are normally found in large groups, crossing the moose's regular paths and stretching over several km. Remains of wooden fences designed to guide the animals toward the pits have been found in bogs and peat. In Norway, an early example of these trapping devices has been dated to around 3700 BC. Trapping elk in pits is an extremely effective hunting method, and as early as the 16th century the Norwegian government tried to restrict their use; nevertheless, the method was in use until the 19th century.

The earliest recorded description of the moose is in Julius Caesar's Commentarii de Bello Gallico, where it is described thus:

There are also [animals], which are called alces (moose). The shape of these, and the varied color of their skins, is much like roes, but in size they surpass them a little and are destitute of horns, and have legs without joints and ligatures; nor do they lie down for the purpose of rest, nor, if they have been thrown down by any accident, can they raise or lift themselves up. Trees serve as beds to them; they lean themselves against them, and thus reclining only slightly, they take their rest; when the huntsmen have discovered from the footsteps of these animals whither they are accustomed to betake themselves, they either undermine all the trees at the roots, or cut into them so far that the upper part of the trees may appear to be left standing. When they have leant upon them, according to their habit, they knock down by their weight the unsupported trees, and fall down themselves along with them.

In book 8, chapter 16 of Pliny the Elder's Natural History from 77 CE, the elk and an animal called achlis, which is presumably the same animal, are described thus:

... there is, also, the moose, which strongly resembles our steers, except that it is distinguished by the length of the ears and of the neck.
There is also the achlis, which is produced in the land of Scandinavia; it has never been seen in this city, although we have had descriptions of it from many persons; it is not unlike the moose, but has no joints in the hind leg. Hence, it never lies down, but reclines against a tree while it sleeps; it can only be taken by previously cutting into the tree, and thus laying a trap for it, as otherwise, it would escape through its swiftness. Its upper lip is so extremely large, for which reason it is obliged to go backwards when grazing; otherwise, by moving onwards, the lip would get doubled up.

As food

Moose are hunted as a game species in many of the countries where they are found. Moose meat tastes, wrote Henry David Thoreau in The Maine Woods, "like tender beef, with perhaps more flavour; sometimes like veal". While the flesh has protein levels similar to those of other comparable red meats (e.g. beef, deer and wapiti), it has a low fat content, and the fat that is present consists of a higher proportion of polyunsaturated fats than saturated fats. Dr. Valerius Geist, who emigrated to Canada from the Soviet Union, wrote about moose hunting and management in his 1999 book Moose: Behaviour, Ecology, Conservation. Boosting moose populations in Alaska for hunting purposes is one of the reasons given for allowing aerial or airborne methods to remove wolves in designated areas, e.g., Craig Medred: "A kill of 124 wolves would thus translate to [the survival of] 1488 moose or 2976 caribou or some combination thereof". Some scientists believe that this artificial inflation of game populations is actually detrimental to both caribou and moose populations as well as the ecosystem as a whole, because studies have shown that when these game populations are artificially boosted, it leads to both habitat destruction and a crash in these populations.

Consumption of offal

Cadmium levels are high in Finnish moose liver and kidneys, with the result that consumption of these organs from moose more than one year old is prohibited in Finland. As a result of a study reported in 1988, the Ontario Ministry of Natural Resources recommended against the consumption of moose and deer kidneys and livers; levels of cadmium were found to be considerably higher than in Scandinavia. The New Brunswick Department of Natural Resources advises hunters not to consume cervid offal. Cadmium intake has been found to be elevated amongst all consumers of moose meat, though the meat was found to contribute only slightly to the daily cadmium intake. However, the consumption of moose liver or kidneys significantly increased cadmium intake, with the study revealing that heavy consumers of moose organs have a relatively narrow safety margin below the levels which would probably cause adverse health effects.

Vehicle collisions

The center of mass of a moose is above the hood of most passenger cars. In a collision, the impact crushes the front roof beams and individuals in the front seats. Collisions of this type are frequently fatal; seat belts and airbags offer little protection. In collisions with higher vehicles (such as trucks), most of the deformation is to the front of the vehicle and the passenger compartment is largely spared. Moose collisions have prompted the development of a vehicle test referred to as the "moose test" (Swedish: älgtest; German: Elchtest). A Massachusetts study found that moose–vehicle collisions had a very high human fatality rate and that such collisions caused the death of 3% of the Massachusetts moose population annually.
Moose warning signs are used on roads in regions where there is a danger of collision with the animal. The triangular warning signs common in Sweden, Norway, and Finland have become coveted souvenirs among tourists traveling in these countries, causing road authorities so much expense that the moose signs have been replaced with imageless generic warning signs in some regions. In Ontario, Canada, an estimated 265 moose die each year as a result of collision with trains. Moose–train collisions were more frequent in winters with above-average snowfall. In January 2008, the Norwegian newspaper Aftenposten estimated that some 13,000 moose had died in collisions with Norwegian trains since 2000. The state agency in charge of railroad infrastructure (Jernbaneverket) plans to spend 80 million Norwegian kroner to reduce the collision rate in the future by fencing the railways, clearing vegetation from near the tracks, and providing alternative snow-free feeding places for the animals elsewhere. In the Canadian province of New Brunswick, collisions between automobiles and moose are frequent enough that all new highways have fences to prevent moose from accessing the road, as has long been done in Finland, Norway, and Sweden. A demonstration project, Highway 7 between Fredericton and Saint John, which has one of the highest frequencies of moose collisions in the province, did not have these fences until 2008, although it was and continues to be extremely well signed. Newfoundland and Labrador recommended that motorists use caution between dusk and dawn, because that is when moose are most active and most difficult to see, increasing the risk of collisions. Local moose sightings are often reported on radio stations so that motorists can take care while driving in particular areas. An electronic "moose detection system" was installed on two sections of the Trans-Canada Highway in Newfoundland in 2011, but the system proved unreliable and was removed in 2015. At the time, the moose population in Newfoundland was increasing, along with the number of road accidents. In Sweden, a road will not be fenced unless it experiences at least one moose accident per km per year. In eastern Germany, where the scarce population is slowly increasing, there have been two road accidents involving moose since 2000.
Domestication
Domestication of moose was investigated in the Soviet Union before World War II. Early experiments were inconclusive, but with the creation of a moose farm at Pechora-Ilych Nature Reserve in 1949, a small-scale moose domestication program was started, involving attempts at selective breeding of animals on the basis of their behavioural characteristics. Since 1963, the program has continued at Kostroma Moose Farm, which had a herd of 33 tame moose as of 2003. Although at this stage the farm is not expected to be a profit-making enterprise, it obtains some income from the sale of moose milk and from visiting tourist groups. Its main value, however, is seen in the opportunities it offers for research into the physiology and behavior of the moose, as well as in the insights it provides into the general principles of animal domestication. In Sweden, there was a debate in the late 18th century about the national value of using the moose as a domestic animal. Among other things, it was proposed that the moose be used for postal distribution, and there was a suggestion to develop a moose-mounted cavalry.
Such proposals remained unimplemented, mainly because the extensive hunting of moose that was deregulated in the 1790s nearly drove the species to extinction. While there have been documented cases of individual moose (e.g. Älgen Stolta) being used for riding or for pulling carts and sleds, Björklöf concludes that no wide-scale use has occurred outside fairy tales.
Heraldry
As one of the Canadian national symbols, the moose occurs on several Canadian coats of arms, including those of Newfoundland and Labrador and of Ontario. The moose is also a common charge in European heraldry; in Finland, for example, it appears on the coats of arms of the Hirvensalmi and Mäntsälä municipalities. The Seal of Michigan features a moose.
Magnetic tape
Magnetic tape is a medium for magnetic storage made of a thin, magnetizable coating on a long, narrow strip of plastic film. It was developed in Germany in 1928, based on the earlier magnetic wire recording from Denmark. Devices that use magnetic tape can record and play back audio, video, and binary computer data with relative ease. Magnetic tape revolutionized sound recording, reproduction, and broadcasting. It allowed radio, which had always been broadcast live, to be recorded for later or repeated airing. Since the early 1950s, magnetic tape has been used with computers to store large quantities of data and is still used for backup purposes. Magnetic tape begins to degrade after 10–20 years and therefore is not an ideal medium for long-term archival storage. Exceptions are data tape formats such as LTO, which are specifically designed for long-term archiving. Information is often recorded on magnetic tape in tracks: long, narrow areas of magnetically recorded information that are separate from, and often spaced apart from, adjacent tracks. Tracks are often parallel to the length of the tape, in which case they are known as longitudinal tracks, or diagonal relative to the length of the tape, as in helical scan. There are also transverse scan (used in Quadruplex videotape) and arcuate scan. Azimuth recording is used to reduce or eliminate the spacing that exists between adjacent tracks.
Durability
While good for short-term use, magnetic tape is highly prone to disintegration. Depending on the environment, this process may begin after 10–20 years. Over time, magnetic tape made in the 1970s and 1980s can suffer from a type of deterioration called sticky-shed syndrome. It is caused by hydrolysis of the binder in the tape and can render the tape unusable.
Successors
Since the introduction of magnetic tape, other technologies have been developed that can perform the same functions and therefore replace it: hard disk drives in computers replaced cassette tape readers such as the Atari Program Recorder and the Commodore Datasette for software, CDs and MiniDiscs replaced cassette tapes for audio, and DVDs replaced VHS tapes for video. Despite this, technological innovation continues, and Sony and IBM continue to advance tape capacity.
Uses
Audio
Magnetic tape was invented for recording sound by Fritz Pfleumer in 1928 in Germany. Because of escalating political tensions and the outbreak of World War II, these developments in Germany were largely kept secret. Although the Allies knew from their monitoring of Nazi radio broadcasts that the Germans had some new form of recording technology, its nature was not discovered until the Allies acquired German recording equipment as they invaded Europe at the end of the war. It was only after the war that Americans, particularly Jack Mullin, John Herbert Orr, and Richard H. Ranger, were able to bring this technology out of Germany and develop it into commercially viable formats. Bing Crosby, an early adopter of the technology, made a large investment in the tape hardware manufacturer Ampex. A wide variety of audiotape recorders and formats have been developed since.
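As a rough, hypothetical illustration of how the track layout described earlier translates into recording capacity, the sketch below multiplies an assumed track count, linear bit density, and tape length. Every number in it is a made-up placeholder for illustration, not the specification of any real tape format.

```python
# Back-of-the-envelope capacity of a tape with parallel longitudinal tracks.
# Every parameter value below is an illustrative assumption, not a real spec.

def raw_capacity_bytes(num_tracks: int, bits_per_mm: float, tape_length_m: float) -> float:
    """Raw (unformatted) capacity: tracks x bits-per-track, converted to bytes."""
    bits_per_track = bits_per_mm * tape_length_m * 1000.0  # 1 m = 1000 mm
    return num_tracks * bits_per_track / 8.0

# Hypothetical example: 1,000 tracks, 5,000 bits/mm, 800 m of tape.
capacity = raw_capacity_bytes(num_tracks=1000, bits_per_mm=5000.0, tape_length_m=800.0)
print(f"{capacity / 1e12:.1f} TB raw")  # -> 0.5 TB with these assumed numbers
```

Real formats add error-correction overhead, servo tracks, and inter-track guard bands, so usable capacity is lower than such a raw figure.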
Some magnetic tape-based formats include:
- Reel-to-reel
- Fidelipac
- Stereo-Pak (Muntz Stereo-Pak, commonly known as the 4-track cartridge)
- Perforated (sprocketed) film audio magnetic tape (sepmag, perfotape, sound follower tape, magnetic film)
- 8-track tape
- Compact Cassette
- Elcaset
- RCA tape cartridge
- Mini-Cassette
- Microcassette
- Picocassette
- NT (cassette)
- ProDigi
- Digital Audio Stationary Head
- Digital Audio Tape
- Digital Compact Cassette
Video
Videotape is magnetic tape used for storing video and usually sound in addition. Information stored can be in the form of either an analog or digital signal. Videotape is used in both video tape recorders (VTRs) and, more commonly, videocassette recorders (VCRs) and camcorders. Videotapes have also been used for storing scientific or medical data, such as the data produced by an electrocardiogram.
Some magnetic tape-based formats include:
- Quadruplex videotape
- Ampex 2-inch helical VTR
- Type A videotape
- IVC videotape format
- Type B videotape
- Type C videotape
- EIAJ-1
- U-matic
- Video Cassette Recording
- Cartrivision
- VHS
- VHS-C
- S-VHS
- Digital S
- W-VHS
- D-VHS
- Video 2000
- V-Cord
- VX (videocassette format)
- Betamax
- Compact Video Cassette
- Betacam
- Betacam SP
- Digital Betacam
- Betacam SX
- MPEG IMX
- HDCAM
- HDCAM SR
- M (videocassette format)
- MII (videocassette format)
- UniHi
- D-1 (Sony)
- DCT (videocassette format)
- D-2 (video)
- D-3 (video)
- D5 HD
- D6 HDTV VTR
- Video8
- Hi8
- Digital8
- DV
- MiniDV
- DVCAM
- DVCPRO
- DVCPRO50
- DVCPRO Progressive
- DVCPRO HD
- HDV
- MicroMV
Mirror
A mirror, also known as a looking glass, is an object that reflects an image. Light that bounces off a mirror will show an image of whatever is in front of it, when focused through the lens of the eye or a camera. Mirrors reverse the direction of light at an angle equal to, but opposite from, the angle at which the light strikes them. This allows the viewer to see themselves or objects behind them, or even objects that are at an angle from them but out of their field of view, such as around a corner. Natural mirrors have existed since prehistoric times, such as the surface of water, but people have been manufacturing mirrors out of a variety of materials for thousands of years, such as stone, metal, and glass. In modern mirrors, metals like silver or aluminium are often used due to their high reflectivity, applied as a thin coating on glass because of its naturally smooth and very hard surface. A mirror is a wave reflector. Light consists of waves, and when light waves reflect from the flat surface of a mirror, those waves retain the same degree of curvature and vergence, in an equal yet opposite direction, as the original waves. This allows the waves to form an image when they are focused through a lens, just as if the waves had originated from the direction of the mirror. The light can also be pictured as rays (imaginary lines radiating from the light source, which are always perpendicular to the waves). These rays are reflected at an angle equal to, but opposite from, the angle at which they strike the mirror (incident light). This property, called specular reflection, distinguishes a mirror from objects that diffuse light, breaking up the wave and scattering it in many directions (such as flat-white paint). Thus, a mirror can be any surface in which the texture or roughness of the surface is smaller (smoother) than the wavelength of the waves. When looking at a mirror, one will see a mirror image or reflected image of objects in the environment, formed by light emitted or scattered by them and reflected by the mirror towards one's eyes. This effect gives the illusion that those objects are behind the mirror, or (sometimes) in front of it. When the surface is not flat, a mirror may behave like a reflecting lens. A plane mirror yields a real-looking undistorted image, while a curved mirror may distort, magnify, or reduce the image in various ways, while keeping the lines, contrast, sharpness, colors, and other image properties intact. A mirror is commonly used for inspecting oneself, such as during personal grooming; hence the old-fashioned name "looking glass". This use, which dates from prehistory, overlaps with uses in decoration and architecture. Mirrors are also used to view other items that are not directly visible because of obstructions; examples include rear-view mirrors in vehicles, security mirrors in or around buildings, and dentist's mirrors. Mirrors are also used in optical and scientific apparatus such as telescopes, lasers, cameras, periscopes, and industrial machinery. According to superstition, breaking a mirror is said to bring seven years of bad luck. The terms "mirror" and "reflector" can be used for objects that reflect any other types of waves. An acoustic mirror reflects sound waves. Objects such as walls, ceilings, or natural rock formations may produce echoes, and this tendency often becomes a problem in acoustical engineering when designing houses, auditoriums, or recording studios.
Acoustic mirrors may be used for applications such as parabolic microphones, atmospheric studies, sonar, and seafloor mapping. An atomic mirror reflects matter waves and can be used for atomic interferometry and atomic holography. History Prehistory The first mirrors used by humans were most likely pools of still water, or shiny stones. The requirements for making a good mirror are a surface with a very high degree of flatness (preferably but not necessarily with high reflectivity), and a surface roughness smaller than the wavelength of the light. The earliest manufactured mirrors were pieces of polished stone such as obsidian, a naturally occurring volcanic glass. Examples of obsidian mirrors found at Çatalhöyük in Anatolia (modern-day Turkey) have been dated to around 6000 BCE. Mirrors of polished copper were crafted in Mesopotamia from 4000 BCE, and in ancient Egypt from around 3000 BCE. Polished stone mirrors from Central and South America date from around 2000 BCE onwards. Bronze Age to Early Middle Ages By the Bronze Age most cultures were using mirrors made from polished discs of bronze, copper, silver, or other metals. The people of Kerma in Nubia were skilled in the manufacturing of mirrors. Remains of their bronze kilns have been found within the temple of Kerma. In China, bronze mirrors were manufactured from around 2000 BC, some of the earliest bronze and copper examples being produced by the Qijia culture. Such metal mirrors remained the norm through to Greco-Roman Antiquity and throughout the Middle Ages in Europe. During the Roman Empire silver mirrors were in wide use by servants. Speculum metal is a highly reflective alloy of copper and tin that was used for mirrors until a couple of centuries ago. Such mirrors may have originated in China and India. Mirrors of speculum metal or any precious metal were hard to produce and were only owned by the wealthy. Common metal mirrors tarnished and required frequent polishing. Bronze mirrors had low reflectivity and poor color rendering, and stone mirrors were much worse in this regard. These defects explain the New Testament reference in 1 Corinthians 13 to seeing "as in a mirror, darkly." The Greek philosopher Socrates urged young people to look at themselves in mirrors so that, if they were beautiful, they would become worthy of their beauty, and if they were ugly, they would know how to hide their disgrace through learning. Glass began to be used for mirrors in the 1st century CE, with the development of soda-lime glass and glass blowing. The Roman scholar Pliny the Elder claims that artisans in Sidon (modern-day Lebanon) were producing glass mirrors coated with lead or gold leaf in the back. The metal provided good reflectivity, and the glass provided a smooth surface and protected the metal from scratches and tarnishing. However, there is no archeological evidence of glass mirrors before the third century. These early glass mirrors were made by blowing a glass bubble, and then cutting off a small circular section from 10 to 20 cm in diameter. Their surface was either concave or convex, and imperfections tended to distort the image. Lead-coated mirrors were very thin to prevent cracking by the heat of the molten metal. Due to the poor quality, high cost, and small size of glass mirrors, solid-metal mirrors (primarily of steel) remained in common use until the late nineteenth century. Silver-coated metal mirrors were developed in China as early as 500 CE. 
The bare metal was coated with an amalgam, then heated until the mercury boiled away.
Middle Ages and Renaissance
The evolution of glass mirrors in the Middle Ages followed improvements in glassmaking technology. Glassmakers in France made flat glass plates by blowing glass bubbles, spinning them rapidly to flatten them, and cutting rectangles out of them. A better method, developed in Germany and perfected in Venice by the 16th century, was to blow a cylinder of glass, cut off the ends, slice it along its length, and unroll it onto a flat hot plate. Venetian glassmakers also adopted lead glass for mirrors, because of its crystal-clarity and its easier workability. During the early European Renaissance, a fire-gilding technique was developed to produce an even and highly reflective tin coating for glass mirrors. The back of the glass was coated with a tin-mercury amalgam, and the mercury was then evaporated by heating the piece. This process caused less thermal shock to the glass than the older molten-lead method. The date and location of the discovery are unknown, but by the 16th century Venice was a center of mirror production using this technique. These Venetian mirrors were up to square. For a century, Venice retained the monopoly of the tin amalgam technique. Venetian mirrors in richly decorated frames served as luxury decorations for palaces throughout Europe, and were very expensive. For example, in the late seventeenth century, the Countess de Fiesque was reported to have traded an entire wheat farm for a mirror, considering it a bargain. However, by the end of that century the secret was leaked through industrial espionage. French workshops succeeded in large-scale industrialization of the process, eventually making mirrors affordable to the masses, in spite of the toxicity of mercury vapor.
Industrial Revolution
The invention of the ribbon machine in the late Industrial Revolution allowed modern glass panes to be produced in bulk. The Saint-Gobain factory, founded by royal initiative in France, was an important manufacturer, and Bohemian and German glass, often rather cheaper, was also important. The invention of the silvered-glass mirror is credited to German chemist Justus von Liebig in 1835. His wet deposition process involved the deposition of a thin layer of metallic silver onto glass through the chemical reduction of silver nitrate. This silvering process was adapted for mass manufacturing and led to the greater availability of affordable mirrors.
Contemporary technologies
Mirrors are often produced by the wet deposition of silver, or sometimes nickel or chromium (the latter used most often in automotive mirrors) via electroplating directly onto the glass substrate. Glass mirrors for optical instruments are usually produced by vacuum deposition methods. These techniques can be traced to observations in the 1920s and 1930s that metal was being ejected from electrodes in gas discharge lamps and condensed on the glass walls, forming a mirror-like coating. The phenomenon, called sputtering, was developed into an industrial metal-coating method with the development of semiconductor technology in the 1970s. A similar phenomenon had been observed with incandescent light bulbs: the metal in the hot filament would slowly sublimate and condense on the bulb's walls. This phenomenon was developed into the method of evaporation coating by Pohl and Pringsheim in 1912. John D. Strong used evaporation coating to make the first aluminium-coated telescope mirrors in the 1930s.
The first dielectric mirror was created in 1937 by Auwärter using evaporated rhodium. The metal coating of glass mirrors is usually protected from abrasion and corrosion by a layer of paint applied over it. Mirrors for optical instruments often have the metal layer on the front face, so that the light does not have to cross the glass twice. In these mirrors, the metal may be protected by a thin transparent coating of a non-metallic (dielectric) material. The first metallic mirror to be enhanced with a dielectric coating of silicon dioxide was created by Hass in 1937. In 1939 at the Schott Glass company, Walter Geffcken invented the first dielectric mirrors to use multilayer coatings.
Burning mirrors
The Greeks in Classical Antiquity were familiar with the use of mirrors to concentrate light. Parabolic mirrors were described and studied by the mathematician Diocles in his work On Burning Mirrors. Ptolemy conducted a number of experiments with curved polished iron mirrors, and discussed plane, convex spherical, and concave spherical mirrors in his Optics. Parabolic mirrors were also described by the Caliphate mathematician Ibn Sahl in the tenth century.
Types of mirrors
Mirrors can be classified in many ways, including by shape, support, reflective material, manufacturing method, and intended application.
By shape
Typical mirror shapes are planar and curved mirrors. The surface of curved mirrors is often a part of a sphere. Mirrors that are meant to precisely concentrate parallel rays of light into a point are usually made in the shape of a paraboloid of revolution instead; they are used in telescopes (from radio waves to X-rays), in antennas to communicate with broadcast satellites, and in solar furnaces. A segmented mirror, consisting of multiple flat or curved mirrors, properly placed and oriented, may be used instead. Mirrors that are intended to concentrate sunlight onto a long pipe may be a circular cylinder or a parabolic cylinder.
By structural material
The most common structural material for mirrors is glass, due to its transparency, ease of fabrication, rigidity, hardness, and ability to take a smooth finish.
Back-silvered mirrors
The most common mirrors consist of a plate of transparent glass, with a thin reflective layer on the back (the side opposite to the incident and reflected light), backed by a coating that protects that layer against abrasion, tarnishing, and corrosion. The glass is usually soda-lime glass, but lead glass may be used for decorative effects, and other transparent materials may be used for specific applications. A plate of transparent plastic may be used instead of glass, for lighter weight or impact resistance. Alternatively, a flexible transparent plastic film may be bonded to the front and/or back surface of the mirror, to prevent injuries in case the mirror is broken. Lettering or decorative designs may be printed on the front face of the glass, or formed on the reflective layer. The front surface may have an anti-reflection coating.
Front-silvered mirrors
Mirrors which are reflective on the front surface (the same side as the incident and reflected light) may be made of any rigid material. The supporting material does not necessarily need to be transparent, but telescope mirrors often use glass anyway. Often a protective transparent coating is added on top of the reflecting layer, to protect it against abrasion, tarnishing, and corrosion, or to absorb certain wavelengths.
Flexible mirrors
Thin flexible plastic mirrors are sometimes used for safety, since they cannot shatter or produce sharp flakes. Their flatness is achieved by stretching them on a rigid frame. These usually consist of a layer of evaporated aluminium between two thin layers of transparent plastic.
By reflective material
In common mirrors, the reflective layer is usually some metal like silver, tin, nickel, or chromium, deposited by a wet process; or aluminium, deposited by sputtering or evaporation in vacuum. The reflective layer may also be made of one or more layers of transparent materials with suitable indices of refraction. The structural material may be a metal, in which case the reflecting layer may simply be the surface of the material itself. Metal concave dishes are often used to reflect infrared light (such as in space heaters) or microwaves (as in satellite TV antennas). Liquid metal telescopes use a surface of liquid metal such as mercury. Mirrors that reflect only part of the light, while transmitting some of the rest, can be made with very thin metal layers or suitable combinations of dielectric layers. They are typically used as beamsplitters. A dichroic mirror, in particular, has a surface that reflects certain wavelengths of light, while letting other wavelengths pass through. A cold mirror is a dichroic mirror that efficiently reflects the entire visible light spectrum while transmitting infrared wavelengths. A hot mirror is the opposite: it reflects infrared light while transmitting visible light. Dichroic mirrors are often used as filters to remove undesired components of the light in cameras and measuring instruments. In X-ray telescopes, the X-rays reflect off a highly precise metal surface at almost grazing angles, and only a small fraction of the rays are reflected. In flying relativistic mirrors conceived for X-ray lasers, the reflecting surface is a spherical shockwave (wake wave) created in a low-density plasma by a very intense laser pulse, and moving at an extremely high velocity.
Nonlinear optical mirrors
A phase-conjugating mirror uses nonlinear optics to reverse the phase difference between incident beams. Such mirrors may be used, for example, for coherent beam combination. The useful applications are self-guiding of laser beams and correction of atmospheric distortions in imaging systems.
Physical principles
When a sufficiently narrow beam of light is reflected at a point of a surface, the surface's normal direction will be the bisector of the angle formed by the two beams at that point. That is, the direction vector di pointing towards the incident beam's source, the normal vector n, and the direction vector dr of the reflected beam will be coplanar, and the angle between dr and n will be equal to the angle of incidence between di and n, but of opposite sign. This property can be explained by the physics of an electromagnetic plane wave that is incident to a flat surface that is electrically conductive or where the speed of light changes abruptly, as between two materials with different indices of refraction. When parallel beams of light are reflected on a plane surface, the reflected rays will be parallel too. If the reflecting surface is concave, the reflected beams will be convergent, at least to some extent and for some distance from the surface. A convex mirror, on the other hand, will reflect parallel rays towards divergent directions.
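To make the relation between di, n, and dr above concrete, here is a minimal numerical sketch of the reflection law in vector form. Writing d for the propagation direction of the incoming ray (the opposite of the vector pointing back toward the source) and n for the unit surface normal, the reflected direction is d − 2(d·n)n; the function below is an illustration of that identity, not code from any particular optics library.

```python
import numpy as np

def reflect(d: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Reflect propagation direction d off a surface with normal n.

    Implements d_r = d - 2 (d . n) n, which keeps d_r coplanar with
    d and n and makes the angles to the normal equal and opposite.
    """
    n = n / np.linalg.norm(n)  # make sure the normal is a unit vector
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling down-and-right hits a horizontal mirror (normal pointing up):
d_in = np.array([1.0, -1.0]) / np.sqrt(2.0)
print(reflect(d_in, np.array([0.0, 1.0])))  # -> [0.7071 0.7071], up-and-right
```

Applying the same law point by point to a curved surface reproduces the focusing behaviour discussed next; for a spherical mirror used close to its axis, it yields the familiar paraxial focal length f ≈ R/2, where R is the sphere's radius.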
More specifically, a concave parabolic mirror (whose surface is a part of a paraboloid of revolution) will reflect rays that are parallel to its axis into rays that pass through its focus. Conversely, a parabolic concave mirror will reflect any ray that comes from its focus towards a direction parallel to its axis. If a concave mirror surface is a part of a prolate ellipsoid, it will reflect any ray coming from one focus toward the other focus. A convex parabolic mirror, on the other hand, will reflect rays that are parallel to its axis into rays that seem to emanate from the focus of the surface, behind the mirror. Conversely, it will reflect incoming rays that converge toward that point into rays that are parallel to the axis. A convex mirror that is part of a prolate ellipsoid will reflect rays that converge towards one focus into divergent rays that seem to emanate from the other focus. Spherical mirrors do not reflect parallel rays to rays that converge to or diverge from a single point, or vice versa, due to spherical aberration. However, a spherical mirror whose diameter is sufficiently small compared to the sphere's radius will behave very similarly to a parabolic mirror whose axis goes through the mirror's center and the center of that sphere, so that spherical mirrors can substitute for parabolic ones in many applications. A similar aberration occurs with parabolic mirrors when the incident rays are parallel among themselves but not parallel to the mirror's axis, or are divergent from a point that is not the focus, as when trying to form an image of an object that is near the mirror or spans a wide angle as seen from it. However, this aberration can be sufficiently small if the object is sufficiently far from the mirror and spans a sufficiently small angle around its axis.
Mirror images
Mirrors reflect an image to the observer. However, unlike a projected image on a screen, an image does not actually exist on the surface of the mirror. For example, when two people look at each other in a mirror, both see different images on the same surface. When the light waves converge through the lens of the eye they interfere with each other to form the image on the surface of the retina, and since both viewers see waves coming from different directions, each sees a different image in the same mirror. Thus, the images observed in a mirror depend upon the angle of the mirror with respect to the eye. The angle between the object and the observer is always twice the angle between the eye and the normal, or the direction perpendicular to the surface. This allows animals with binocular vision to see the reflected image with depth perception and in three dimensions. The mirror forms a virtual image of whatever is in the opposite angle from the viewer, meaning that objects in the image appear to exist in a direct line of sight, behind the surface of the mirror, at an equal distance from their position in front of the mirror. Objects behind the observer, or between the observer and the mirror, are reflected back to the observer without any actual change in orientation; the light waves are simply reversed in a direction perpendicular to the mirror. However, when the viewer is facing the object and the mirror is at an angle between them, the image appears inverted 180° along the direction of the angle.
Objects viewed in a (plane) mirror will appear laterally inverted (e.g., if one raises one's right hand, the image's left hand will appear to go up in the mirror), but not vertically inverted (in the image a person's head still appears above their body). However, a mirror does not actually "swap" left and right any more than it swaps top and bottom. A mirror swaps front and back. To be precise, it reverses the object in the direction perpendicular to the mirror surface (the normal), turning the three-dimensional image inside out (the way a glove stripped off the hand can be turned inside out, turning a left-hand glove into a right-hand glove or vice versa). When a person raises their left hand, the actual left hand raises in the mirror, but gives the illusion of a right hand raising because the imaginary person in the mirror is literally inside-out, hand and all. If the person stands side-on to a mirror, the mirror really does reverse left and right hands; that is, objects that are physically closer to the mirror always appear closer in the virtual image, and objects farther from the surface always appear symmetrically farther away regardless of angle. Looking at an image of oneself with the front-back axis flipped results in the perception of an image with its left-right axis flipped. When reflected in the mirror, a person's right hand remains directly opposite their real right hand, but it is perceived by the mind as the left hand in the image. When a person looks into a mirror, the image is actually front-back reversed (inside-out), which is an effect similar to the hollow-mask illusion. Notice that a mirror image is fundamentally different from the object (inside-out) and cannot be reproduced by simply rotating the object. An object and its mirror image are said to be chiral. For things that may be considered as two-dimensional objects (like text), front-back reversal cannot usually explain the observed reversal. An image is a two-dimensional representation of a three-dimensional space, and because it exists in a two-dimensional plane, an image can be viewed from front or back. In the same way that text on a piece of paper appears reversed if held up to a light and viewed from behind, text held facing a mirror will appear reversed, because the image of the text is still facing away from the observer. Another way to understand the reversals observed in images of objects that are effectively two-dimensional is that the inversion of left and right in a mirror is due to the way human beings perceive their surroundings. A person's reflection in a mirror appears to be a real person facing them, but for that person to really face themselves (i.e., as twins do) one would have to physically turn and face the other, causing an actual swapping of right and left. A mirror causes an illusion of left-right reversal because left and right were not swapped when the image appears to have turned around to face the viewer. The viewer's egocentric navigation (left and right with respect to the observer's point of view; i.e., "my left...") is unconsciously replaced with their allocentric navigation (left and right as it relates to another's point of view; "...your right") when processing the virtual image of the apparent person behind the mirror. Likewise, text viewed in a mirror would have to be physically turned around, facing the observer and away from the surface, actually swapping left and right, to be read in the mirror.
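The "inside-out" reversal described above can be stated compactly in linear algebra; the following is a standard textbook observation rather than anything specific to this article's sources. A mirror lying in the xy-plane maps a point (x, y, z) to (x, y, −z), i.e. it acts as the matrix

$$
M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix},
\qquad \det M = -1 .
$$

Every rotation matrix has determinant +1, so no rotation of the original object can reproduce the effect of M; this is precisely why an object and its mirror image are chiral.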
Optical properties
Reflectivity
The reflectivity of a mirror is determined by the percentage of reflected light relative to the total incident light. The reflectivity may vary with wavelength. All or a portion of the light not reflected is absorbed by the mirror, while in some cases a portion may also transmit through. Although some small portion of the light will be absorbed by the coating, the reflectivity is usually higher for first-surface mirrors, eliminating both reflection and absorption losses from the substrate. The reflectivity is often determined by the type and thickness of the coating. When the thickness of the coating is sufficient to prevent transmission, all of the losses occur due to absorption. Aluminium is harder and more resistant to tarnishing than silver, and will reflect 85 to 90% of the light in the visible to near-ultraviolet range, but experiences a drop in its reflectance between 800 and 900 nm. Gold is very soft and easily scratched, but does not tarnish. Gold is greater than 96% reflective to near- and far-infrared light between 800 and 12000 nm, but poorly reflects visible light with wavelengths shorter than 600 nm (yellow). Silver is expensive, soft, and quickly tarnishes, but has the highest reflectivity in the visual to near-infrared of any metal. Silver can reflect up to 98 or 99% of light at wavelengths as long as 2000 nm, but loses nearly all reflectivity at wavelengths shorter than 350 nm. Dielectric mirrors can reflect greater than 99.99% of light, but only for a narrow range of wavelengths, ranging from a bandwidth of only 10 nm to as wide as 100 nm for tunable lasers. However, dielectric coatings can also enhance the reflectivity of metallic coatings and protect them from scratching or tarnishing. Dielectric materials are typically very hard and relatively cheap; however, the number of coats needed generally makes it an expensive process. In mirrors with relaxed tolerances, the coating thickness may be reduced to save cost, and simply covered with paint to absorb transmission.
Surface quality
Surface quality, or surface accuracy, measures the deviations from a perfect, ideal surface shape. Increasing the surface quality reduces distortion, artifacts, and aberration in images, helps preserve coherence and collimation, and reduces unwanted divergence in beams. For plane mirrors, this is often described in terms of flatness, while other surface shapes are compared to an ideal shape. The surface quality is typically measured with instruments such as interferometers or optical flats, and is usually expressed in wavelengths of light (λ). These deviations can be much larger or much smaller than the surface roughness. A normal household mirror made with float glass may have flatness tolerances as loose as 9–14λ per inch (25.4 mm), equating to a deviation of 5600 through 8800 nanometers from perfect flatness (9 to 14 times a reference wavelength of roughly 630 nm). Precision ground and polished mirrors intended for lasers or telescopes may have tolerances as tight as λ/50 (1/50 of the wavelength of the light, or around 12 nm) across the entire surface. The surface quality can be affected by factors such as temperature changes, internal stress in the substrate, or even bending effects that occur when combining materials with different coefficients of thermal expansion, similar to a bimetallic strip.
Surface roughness
Surface roughness describes the texture of the surface, often in terms of the depth of the microscopic scratches left by the polishing operations.
Surface roughness determines how much of the reflection is specular and how much diffuses, controlling how clear or cloudy the image will be. For perfectly specular reflection, the surface roughness must be kept smaller than the wavelength of the light. Microwaves, which sometimes have a wavelength greater than an inch (~25 mm), can reflect specularly off a metal screen door, continental ice sheets, or desert sand, while visible light, having wavelengths of only a few hundred nanometers (a few hundred-thousandths of an inch), must strike a very smooth surface to produce specular reflection. For wavelengths that are approaching or even shorter than the diameter of atoms, such as X-rays, specular reflection can only be produced by surfaces at grazing incidence to the rays. Surface roughness is typically measured in microns, wavelengths, or grit size, with ~80,000–100,000 grit or ~½λ–¼λ being "optical quality".
Transmissivity
Transmissivity is determined by the percentage of light transmitted relative to the incident light. Transmissivity is usually the same from both first and second surfaces. The combined transmitted and reflected light, subtracted from the incident light, measures the amount absorbed by both the coating and substrate. For transmissive mirrors, such as one-way mirrors, beam splitters, or laser output couplers, the transmissivity of the mirror is an important consideration. The transmissivity of metallic coatings is often determined by their thickness. For precision beam-splitters or output couplers, the thickness of the coating must be kept at very high tolerances to transmit the proper amount of light. For dielectric mirrors, the thickness of the coating must always be kept to high tolerances, but it is often more the number of individual coats that determines the transmissivity. For the substrate, the material used must also have good transmissivity to the chosen wavelengths. Glass is a suitable substrate for most visible-light applications, but other substrates such as zinc selenide or synthetic sapphire may be used for infrared or ultraviolet wavelengths.
Wedge
Wedge errors are caused by the deviation of the surfaces from perfect parallelism. An optical wedge is the angle formed between two plane surfaces (or between the principal planes of curved surfaces) due to manufacturing errors or limitations, causing one edge of the mirror to be slightly thicker than the other. Nearly all mirrors and optics with parallel faces have some slight degree of wedge, which is usually measured in seconds or minutes of arc. For first-surface mirrors, wedges can introduce alignment deviations in mounting hardware. For second-surface or transmissive mirrors, wedges can have a prismatic effect on the light, deviating its trajectory or, to a very slight degree, its color, causing chromatic and other forms of aberration. In some instances, a slight wedge is desirable, such as in certain laser systems where stray reflections from the uncoated surface are better dispersed than reflected back through the medium.
Surface defects
Surface defects are small-scale, discontinuous imperfections in the surface smoothness. Surface defects are larger (in some cases much larger) than the surface roughness, but only affect small, localized portions of the entire surface.
These are typically found as scratches, digs, pits (often from bubbles in the glass), sleeks (scratches from prior, larger-grit polishing operations that were not fully removed by subsequent polishing grits), edge chips, or blemishes in the coating. These defects are often an unavoidable side effect of manufacturing limitations, both in cost and machine precision. If kept low enough, in most applications these defects will rarely have any adverse effect, unless the surface is located at an image plane where they will show up directly. For applications that require extremely low scattering of light, extremely high reflectance, or low absorption due to high energy levels that could destroy the mirror, such as lasers or Fabry–Pérot interferometers, the surface defects must be kept to a minimum.
Manufacturing
Mirrors are usually manufactured either by polishing a naturally reflective material, such as speculum metal, or by applying a reflective coating to a suitable polished substrate. In some applications, generally those that are cost-sensitive or that require great durability, such as for mounting in a prison cell, mirrors may be made from a single, bulk material such as polished metal. However, metals consist of small crystals (grains) separated by grain boundaries that may prevent the surface from attaining optical smoothness and uniform reflectivity.
Coating
Silvering
The coating of glass with a reflective layer of a metal is generally called "silvering", even though the metal may not be silver. Currently the main processes are electroplating, "wet" chemical deposition, and vacuum deposition. Front-coated metal mirrors achieve reflectivities of 90–95% when new.
Dielectric coating
Applications requiring higher reflectivity or greater durability, where wide bandwidth is not essential, use dielectric coatings, which can achieve reflectivities as high as 99.997% over a limited range of wavelengths. Because they are often chemically stable and do not conduct electricity, dielectric coatings are almost always applied by methods of vacuum deposition, and most commonly by evaporation deposition. Because the coatings are usually transparent, absorption losses are negligible. Unlike with metals, the reflectivity of individual dielectric coatings is governed by the Fresnel equations and is determined by the difference in refractive index between layers. Therefore, the thickness and index of the coatings can be adjusted to be centered on any wavelength. Vacuum deposition can be achieved in a number of ways, including sputtering, evaporation deposition, arc deposition, reactive-gas deposition, and ion plating, among many others.
Shaping and polishing
Tolerances
Mirrors can be manufactured to a wide range of engineering tolerances, including reflectivity, surface quality, surface roughness, or transmissivity, depending on the desired application. These tolerances can range from wide, such as found in a normal household mirror, to extremely narrow, like those used in lasers or telescopes. Tightening the tolerances allows better and more precise imaging or beam transmission over longer distances. In imaging systems this can help reduce anomalies (artifacts), distortion, or blur, but at a much higher cost. Where viewing distances are relatively close or high precision is not a concern, wider tolerances can be used to make effective mirrors at affordable costs.
Applications
Personal grooming
Mirrors are commonly used as aids to personal grooming.
They may range from small, portable sizes to full-body size; they may be handheld, mobile, fixed, or adjustable. A classic example of an adjustable mirror is the cheval glass, which the user can tilt.
Safety and easier viewing
Convex mirrors
Convex mirrors provide a wider field of view than flat mirrors, and are often used on vehicles, especially large trucks, to minimize blind spots. They are sometimes placed at road junctions, and at corners of sites such as parking lots, to allow people to see around corners to avoid crashing into other vehicles or shopping carts. They are also sometimes used as part of security systems, so that a single video camera can show more than one angle at a time. Convex mirrors as decoration are used in interior design to provide a predominantly experiential effect.
Mouth mirrors or "dental mirrors"
Dentists use mouth mirrors or "dental mirrors" to allow indirect vision and lighting within the mouth. Their reflective surfaces may be either flat or curved. Mouth mirrors are also commonly used by mechanics to allow vision in tight spaces and around corners in equipment.
Rear-view mirrors
Rear-view mirrors are widely used in and on vehicles (such as automobiles or bicycles) to allow drivers to see other vehicles coming up behind them. On rear-view sunglasses, the left end of the left glass and the right end of the right glass work as mirrors.
One-way mirrors and windows
One-way mirrors
One-way mirrors (also called two-way mirrors) work by overwhelming dim transmitted light with bright reflected light. A true one-way mirror that actually allows light to be transmitted in one direction only, without requiring external energy, is not possible as it violates the second law of thermodynamics.
One-way windows
One-way windows can be made to work with polarized light in the laboratory without violating the second law. This is an apparent paradox that stumped some great physicists, although it does not allow a practical one-way mirror for use in the real world. Optical isolators are one-way devices that are commonly used with lasers.
Signalling
With the sun as the light source, a mirror can be used to signal by variations in the orientation of the mirror. The signal can be used over long distances, possibly up to on a clear day. Native American tribes and numerous militaries used this technique to transmit information between distant outposts. Mirrors can also be used to attract the attention of search-and-rescue parties. Specialized types of mirrors are available and are often included in military survival kits.
Technology
Televisions and projectors
Microscopic mirrors are a core element of many of the largest high-definition televisions and video projectors. A common technology of this type is Texas Instruments' DLP. A DLP chip is a postage-stamp-sized microchip whose surface is an array of millions of microscopic mirrors. The picture is created as the individual mirrors move to either reflect light toward the projection surface (pixel on), or toward a light-absorbing surface (pixel off). Other projection technologies involving mirrors include LCoS. Like a DLP chip, LCoS is a microchip of similar size, but rather than millions of individual mirrors, there is a single mirror that is actively shielded by a liquid crystal matrix with up to millions of pixels. The picture, formed as light, is either reflected toward the projection surface (pixel on), or absorbed by the activated LCD pixels (pixel off).
LCoS-based televisions and projectors often use three chips, one for each primary color. Large mirrors are used in rear-projection televisions. Light (for example from a DLP as discussed above) is "folded" by one or more mirrors so that the television set is compact.
Optical discs
Optical discs are modified mirrors which encode binary data as a series of physical pits and lands on an inner layer between the metal backing and outer plastic surface. The data is read and decoded by observing distortions in a reflected laser beam caused by the physical variations in the inner layer. Optical discs typically use an aluminum backing like conventional mirrors, though ones with silver and gold backings also exist.
Solar power
Mirrors are integral parts of a solar power plant. One common design uses concentrated solar power from an array of parabolic troughs.
Instruments
Telescopes and other precision instruments use front-silvered or first-surface mirrors, where the reflecting surface is placed on the front (or first) surface of the glass (this eliminates the reflection from the glass surface that ordinary back-silvered mirrors have). Some of them use silver, but most are aluminium, which is more reflective at short wavelengths than silver. All of these coatings are easily damaged and require special handling. They reflect 90% to 95% of the incident light when new. The coatings are typically applied by vacuum deposition. A protective overcoat is usually applied before the mirror is removed from the vacuum, because the coating otherwise begins to corrode as soon as it is exposed to oxygen and humidity in air. Front-silvered mirrors have to be resurfaced occasionally to maintain their quality. There are optical mirrors such as Mangin mirrors that are second-surface mirrors (reflective coating on the rear surface) as part of their optical designs, usually to correct optical aberrations. The reflectivity of the mirror coating can be measured using a reflectometer, and for a particular metal it will be different for different wavelengths of light. This is exploited in some optical work to make cold mirrors and hot mirrors. A cold mirror is made by using a transparent substrate and choosing a coating material that is more reflective to visible light and more transmissive to infrared light. A hot mirror is the opposite: the coating preferentially reflects infrared. Mirror surfaces are sometimes given thin-film overcoatings both to retard degradation of the surface and to increase their reflectivity in parts of the spectrum where they will be used. For instance, aluminium mirrors are commonly coated with silicon dioxide or magnesium fluoride. The reflectivity as a function of wavelength depends both on the thickness of the coating and on how it is applied. For scientific optical work, dielectric mirrors are often used. These are glass (or sometimes other material) substrates on which one or more layers of dielectric material are deposited, to form an optical coating. By careful choice of the type and thickness of the dielectric layers, the range of wavelengths and amount of light reflected from the mirror can be specified. The best mirrors of this type can reflect >99.999% of the light (in a narrow range of wavelengths) which is incident on the mirror. Such mirrors are often used in lasers. In astronomy, adaptive optics is a technique to measure variable image distortions and adapt a deformable mirror accordingly, on a timescale of milliseconds, to compensate for the distortions.
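As a concrete, hedged illustration of how layer thicknesses are chosen, a common dielectric-mirror recipe is the quarter-wave stack, in which each layer's optical thickness is one quarter of the design wavelength, so its physical thickness is d = λ/(4n). The refractive indices below are rough textbook values for two commonly paired materials, not data for any specific product.

```python
# Quarter-wave layer thickness for a dielectric mirror: d = wavelength / (4 n).
# Indices are approximate illustrative values for TiO2 and SiO2.

def quarter_wave_thickness_nm(wavelength_nm: float, refractive_index: float) -> float:
    """Physical thickness whose optical thickness is a quarter wavelength."""
    return wavelength_nm / (4.0 * refractive_index)

for material, n in (("high-index TiO2", 2.4), ("low-index SiO2", 1.46)):
    d = quarter_wave_thickness_nm(550.0, n)  # design centered on green light
    print(f"{material}: about {d:.0f} nm per layer")
# Stacking many alternating high/low pairs makes the partial reflections from
# every interface add in phase at 550 nm, which is how very high reflectivity
# over a limited band is achieved.
```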
Although most mirrors are designed to reflect visible light, surfaces reflecting other forms of electromagnetic radiation are also called "mirrors". Mirrors for other ranges of electromagnetic waves are used in optics and astronomy. Mirrors for radio waves (sometimes known as reflectors) are important elements of radio telescopes. Simple periscopes use mirrors.
Face-to-face mirrors
Two or more mirrors aligned exactly parallel and facing each other can give an infinite regress of reflections, called an infinity mirror effect. Some devices use this to generate multiple reflections:
- Fabry–Pérot interferometer
- Laser (which contains an optical cavity)
- 3D kaleidoscope to concentrate light
- Momentum-enhanced solar sail
Military applications
Tradition states that Archimedes used a large array of mirrors to burn Roman ships during an attack on Syracuse. This has never been proven or disproved. On the TV show MythBusters, a team from MIT tried to recreate the famous "Archimedes Death Ray". They were unsuccessful at starting a fire on a ship. Previous attempts to set a boat on fire using only the bronze mirrors available in Archimedes' time were unsuccessful, and the time taken to ignite the craft would have made its use impractical, resulting in the MythBusters team deeming the myth "busted". It was, however, found that the mirrors made it very difficult for the passengers of the targeted boat to see; such a scenario could have impeded attackers and provided the origin of the legend. (See solar power tower for a practical use of this technique.) Periscopes were used to great effect in war, especially during the World Wars, when they allowed soldiers to peer over the parapets of trenches without exposing themselves to direct fire from small arms.
Seasonal lighting
Due to its location in a steep-sided valley, the Italian town of Viganella gets no direct sunlight for seven weeks each winter. In 2006 a €100,000 computer-controlled mirror, 8×5 m, was installed to reflect sunlight into the town's piazza. In early 2007 the similarly situated village of Bondo, Switzerland, was considering applying this solution as well. In 2013, mirrors were installed to reflect sunlight into the town square in the Norwegian town of Rjukan. Mirrors can be used to produce enhanced lighting effects in greenhouses or conservatories.
Architecture
Mirrors are a popular design theme in architecture, particularly with late modern and post-modernist high-rise buildings in major cities. Early examples include the Campbell Center in Dallas, which opened in 1972, and the John Hancock Tower (completed in 1976) in Boston. More recently, two skyscrapers designed by architect Rafael Viñoly, the Vdara in Las Vegas and 20 Fenchurch Street in London, have experienced unusual problems due to their concave curved-glass exteriors acting as, respectively, cylindrical and spherical reflectors for sunlight. In 2010, the Las Vegas Review-Journal reported that sunlight reflected off the Vdara's south-facing tower could singe swimmers in the hotel pool and melt plastic cups and shopping bags; employees of the hotel referred to the phenomenon as the "Vdara death ray", aka the "fryscraper". In 2013, sunlight reflecting off 20 Fenchurch Street melted parts of a Jaguar car parked nearby and scorched or ignited the carpet of a nearby barber shop.
This building had been nicknamed the "walkie-talkie" because its shape was supposedly similar to a certain model of two-way radio; but after its tendency to overheat surrounding objects became known, the nickname changed to the "walkie-scorchie".
Fine art
Paintings
Painters depicting someone gazing into a mirror often also show the person's reflection. This is a kind of abstraction: in most cases the angle of view is such that the person's reflection should not be visible. Similarly, in movies and still photography an actor or actress is often shown ostensibly looking at him- or herself in a mirror, and yet the reflection faces the camera. In reality, the actor or actress sees only the camera and its operator in this case, not their own reflection. In the psychology of perception, this is known as the Venus effect.
The mirror is the central device in some of the greatest of European paintings:
- Édouard Manet's A Bar at the Folies-Bergère (1882)
- Titian's Venus with a Mirror
- Jan van Eyck's Arnolfini Portrait
- Pablo Picasso's Girl before a Mirror (1932)
- Diego Velázquez's Rokeby Venus
- Diego Velázquez's Las Meninas (wherein the viewer is both the watcher, of a self-portrait in progress, and the watched) and the many adaptations of that painting in various media
- Veronese's Venus with a Mirror
Artists have used mirrors to create works and to hone their craft:
- Filippo Brunelleschi discovered linear perspective with the help of the mirror.
- Leonardo da Vinci called the mirror the "master of painters". He recommended, "When you wish to see whether your whole picture accords with what you have portrayed from nature take a mirror and reflect the actual object in it. Compare what is reflected with your painting and carefully consider whether both likenesses of the subject correspond, particularly in regard to the mirror."
- Many self-portraits are made possible through the use of mirrors, such as great self-portraits by Dürer, Frida Kahlo, Rembrandt, and Van Gogh.
- M. C. Escher used special shapes of mirrors in order to achieve a much more complete view of his surroundings than by direct observation in Hand with Reflecting Sphere (1935; also known as Self-Portrait in Spherical Mirror).
Mirrors are sometimes necessary to fully appreciate artwork: István Orosz's anamorphic works are images distorted such that they only become clearly visible when reflected in a suitably shaped and positioned mirror.
Sculpture
Anamorphosis projecting sculpture into mirrors: contemporary anamorphic artist Jonty Hurwitz uses cylindrical mirrors to project distorted sculptures.
Sculptures composed entirely or in part of mirrors include:
- Infinity Also Hurts, a mirror, glass, and silicone sculpture by artist Seth Wulsin
- Sky Mirror, a public sculpture by artist Anish Kapoor
Other artistic mediums
Some other contemporary artists use mirrors as the material of art:
- A Chinese magic mirror is a device in which the face of the bronze mirror projects the same image that was cast on its back. This is due to minute curvatures on its front.
- Specular holography uses a large number of curved mirrors embedded in a surface to produce three-dimensional imagery.
- Paintings on mirror surfaces (such as silkscreen-printed glass mirrors)
- Special mirror installations: Follow Me, a mirror labyrinth by artist Jeppe Hein (see also Entertainment: Mirror mazes, below), and Mirror Neon Cube by artist Jeppe Hein
Religious function of the real and depicted mirror
In the Middle Ages, mirrors existed in various shapes for multiple uses.
Mostly they were used as an accessory for personal hygiene, but also as tokens of courtly love, made from ivory in the ivory-carving centers in Paris, Cologne and the Southern Netherlands. They also had their uses in religious contexts, as from the late 14th century they were integrated into a special form of pilgrim badges or pewter/lead mirror boxes. Burgundian ducal inventories show that the dukes owned a mass of mirrors or objects with mirrors, not only with religious iconography or inscriptions, but combined with reliquaries, religious paintings or other objects that were distinctly used for personal piety.

Considering mirrors in paintings and book illumination as depicted artifacts, and trying to draw conclusions about their functions from their setting, one of these functions is to be an aid in personal prayer to achieve self-knowledge and knowledge of God, in accord with contemporary theological sources. For example, the famous Arnolfini Wedding by Jan van Eyck shows a constellation of objects that can be recognized as one which would allow a praying man to use them for his personal piety: the mirror surrounded by scenes of the Passion to reflect on it and on oneself, a rosary as a device in this process, the veiled and cushioned bench to use as a prie-dieu, and the abandoned shoes that point in the direction in which the praying man knelt.

The metaphorical meaning of depicted mirrors is complex and many-layered, e.g. as an attribute of Mary, the "speculum sine macula" (mirror without blemish), or as an attribute of scholarly and theological wisdom and knowledge, as mirrors appear in book illuminations of different evangelists and authors of theological treatises. Depicted mirrors – oriented on the physical properties of a real mirror – can be seen as metaphors of knowledge and reflection, and are thus able to remind beholders to reflect and get to know themselves. The mirror may function simultaneously as a symbol and as a device of moral appeal. That is also the case if it is shown in combination with virtues and vices, a combination which occurs more frequently in the 15th century: the moralizing layers of mirror metaphors remind beholders to examine themselves thoroughly according to their own virtuous or vicious lives. This is all the more true if the mirror is combined with the iconography of death. Not only does Death, as a corpse or skeleton, hold the mirror up to the still-living figures of paintings, illuminations and prints, but the skull also appears on the convex surfaces of depicted mirrors, showing painted and real beholders their future face.

Decoration
Mirrors are frequently used in interior decoration and as ornaments:
Mirrors, typically large and unframed, are frequently used in interior decoration to create an illusion of space and to amplify the apparent size of a room. They also come framed in a variety of forms, such as the pier glass and the overmantel mirror.
Mirrors are used in some schools of feng shui, an ancient Chinese practice of placement and arrangement of space to achieve harmony with an environment.
The softness of old mirrors is sometimes replicated by contemporary artisans for use in interior design. These reproduction antiqued mirrors are works of art and can bring color and texture to an otherwise hard, cold reflective surface.
A decorative reflecting sphere of thin metal-coated glass, working as a reducing wide-angle mirror, is sold as a Christmas ornament called a bauble.
Some pubs and bars hang mirrors depicting the logo of a brand of liquor, beer or drinking establishment.

Entertainment
Illuminated rotating disco balls covered with small mirrors are used to cast moving spots of light around a dance floor.
The hall of mirrors, commonly found in amusement parks, is an attraction in which a number of distorting mirrors produce unusual reflections of the visitor.
Mirrors are employed in kaleidoscopes, personal entertainment devices invented in Scotland by Sir David Brewster.
Mirrors are often used in magic to create an illusion. One effect is called Pepper's ghost.
Mirror mazes, often found in amusement parks, contain large numbers of mirrors and sheets of glass. The idea is to navigate the disorientating array without bumping into the walls. Mirrors in attractions like this are often made of Plexiglas to prevent breakages.

Film and television
Mirrors appear in many movies and TV shows:
Black Swan is a psychological horror film that frequently incorporates mirrors. Fractured mirrors are prominent in the film, and the character Nina stabs herself with a broken piece of mirror.
Candyman is a horror film about a malevolent spirit summoned by speaking its name in front of a mirror.
Conan the Destroyer features a mirror-embedded chamber deep within Thoth-Amon's castle. The mirrors are first used in an illusory fashion to deceive Conan once he is separated from his companions, and during a battle sequence it is discovered that by breaking the mirrors Conan is able to damage and eventually defeat the otherwise-invulnerable wizard Thoth-Amon.
Dead of Night is an anthology horror film with one segment titled "The Haunted Mirror", in which a mirror casts a murderous spell.
Doctor Strange, Doctor Strange in the Multiverse of Madness, and Spider-Man: No Way Home feature the fictional mirror dimension, a parallel dimension in the Marvel Universe that reflects objects like a mirror, but in different directions.
Enter the Dragon's iconic final fight scene occurs in a mirrored room. The mirrors create multiple reflections of the fight movements but are eventually smashed.
The Floorwalker and Duck Soup contain a mirror scene in which one person comically pretends to be the mirror reflection of someone else. This mirror scene has been imitated in other comedy films and TV shows.
Hamlet has a throne room with mirrored walls. Hamlet, played by Kenneth Branagh, delivers his famous "to be or not to be" speech looking into these mirrors.
Harry Potter and the Philosopher's Stone includes the magical Mirror of Erised.
Inception contains mirrors created in a dream sequence. Ariadne creates two mirrors facing each other that form an infinite number of reflected mirrors.
Lady in the Lake, a 1947 film noir, was shot from the point of view of the protagonist, who is seen only when a mirror is included in the shot.
Last Night in Soho is a psychological horror movie with several mirror scenes. The character Ellie occasionally sees her mother's ghost in mirrors.
The Matrix uses various reflections and mirrors throughout the film. Neo watches a broken mirror mend itself, and different objects create reflections.
Mirror is a drama film by Andrei Tarkovsky that includes several scenes with mirrors and several scenes shot in reflection.
Mirror Mirror is a fantasy comedy film based on Snow White that features a Mirror House and Mirror Queen.
Mirrors is a horror film about haunted mirrors that reflect different scenes than those in front of them.
Persona relies on mirror sequences to show how the two women, played by Bibi Andersson and Liv Ullmann, reflect each other and become more alike.
Poltergeist III features mirrors that do not reflect reality and which can be used as portals to an afterlife.
Psycho by Alfred Hitchcock has several shots with mirrors that reflect characters.
Oculus is a horror film about a haunted mirror that causes people to hallucinate and commit acts of violence.
Orpheus includes an important theme of mirrors in connection to aging and death.
Sailor Moon in the fourth story arc has a major theme pertaining to mirrors, which entrap several of the Sailor Senshi, the fiancée of the protagonist, and the villain of the arc.
Taxi Driver has a notable scene with a mirror in which the character Travis, played by Robert De Niro, asks himself the famous line, "You talkin' to me?"
The Lady from Shanghai has a climactic hall-of-mirrors scene that has become a trope in cinema narratives.
Raging Bull ends with the character Jake talking to himself in a mirror, a scene that was reused in Boogie Nights.
The Shining is a horror movie that includes several scenes with mirrors. Every time the character Jack encounters a ghost, a mirror is present.
The 10th Kingdom miniseries requires the characters to use a magic mirror to travel between New York City (the 10th Kingdom) and the Nine Kingdoms of fairy tale.
The Twilight Zone episode "The Mirror" features a mirror that the character Clemente believes can provide visions and information about enemies.
Us is a horror film in which a girl sees a doppelgänger of herself in a house of mirrors in a funhouse. The mirror images reflect the similarities in the clones throughout the film.
Vertigo includes several appearances of mirrors with both Scottie and Madeleine in the frame.

Literature
Mirrors feature in literature:
Christian Bible passages 1 Corinthians 13:12 ("Through a Glass Darkly") and 2 Corinthians 3:18 reference a dim mirror-image or poor mirror-reflection.
Narcissus of Greek mythology wastes away while gazing, self-admiringly, at his reflection in water. Elsewhere in Greek mythology, Perseus is said to have defeated the Gorgon Medusa with the aid of a mirrored shield, which allowed him to avoid the petrifying effect of her visage by viewing only her reflection.
The Song dynasty history Zizhi Tongjian (Comprehensive Mirror in Aid of Governance) by Sima Guang is so titled because "mirror" (鑑, jiàn) is used metaphorically in Chinese to refer to gaining insight by reflecting on past experience or history.
In the late 6th-century Chinese folktale The Broken Mirror Restored, two lovers who are separated by war break a mirror in two so that they might find each other again by identifying the other half of the mirror. The phrase "broken mirror restored", or "broken mirror joined together", has been used as an idiom to suggest the happy reunion of a separated couple.
In the European fairy tale Snow White (collected by the Brothers Grimm in 1812), the evil queen asks, "Mirror, mirror, on the wall... who's the fairest of them all?"
In the Aarne–Thompson–Uther Index tale type ATU 329, "Hiding from the Devil (Princess)", the protagonist must find a way to hide from a princess who, in many variants, owns a magical mirror that can see the whole world.
In Tennyson's famous poem "The Lady of Shalott" (1833, revised in 1842), the titular character possesses a mirror that enables her to look out on the people of Camelot, as she is under a curse that prevents her from seeing Camelot directly.
Hans Christian Andersen's fairy tale The Snow Queen features the devil, in the form of an evil troll, who makes a magic mirror that distorts the appearance of everything it reflects.
Lewis Carroll's Through the Looking-Glass, and What Alice Found There (1871) has become one of the best-loved exemplars of the use of mirrors in literature. The text itself uses a narrative that mirrors that of its predecessor, Alice's Adventures in Wonderland.
In Oscar Wilde's novel The Picture of Dorian Gray (1890), a portrait serves as a magical mirror that reflects the true visage of the perpetually youthful protagonist, as well as the effect on his soul of each sinful act.
W. H. Auden's villanelle "Miranda" repeats the refrain "My dear one is mine as mirrors are lonely".
The short story "Tlön, Uqbar, Orbis Tertius" (1940) by Jorge Luis Borges begins with the phrase "I owe the discovery of Uqbar to the conjunction of a mirror and an encyclopedia" and contains other references to mirrors.
"The Trap", a short story by H. P. Lovecraft and Henry S. Whitehead, centers on a mirror: "It was on a certain Thursday morning in December that the whole thing began with that unaccountable motion I thought I saw in my antique Copenhagen mirror. Something, it seemed to me, stirred—something reflected in the glass, though I was alone in my quarters."
Magical objects in the Harry Potter series (1997–2011) include the Mirror of Erised and two-way mirrors.
The appendix "Variant Planes & Cosmologies" of the Dungeons & Dragons Manual of the Planes (2000) describes the Plane of Mirrors (page 204): a space existing behind reflective surfaces, experienced by visitors as a long corridor. The greatest danger to visitors upon entering the plane is the instant creation of a mirror-self with the opposite alignment of the original visitor.
The Mirror Thief, a novel by Martin Seay (2016), includes a fictional account of industrial espionage surrounding mirror-manufacturing in 16th-century Venice.
"The Glass Floor", a short story by Stephen King, concerns a mysterious and deadly mirrored floor.
"The Reaper's Image", a short story by Stephen King, concerns a rare Elizabethan mirror that displays the image of the Reaper when viewed, symbolising the death of the viewer.
Kilgore Trout, a protagonist of Kurt Vonnegut's novel Breakfast of Champions, believes that mirrors are windows to other universes and refers to them as "leaks", a recurring motif in the book.
In The Fellowship of the Ring by J. R. R. Tolkien, the Mirror of Galadriel allows one to see things of the past, present and possible future; the mirror also appears in the film adaptation.

Mirror test
Only a few animal species have been shown to have the ability to recognize themselves in a mirror, most of them mammals. Experiments have found that the following animals can pass the mirror test:
Humans. Humans tend to fail the mirror test until they are about 18 months old, or what psychoanalysts call the "mirror stage".
All great apes:
Bonobos
Chimpanzees
Orangutans
Gorillas. Initially, it was thought that gorillas did not pass the test, but there are now several well-documented reports of gorillas (such as Koko) passing the test.
Bottlenose dolphins
Orcas
Elephants
European magpies
Macro (computer science)
In computer programming, a macro (short for "macro instruction") is a rule or pattern that specifies how a certain input should be mapped to a replacement output. Applying a macro to an input is known as macro expansion. The input and output may be a sequence of lexical tokens or characters, or a syntax tree. Character macros are supported in software applications to make it easy to invoke common command sequences. Token and tree macros are supported in some programming languages to enable code reuse or to extend the language, sometimes for domain-specific languages.

Macros are used to make a sequence of computing instructions available to the programmer as a single program statement, making the programming task less tedious and less error-prone. They are called "macros" because a "big" block of code can be expanded from a "small" sequence of characters. Macros often allow positional or keyword parameters that dictate what the conditional assembler program generates, and they have been used to create entire programs or program suites according to such variables as operating system, platform or other factors. The term derives from "macro instruction", and such expansions were originally used in generating assembly language code.

Keyboard and mouse macros
Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to transform into other, usually more time-consuming, sequences of keystrokes and mouse actions. In this way, frequently used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders.

During the 1980s, macro programs – originally SmartKey, then SuperKey, KeyWorks and Prokey – were very popular, first as a means to automatically format screenplays, then for a variety of user-input tasks. These programs were based on the terminate-and-stay-resident mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications such as word processors and spreadsheets, which make it possible to create application-sensitive keyboard macros.

Keyboard macros can be used in massively multiplayer online role-playing games (MMORPGs) to perform repetitive but lucrative tasks, thus accumulating resources. As this is done without human effort, it can skew the economy of the game. For this reason, use of macros is a violation of the terms of service (TOS) or end-user license agreement (EULA) of most MMORPGs, and their administrators spend considerable effort to suppress them.

Application macros and scripting
Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros. They are created by carrying out the sequence once and letting the application record the actions. An underlying macro programming language, most commonly a scripting language, with direct access to the features of the application may also exist.

The programmers' text editor Emacs (short for "editing macros") follows this idea to its conclusion: in effect, most of the editor is made of macros. Emacs was originally devised as a set of macros in the editing language TECO; it was later ported to dialects of Lisp. Another programmers' text editor, Vim (a descendant of vi), also has an implementation of keyboard macros.
Vim can record into a register (macro) what a person types on the keyboard, and the recording can be replayed or edited afterwards, much like VBA macros for Microsoft Office. Vim also has a scripting language called Vimscript in which macros can be written.

Visual Basic for Applications (VBA) is a programming language included in Microsoft Office from Office 97 through Office 2019 (although it was available in some components of Office prior to Office 97). Its function has evolved from, and replaced, the macro languages that were originally included in some of these applications.

XEDIT, running on the Conversational Monitor System (CMS) component of VM, supports macros written in EXEC, EXEC2 and REXX, and some CMS commands were actually wrappers around XEDIT macros. The Hessling Editor (THE), a partial clone of XEDIT, supports Rexx macros using Regina and Open Object Rexx (ooRexx). Many common applications, and some on PCs, use Rexx as a scripting language.

Macro virus
VBA has access to most Microsoft Windows system calls and executes when documents are opened. This makes it relatively easy to write computer viruses in VBA, commonly known as macro viruses. In the mid-to-late 1990s, this became one of the most common types of computer virus. Since the late 1990s, however, Microsoft has patched and updated its programs to reduce the threat, and current anti-virus programs immediately counteract such attacks.

Parameterized and parameterless macros
A parameterized macro is a macro that is able to insert given objects into its expansion. This gives the macro some of the power of a function. As a simple example, in the C programming language, this is a typical macro that is not a parameterized macro, i.e., a parameterless macro:

#define PI 3.14159

This causes PI to always be replaced with 3.14159 wherever it occurs. An example of a parameterized macro, on the other hand, is this:

#define pred(x) ((x)-1)

What this macro expands to depends on what argument x is passed to it. Here are some possible expansions:

pred(2)    → ((2)-1)
pred(y+2)  → ((y+2)-1)
pred(f(5)) → ((f(5))-1)

Parameterized macros are a useful source-level mechanism for performing in-line expansion, but in languages such as C, where they use simple textual substitution, they have a number of severe disadvantages compared to other mechanisms for performing in-line expansion, such as inline functions (two of these disadvantages are illustrated in the sketch at the end of this section). The parameterized macros used in languages such as Lisp, PL/I and Scheme, on the other hand, are much more powerful, able to make decisions about what code to produce based on their arguments; thus, they can effectively be used to perform run-time code generation.

Text-substitution macros
Languages such as C and some assembly languages have rudimentary macro systems, implemented as preprocessors to the compiler or assembler. C preprocessor macros work by simple textual substitution at the token, rather than the character, level. The macro facilities of more sophisticated assemblers, e.g., IBM High Level Assembler (HLASM), cannot be implemented with a preprocessor; the code for assembling instructions and data is interspersed with the code for assembling macro invocations.

A classic use of macros is in the computer typesetting system TeX and its derivatives, where most functionality is based on macros. MacroML is an experimental system that seeks to reconcile static typing and macro systems. Nemerle has typed syntax macros, and one productive way to think of these syntax macros is as a multi-stage computation.
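As a minimal sketch of the disadvantages noted above (the macro names here are chosen purely for illustration and are not part of any standard library), the following C program shows two classic failure modes of purely textual substitution: operator precedence and double evaluation of arguments.

#include <stdio.h>

/* Naive macro: no parentheses around the expansion or the argument. */
#define SQUARE(x) x * x
/* Conventional fix: parenthesize everything. */
#define SQUARE_OK(x) ((x) * (x))
/* Even a fully parenthesized macro still evaluates each argument
   every time it appears in the expansion. */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void) {
    /* Precedence: SQUARE(1 + 2) expands to 1 + 2 * 1 + 2, which is 5, not 9. */
    printf("%d\n", SQUARE(1 + 2));    /* prints 5 */
    printf("%d\n", SQUARE_OK(1 + 2)); /* prints 9 */

    /* Double evaluation: the argument i++ is substituted twice,
       so i is incremented twice by a single "call". */
    int i = 3;
    int m = MIN(i++, 10);
    printf("m=%d i=%d\n", m, i);      /* prints m=4 i=5, not what a function would do */
    return 0;
}

An inline function avoids both problems, because its arguments are evaluated exactly once, before the call.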
Other examples of text-substitution macro systems include:
m4, a sophisticated stand-alone macro processor
TRAC
Macro Extension TAL, accompanying Template Attribute Language
SMX, for web pages
ML/1 (Macro Language One)
troff and nroff, for typesetting and formatting Unix manpages
CMS EXEC, for command-line macros and application macros
EXEC 2 in Conversational Monitor System (CMS), for command-line macros and application macros
CLIST in IBM's Time Sharing Option (TSO), for command-line macros and application macros
REXX, for command-line macros and application macros in, e.g., AmigaOS, CMS, OS/2 and TSO
SCRIPT, for formatting documents
Various shells for, e.g., Linux
Some major applications have been written as text macros invoked by other applications, e.g., by XEDIT in CMS.

Embeddable languages
Some languages, such as PHP, can be embedded in free-format text, or in the source code of other languages. The mechanism by which the code fragments are recognised (for instance, being bracketed by <?php and ?>) is similar to a textual macro language, but these are much more powerful, fully featured languages.

Procedural macros
Macros in the PL/I language are written in a subset of PL/I itself: the compiler executes "preprocessor statements" at compilation time, and the output of this execution forms part of the code that is compiled. The ability to use a familiar procedural language as the macro language gives power much greater than that of text-substitution macros, at the expense of a larger and slower compiler. Macros in PL/I, as in many assemblers, may have side effects, e.g., setting variables that other macros can access.

Frame technology's frame macros have their own command syntax but can also contain text in any language. Each frame is both a generic component in a hierarchy of nested subassemblies and a procedure for integrating itself with its subassembly frames (a recursive process that resolves integration conflicts in favor of higher-level subassemblies). The outputs are custom documents, typically compilable source modules. Frame technology can avoid the proliferation of similar but subtly different components, an issue that has plagued software development since the invention of macros and subroutines.

Most assembly languages have less powerful procedural macro facilities, for example allowing a block of code to be repeated N times for loop unrolling; but these have a completely different syntax from the actual assembly language.

Syntactic macros
Macro systems that work at the level of lexical tokens, such as the C preprocessor described earlier, cannot preserve the lexical structure reliably. Syntactic macro systems work instead at the level of abstract syntax trees, and so preserve the lexical structure of the original program. The most widely used implementations of syntactic macro systems are found in Lisp-like languages. These languages are especially suited to this style of macro because of their uniform, parenthesized syntax (known as S-expressions). In particular, uniform syntax makes it easier to determine the invocations of macros. Lisp macros transform the program structure itself, with the full language available to express such transformations.

While syntactic macros are most often found in Lisp-like languages, they are also available in other languages such as Prolog, Erlang, Dylan, Scala, Nemerle, Rust, Elixir, Nim, Haxe, and Julia, and as third-party extensions to JavaScript and C#.
Early Lisp macros
Before Lisp had macros, it had so-called FEXPRs, function-like operators whose inputs were not the values computed by the arguments but rather the syntactic forms of the arguments, and whose outputs were values to be used in the computation. In other words, FEXPRs were implemented at the same level as EVAL, and provided a window into the meta-evaluation layer. This was generally found to be a difficult model to reason about effectively. In 1963, Timothy Hart proposed adding macros to Lisp 1.5 in AI Memo 57: MACRO Definitions for LISP.

Anaphoric macros
An anaphoric macro is a type of programming macro that deliberately captures some form supplied to the macro, which may then be referred to by an anaphor (an expression referring to another). Anaphoric macros first appeared in Paul Graham's On Lisp, and their name is a reference to linguistic anaphora: the use of words as a substitute for preceding words.

Hygienic macros
In the mid-1980s, a number of papers introduced the notion of hygienic macro expansion (syntax-rules), a pattern-based system in which the syntactic environments of the macro definition and the macro use are distinct, allowing macro definers and users not to worry about inadvertent variable capture (cf. referential transparency). Hygienic macros have been standardized for Scheme in the R5RS, R6RS, and R7RS standards. A number of competing implementations of hygienic macros exist, such as syntax-rules, syntax-case, explicit renaming, and syntactic closures. Both syntax-rules and syntax-case have been standardized in the Scheme standards.

More recently, Racket has combined the notion of hygienic macros with a "tower of evaluators", so that the syntactic expansion time of one macro system is the ordinary runtime of another block of code, and has shown how to apply interleaved expansion and parsing in a non-parenthesized language. A number of languages other than Scheme either implement hygienic macros or implement partially hygienic systems; examples include Scala, Rust, Elixir, Julia, Dylan, Nim, and Nemerle.

Applications
Evaluation order
Macro systems have a range of uses. Being able to choose the order of evaluation (see lazy evaluation and non-strict functions) enables the creation of new syntactic constructs (e.g. control structures) indistinguishable from those built into the language. For instance, in a Lisp dialect that has cond but lacks if, it is possible to define the latter in terms of the former using macros. For example, Scheme has both continuations and hygienic macros, which enable a programmer to design their own control abstractions, such as looping and early-exit constructs, without the need to build them into the language.

Data sub-languages and domain-specific languages
Next, macros make it possible to define data languages that are immediately compiled into code, which means that constructs such as state machines can be implemented in a way that is both natural and efficient.

Binding constructs
Macros can also be used to introduce new binding constructs. The most well-known example is the transformation of let into the application of a function to a set of arguments.

Felleisen conjectures that these three categories make up the primary legitimate uses of macros in such a system. Others have proposed alternative uses of macros, such as anaphoric macros in macro systems that are unhygienic or allow selective unhygienic transformation.
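Because the C preprocessor is unhygienic, the variable-capture problem that hygienic macro systems are designed to prevent can be demonstrated directly in C. A minimal sketch, with all names invented for illustration:

#include <stdio.h>

/* An unhygienic macro: it introduces its own binding named "tmp".
   A hygienic macro system would rename this binding automatically;
   the C preprocessor does not. */
#define SWAP(a, b) do { int tmp = (a); (a) = (b); (b) = tmp; } while (0)

int main(void) {
    int x = 1, y = 2;
    SWAP(x, y);            /* fine: prints 2 1 */
    printf("%d %d\n", x, y);

    /* Capture: the caller passes a variable that happens to be named
       "tmp". Inside the expansion, every occurrence of "tmp" refers to
       the macro's own local, so the swap silently fails. */
    int p = 1, tmp = 2;
    SWAP(p, tmp);          /* prints 1 2 -- nothing was swapped */
    printf("%d %d\n", p, tmp);
    return 0;
}

C programmers conventionally work around this by choosing unlikely temporary names; hygienic macro systems make the renaming automatic and guaranteed.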
The interaction of macros and other language features has been a productive area of research. For example, components and modules are useful for large-scale programming, but the interaction of macros and these other constructs must be defined for their use together. Module and component systems that can interact with macros have been proposed for Scheme and other languages with macros. For example, the Racket language extends the notion of a macro system to a syntactic tower, where macros can be written in languages including macros, using hygiene to ensure that syntactic layers are distinct and allowing modules to export macros to other modules.

Macros for machine-independent software
Macros are normally used to map a short string (the macro invocation) to a longer sequence of instructions. Another, less common, use of macros is to do the reverse: to map a sequence of instructions to a macro string. This was the approach taken by the STAGE2 Mobile Programming System, which used a rudimentary macro compiler (called SIMCMP) to map the specific instruction set of a given computer into machine-independent macros. Applications (notably compilers) written in these machine-independent macros can then be run without change on any computer equipped with the rudimentary macro compiler. The first application run in such a context is a more sophisticated and powerful macro compiler, written in the machine-independent macro language. This macro compiler is applied to itself, in a bootstrap fashion, to produce a compiled and much more efficient version of itself. The advantage of this approach is that complex applications can be ported from one computer to a very different computer with very little effort: for each target machine architecture, just the writing of the rudimentary macro compiler. The advent of modern programming languages, notably C, for which compilers are available on virtually all computers, has rendered such an approach superfluous. This was, however, one of the first instances (if not the first) of compiler bootstrapping.

Assembly language
While macro instructions can be defined by a programmer for any set of native assembler program instructions, macros are typically associated with macro libraries delivered with the operating system, allowing access to operating-system functions such as:
peripheral access by access methods (including macros such as OPEN, CLOSE, READ and WRITE)
operating-system functions such as ATTACH, WAIT and POST for subtask creation and synchronization
Typically such macros expand into executable code (e.g., for the EXIT macro instruction), into a list of define-constant instructions (e.g., for the DCB macro, or DTF (Define The File) for DOS), or into a combination of code and constants, with the details of the expansion depending on the parameters of the macro instruction (such as a reference to a file and a data area for a READ instruction). The executable code often terminated in either a branch-and-link-register instruction to call a routine, or a supervisor call instruction to call an operating system function directly.

Macros are also used to generate the Stage 2 job stream for system generation in, e.g., OS/360. Unlike typical macros, sysgen stage 1 macros do not generate data or code to be loaded into storage, but rather use the PUNCH statement to output JCL and associated data.
In older operating systems, such as those used on IBM mainframes, full operating system functionality was available only to assembler-language programs (unless assembly-language subroutines were used), as the standard macro instructions did not always have counterparts in routines available to high-level languages.

History
In the mid-1950s, when assembly-language programming was the main way to program a computer, macro-instruction features were developed to reduce source code (by generating multiple assembly statements from each macro instruction) and to enforce coding conventions (e.g. specifying input/output commands in standard ways). A macro instruction embedded in otherwise ordinary assembly source code would be processed by a macro compiler, a preprocessor to the assembler, which replaced the macro with one or more assembly instructions. The resulting code, pure assembly, would then be translated to machine code by the assembler. Two of the earliest programming installations to develop macro languages for the IBM 705 computer were Dow Chemical Corp. in Delaware and the Air Materiel Command, Ballistics Missile Logistics Office in California. Some consider macro instructions an intermediate step between assembly-language programming and the high-level programming languages that followed, such as FORTRAN and COBOL.

By the late 1950s the macro language was followed by the macro assembler, a combination in which one program served both functions, those of a macro preprocessor and an assembler, in the same package. Early examples are the FORTRAN Assembly Program (FAP) and Macro Assembly Program (IBMAP) on the IBM 709, 7094, 7040 and 7044, and Autocoder on the 7070/7072/7074. In 1959, Douglas E. Eastwood and Douglas McIlroy of Bell Labs introduced conditional and recursive macros into the popular SAP assembler, creating what is known as Macro SAP. McIlroy's 1960 paper was seminal in the area of extending any programming language, including high-level ones, through macro processors.

Macro assemblers allowed assembly-language programmers to implement their own macro language and allowed limited portability of code between two machines running the same CPU but different operating systems, for example early versions of MS-DOS and CP/M-86. The macro library would need to be written for each target machine, but not the overall assembly-language program. More powerful macro assemblers allowed the use of conditional-assembly constructs in macro instructions that could generate different code on different machines or different operating systems, reducing the need for multiple libraries.

In the 1980s and early 1990s, desktop PCs were running at only a few MHz, and assembly-language routines were commonly used to speed up programs written in C, Fortran, Pascal and other languages. These languages, at the time, used different calling conventions. Macros could be used to interface routines written in assembly language to the front end of applications written in almost any language. Again, the basic assembly-language code remained the same; only the macro libraries needed to be written for each target language.

In modern operating systems such as Unix and its derivatives, operating-system access is provided through subroutines, usually supplied by dynamic libraries. High-level languages such as C offer comprehensive access to operating-system functions, obviating the need for assembler-language programs for such functionality.
Moreover, the standard libraries of several newer programming languages, such as Go, actively discourage direct use of system calls in favor of platform-agnostic libraries where possible, to improve portability and security.
Motion
In physics, motion is the change of an object's position with respect to a reference point over time. Motion is mathematically described in terms of displacement, distance, velocity, acceleration, speed, and frame of reference to an observer, measuring the change in position of the body relative to that frame with a change in time. The branch of physics describing the motion of objects without reference to its causes is called kinematics, while the branch studying forces and their effect on motion is called dynamics.

If an object is not in motion relative to a given frame of reference, it is said to be at rest, motionless, immobile, stationary, or to have a constant or time-invariant position with reference to its surroundings. Modern physics holds that, as there is no absolute frame of reference, Newton's concept of absolute motion cannot be determined. Everything in the universe can be considered to be in motion.

Motion applies to various physical systems: objects, bodies, matter particles, matter fields, radiation, radiation fields, radiation particles, curvature, and space-time. One can also speak of the motion of images, shapes, and boundaries. In general, the term motion signifies a continuous change in the position or configuration of a physical system in space. For example, one can talk about the motion of a wave or the motion of a quantum particle, where the configuration consists of the probabilities of the wave or particle occupying specific positions.

Equations of motion
Laws of motion
In physics, the motion of bodies is described through two related sets of laws of mechanics: classical mechanics for super-atomic (larger than an atom) objects, such as cars, projectiles, planets, cells, and humans, and quantum mechanics for atomic and sub-atomic objects, such as helium, protons, and electrons. Historically, Newton and Euler formulated three laws of classical mechanics.

Classical mechanics
Classical mechanics is used for describing the motion of macroscopic objects moving at speeds significantly slower than the speed of light, from projectiles to parts of machinery, as well as astronomical objects such as spacecraft, planets, stars, and galaxies. It produces very accurate results within these domains and is one of the oldest and largest scientific descriptions in science, engineering, and technology.

Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, first published on July 5, 1687. Newton's three laws are:
A body at rest will remain at rest, and a body in motion will remain in motion, unless it is acted upon by an external force. (This is known as the law of inertia.)
Force ($\vec{F}$) is equal to the change in momentum per change in time ($\vec{F} = \mathrm{d}\vec{p}/\mathrm{d}t$). For a constant mass, force equals mass times acceleration ($\vec{F} = m\vec{a}$).
For every action, there is an equal and opposite reaction: whenever one body exerts a force $\vec{F}$ onto a second body (which, in some cases, is standing still), the second body exerts the force $-\vec{F}$ back onto the first body. The two forces are equal in magnitude and opposite in direction, so the body that exerts $\vec{F}$ is pushed backward.

Newton's three laws of motion were the first to accurately provide a mathematical model for understanding orbiting bodies in outer space. This explanation unified the motion of celestial bodies and the motion of objects on Earth.
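For a body of constant mass, the second law reduces to the familiar product of mass and acceleration; a one-line derivation in standard notation:

$$\vec{F} \;=\; \frac{\mathrm{d}\vec{p}}{\mathrm{d}t} \;=\; \frac{\mathrm{d}(m\vec{v})}{\mathrm{d}t} \;=\; m\,\frac{\mathrm{d}\vec{v}}{\mathrm{d}t} \;=\; m\vec{a} \qquad (m \text{ constant})$$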
Relativistic mechanics
Modern kinematics developed with the study of electromagnetism and refers all velocities $v$ to their ratio to the speed of light $c$. Velocity is then interpreted as rapidity, the hyperbolic angle $w$ for which $\tanh w = v/c$. Acceleration, the change of velocity over time, then changes rapidity according to Lorentz transformations. This part of mechanics is special relativity. Efforts to incorporate gravity into relativistic mechanics were made by W. K. Clifford and Albert Einstein. The development used differential geometry to describe a curved universe with gravity; the study is called general relativity.

Quantum mechanics
Quantum mechanics is a set of principles describing physical reality at the atomic level of matter (molecules and atoms) and of subatomic particles (electrons, protons, neutrons, and even smaller elementary particles such as quarks). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy, as described in the wave–particle duality. In classical mechanics, accurate measurements and predictions of the state of objects can be calculated, such as location and velocity. In quantum mechanics, due to the Heisenberg uncertainty principle, the complete state of a subatomic particle, such as its location and velocity, cannot be simultaneously determined. In addition to describing the motion of atomic-level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity, superconductivity, and biological systems, including the function of smell receptors and the structures of proteins.

Orders of magnitude
Humans, like all known things in the universe, are in constant motion; however, aside from obvious movements of the various external body parts and locomotion, humans are in motion in a variety of ways that are more difficult to perceive. Many of these "imperceptible motions" are only perceivable with the help of special tools and careful observation. The larger scales of imperceptible motion are difficult for humans to perceive for two reasons: Newton's laws of motion (particularly the third), which prevent the feeling of motion on a mass to which the observer is connected, and the lack of an obvious frame of reference that would allow individuals to easily see that they are moving. The smaller scales of these motions are too small to be detected conventionally with human senses.

Universe
Spacetime (the fabric of the universe) is expanding, meaning everything in the universe is stretching, like a rubber band. This motion is the most obscure, as it is not physical motion as such but rather a change in the very nature of the universe. The primary source of verification of this expansion was provided by Edwin Hubble, who demonstrated that all galaxies and distant astronomical objects were moving away from Earth, a relationship known as Hubble's law, as predicted by a universal expansion.

Galaxy
The Milky Way galaxy is moving through space, and many astronomers believe the velocity of this motion to be approximately relative to the observed locations of other nearby galaxies. Another reference frame is provided by the cosmic microwave background; this frame of reference indicates that the Milky Way is moving at around .

Sun and Solar System
The Milky Way is rotating around its dense Galactic Center, and thus the Sun is moving in a circle within the galaxy's gravity.
Away from the central bulge, on the outer rim, the typical stellar velocity is between . All planets and their moons move with the Sun; thus, the Solar System is in motion.

Earth
The Earth is rotating or spinning around its axis, as evidenced by day and night; at the equator the Earth has an eastward velocity of about 465 m/s (roughly 1,670 km/h). The Earth is also orbiting around the Sun in an orbital revolution. A complete orbit around the Sun takes one year, or about 365 days; it averages a speed of about 30 km/s (roughly 107,000 km/h).

Continents
The theory of plate tectonics tells us that the continents are drifting on convection currents within the mantle, causing them to move across the surface of the planet at the slow speed of approximately per year. However, the velocities of plates range widely. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of per year and the Pacific Plate moving per year. At the other extreme, the slowest-moving plate is the Eurasian Plate, progressing at a typical rate of about per year.

Internal body
The human heart is regularly contracting to move blood throughout the body. Through the larger veins and arteries of the body, blood has been found to travel at approximately 0.33 m/s, though considerable variation exists, and peak flows in the venae cavae have been found between . Additionally, the smooth muscles of hollow internal organs are moving. The most familiar example is peristalsis, by which digested food is forced through the digestive tract. Though different foods travel through the body at different rates, an average speed through the human small intestine is . The human lymphatic system is also constantly moving excess fluids, lipids, and immune-system-related products around the body. Lymph fluid has been found to move through a lymph capillary of the skin at approximately 0.0000097 m/s.

Cells
The cells of the human body have many structures and organelles that move throughout them. Cytoplasmic streaming is a way in which cells move molecular substances throughout the cytoplasm, and various motor proteins work as molecular motors within a cell, moving along the surface of cellular substrates such as microtubules. Motor proteins are typically powered by the hydrolysis of adenosine triphosphate (ATP) and convert chemical energy into mechanical work. Vesicles propelled by motor proteins have been found to have a velocity of approximately 0.00000152 m/s.

Particles
According to the laws of thermodynamics, all particles of matter are in constant random motion as long as the temperature is above absolute zero. Thus the molecules and atoms that make up the human body are vibrating, colliding, and moving. This motion can be detected as temperature: higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans, who sense the thermal energy transferring from the object being touched to their nerves. Similarly, when lower-temperature objects are touched, the senses perceive the transfer of heat away from the body as a feeling of cold.

Subatomic particles
Within the standard atomic orbital model, electrons exist in a region around the nucleus of each atom. This region is called the electron cloud. According to Bohr's model of the atom, electrons have a high velocity, and the larger the nucleus they are orbiting, the faster they would need to move.
If electrons were to move about the electron cloud in strict paths, the same way planets orbit the Sun, then electrons would be required to do so at speeds that far exceed the speed of light. However, there is no reason that one must confine oneself to this strict conceptualization (that electrons move in paths the same way macroscopic objects do); rather, one can conceptualize electrons as 'particles' that capriciously exist within the bounds of the electron cloud. Inside the atomic nucleus, the protons and neutrons are also probably moving around, due to the electrical repulsion of the protons and the presence of angular momentum of both particles.

Light
Light moves at a speed of 299,792,458 m/s in a vacuum. The speed of light in vacuum (denoted c) is also the speed of all massless particles and associated fields in a vacuum, and it is the upper limit on the speed at which energy, matter, information or causation can travel. The speed of light in vacuum is thus the upper limit for speed for all physical systems. In addition, the speed of light is an invariant quantity: it has the same value, irrespective of the position or speed of the observer. This property makes the speed of light c a natural measurement unit for speed and a fundamental constant of nature.

In 2019, the speed of light was redefined alongside all seven SI base units using what the revised SI calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. A new, but completely equivalent, wording of the metre's definition was proposed: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299 792 458 when it is expressed in the SI unit m s⁻¹." This implicit change to the speed of light was one of the changes incorporated in the 2019 revision of the SI, also termed the New SI.

Superluminal motion
Some motion appears to an observer to exceed the speed of light. Bursts of energy moving out along the relativistic jets emitted from astronomical objects can have a proper motion that appears greater than the speed of light. All of these sources are thought to contain a black hole, responsible for the ejection of mass at high velocities. Light echoes can also produce apparent superluminal motion. This occurs owing to how motion is often calculated at long distances; such calculations often fail to account for the fact that the speed of light is finite. When measuring the movement of distant objects across the sky, there is a large time delay between what has been observed and what has occurred, due to the large distance the light from the distant object has to travel to reach us. The error in such a naive calculation comes from the fact that, when an object has a component of velocity directed towards the Earth, the time delay becomes smaller as the object moves closer to the Earth, so the apparent speed as calculated is greater than the actual speed. Correspondingly, if the object is moving away from the Earth, the calculation underestimates the actual speed.

Types of motion
Simple harmonic motion – motion in which the body oscillates in such a way that the restoring force acting on it is directly proportional to the body's displacement. Mathematically, $F = -kx$: the force is directly proportional to the negative of the displacement, the negative sign signifying the restoring nature of the force
(e.g., the motion of a pendulum; see the worked equations after these lists).
Linear motion – motion that follows a straight path, whose displacement is exactly the same as its trajectory (also known as rectilinear motion)
Reciprocal motion
Brownian motion – the random movement of very small particles
Circular motion
Rotatory motion – motion about a fixed point (e.g., a Ferris wheel)
Curvilinear motion – motion along a curved path that may be planar or in three dimensions
Rolling motion – as of the wheel of a bicycle
Oscillatory motion – swinging from side to side
Vibratory motion
Combination (or simultaneous) motions – a combination of two or more of the motions listed above
Projectile motion – uniform horizontal motion combined with vertically accelerated motion

Fundamental motions
Linear motion
Circular motion
Oscillation
Wave
Relative motion
Rotary motion
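For reference, the simple-harmonic-motion entry above corresponds to the following standard equations, where $k$ is the restoring-force constant, $m$ the mass, $A$ the amplitude, and $\varphi$ the phase:

$$F = -kx, \qquad m\ddot{x} = -kx \;\Longrightarrow\; x(t) = A\cos(\omega t + \varphi), \qquad \omega = \sqrt{k/m}$$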
Mathematical model
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science and chemistry) and engineering disciplines (such as computer science and electrical engineering), as well as in non-physical systems such as the social sciences (economics, psychology, sociology, political science). Mathematical modeling can also be taught as a subject in its own right. The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and philosophy (for example, intensively in analytic philosophy). A model may help to explain a system, to study the effects of its different components, and to make predictions about behavior.

Elements of a mathematical model
Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game-theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with the results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed.

In the physical sciences, a traditional mathematical model contains most of the following elements:
Governing equations
Supplementary sub-models
Defining equations
Constitutive equations
Assumptions and constraints
Initial and boundary conditions
Classical constraints and kinematic equations

Classifications
Mathematical models are of different types:

Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear; a model is considered nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model; if one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale, with the results obtained remaining valid for the initial problem when recomposed and rescaled. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium and is thus time-invariant. Dynamic models typically are represented by differential equations or difference equations.

Explicit vs. implicit. If all of the input parameters of the overall model are known and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters that are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method; in such a case the model is said to be implicit. For example, a jet engine's physical properties, such as turbine and nozzle throat areas, can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.

Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model, while a continuous model represents objects in a continuous manner, such as the velocity field of fluid in pipe flow, temperatures and stresses in a solid, or the electric field that applies continuously over the entire model due to a point charge.

Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model (usually called a "statistical model") randomness is present, and variable states are not described by unique values, but rather by probability distributions.

Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. The application of mathematics in the social sciences outside of economics has been criticized for unfounded models. The application of catastrophe theory in science has been characterized as a floating model.

Strategic vs. non-strategic. Models used in game theory are different in the sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as the Nash equilibrium. An interesting property of strategic models is that they separate reasoning about the rules of the game from reasoning about the behavior of the players.
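As a one-line illustration of the first distinction above (linear in the parameters versus nonlinear in them), in standard statistical notation:

$$y = \beta_0 + \beta_1 x + \beta_2 x^2 + \varepsilon$$

is a linear model in the statistical sense, because it is linear in the parameters $\beta_0, \beta_1, \beta_2$ even though it is nonlinear in the predictor $x$; by contrast, a model such as $y = \beta_0 e^{\beta_1 x} + \varepsilon$ is nonlinear, because the parameter $\beta_1$ enters nonlinearly.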
Construction
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables; exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other, as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).

Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors, where one symbol represents several variables.

A priori information
Mathematical modeling problems are often classified into black-box or white-box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called a glass-box or clear-box model) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.

Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, white-box models are usually considered easier, because if the information has been used correctly, then the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.

In black-box models, one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions. Using a priori information, we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information, we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is neural networks, which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification, can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise.
The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque. Subjective information Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data. An example of when such an approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability (a sketch of this update appears below). Complexity In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification. For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting, which means that a model fits the data too closely and loses its ability to generalize to new events that were not observed before. Training, tuning, and fitting Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation.
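As a concrete sketch of the bent-coin example above: if the experimenter's subjective belief about the heads probability is encoded as a Beta prior (one common choice, assumed here rather than prescribed by the text), the update on observed flips reduces to a one-line computation. The prior parameters below are invented for illustration, as if inspection of the bent coin suggested a lean toward heads.

    # Bayesian update for the bent coin: a Beta(a, b) prior over the heads
    # probability is conjugate to coin flips, so updating just adds counts.
    def posterior_mean(a, b, heads, tails):
        """Posterior mean of the heads probability under a Beta(a, b) prior."""
        return (a + heads) / (a + b + heads + tails)

    a, b = 6.0, 4.0  # subjective prior with mean 0.6, leaning toward heads
    print(posterior_mean(a, b, heads=1, tails=0))  # 7/11, about 0.64

A single observed head nudges the estimate from the prior mean of 0.6 toward, but not all the way to, certainty, which is exactly the behavior the subjective-information discussion above calls for.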
In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting. Evaluation and assessment A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation. Prediction of empirical data Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics. Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form. Scope of the model Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation. As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics. Philosophical considerations Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied. 
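The training/verification split described above can be sketched in a few lines. Here the data, the linear model form, and the even/odd split rule are all invented for illustration; the point is that the error is measured on data the fit never saw.

    # Sketch of the train/verify split: fit on half the data, then measure
    # the error on the held-out half, which was not used for fitting.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 30)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)  # synthetic data

    train = np.arange(x.size) % 2 == 0  # every other point for fitting
    verify = ~train                     # the rest for verification

    slope, intercept = np.polyfit(x[train], y[train], deg=1)
    residuals = slope * x[verify] + intercept - y[verify]
    rmse = np.sqrt(np.mean(residuals ** 2))
    print(slope, intercept, rmse)  # rmse near the noise level indicates fit

The root-mean-square error computed here is one example of the distance metric between observed and predicted data discussed above; a loss function would play the same role in a decision-theoretic setting.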
For instance, it has been argued that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. While mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic; it is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation. Significance in the natural sciences Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits the theory of relativity and quantum mechanics must be used. It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and are thus modeled approximately on a computer: a model that is computationally feasible is made from the basic laws or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis. Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean. Some applications Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations. A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types: real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables. Examples One popular example in computer science is the mathematical modeling of various machines; one such example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to its deterministic nature, is implementable in hardware and software for solving various specific problems.
For example, the following is a DFA M with a binary alphabet, which requires that the input contains an even number of 0s: M = (Q, Σ, δ, q0, F), where Q = {S1, S2} is the set of states, Σ = {0, 1} is the alphabet, the start state is q0 = S1, the set of accepting states is F = {S1}, and the transition function δ is defined by the following state-transition table:

State | Input 0 | Input 1
S1    | S2      | S1
S2    | S1      | S2

The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted (an executable transcription of this automaton is given at the end of this section). The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star; e.g., 1* denotes any non-negative number (possibly zero) of symbols "1". Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel. Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning. Population growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and widely used population growth model is the logistic function, and its extensions. Model of a particle in a potential field. In this model we consider a particle as being a point of mass which describes a trajectory in space which is modeled by a function giving its coordinates in space as a function of time. The potential field is given by a function V : R³ → R, and the trajectory, that is, a function r : R → R³, is the solution of the differential equation

m d²r(t)/dt² = −∇V(r(t)),

which can also be written as F = ma, with the force given by F = −∇V. Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion. Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities, labeled 1, 2, ..., n, each with a market price p1, p2, ..., pn. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x1, x2, ..., xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2, ..., xn in such a way as to maximize U(x1, x2, ..., xn). The problem of rational behavior in this model then becomes the mathematical optimization problem

maximize U(x1, x2, ..., xn) subject to p1x1 + p2x2 + ... + pnxn ≤ M.

This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria. The neighbour-sensing model explains mushroom formation from the initially chaotic fungal network. In computer science, mathematical models may be used to simulate computer networks. In mechanics, mathematical models may be used to analyze the movement of a rocket model.
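The even-number-of-0s automaton above is small enough to transcribe directly into code; this sketch mirrors the state-transition table exactly.

    # Direct transcription of the even-number-of-0s DFA described above.
    # S1 = even count of 0s so far (accepting), S2 = odd count.
    TRANSITIONS = {
        ("S1", "0"): "S2", ("S1", "1"): "S1",
        ("S2", "0"): "S1", ("S2", "1"): "S2",
    }

    def accepts(input_string):
        """Return True if the input contains an even number of 0s."""
        state = "S1"  # start state q0
        for symbol in input_string:
            state = TRANSITIONS[(state, symbol)]
        return state == "S1"  # F = {S1}

    print(accepts("1001"))  # True: two 0s
    print(accepts("1011"))  # False: one 0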
Mathematics
Basics
null
20608
https://en.wikipedia.org/wiki/Messier%20object
Messier object
The Messier objects are a set of 110 astronomical objects catalogued by the French astronomer Charles Messier in his Catalogue des Nébuleuses et des Amas d'Étoiles (Catalogue of Nebulae and Star Clusters). Because Messier was interested only in finding comets, he created a list of those non-comet objects that frustrated his hunt for them. This list, which Messier created in collaboration with his assistant Pierre Méchain, is now known as the Messier catalogue. The Messier catalogue is one of the most famous lists of astronomical objects, and many objects on the list are still referenced by their Messier numbers. The catalogue includes most of the astronomical deep-sky objects that can be easily observed from Earth's Northern Hemisphere; many Messier objects are popular targets for amateur astronomers. A preliminary version of the catalogue first appeared in 1774 in the Memoirs of the French Academy of Sciences for the year 1771. The first version of Messier's catalogue contained 45 objects, which were not numbered. Eighteen of the objects were discovered by Messier; the rest had been previously observed by other astronomers. By 1780 the catalogue had increased to 70 objects. The final version of the catalogue containing 103 objects was published in 1781 in the Connaissance des Temps for the year 1784. However, due to what was thought for a long time to be the incorrect addition of Messier 102, the total number remained 102. Other astronomers, using side notes in Messier's texts, eventually filled out the list up to 110 objects. The catalogue consists of a diverse range of astronomical objects, from star clusters and nebulae to galaxies. For example, Messier 1 is a supernova remnant, known as the Crab Nebula, and the great spiral Andromeda Galaxy is M31. Further inclusions followed; the first addition came from Nicolas Camille Flammarion in 1921, who added Messier 104 after finding Messier's side note in his personal copy of the 1781 edition of the catalogue. M105 to M107 were added by Helen Sawyer Hogg in 1947, M108 and M109 by Owen Gingerich in 1960, and M110 by Kenneth Glyn Jones in 1967. Lists and editions The first edition of 1774 covered 45 objects (M1 to M45). The total list published by Messier in 1781 contained 103 objects, but the list was expanded through successive additions by other astronomers, motivated by notes in Messier's and Méchain's texts indicating that at least one of them knew of the additional objects. The first such addition came from Nicolas Camille Flammarion in 1921, who added Messier 104 after finding a note Messier made in a copy of the 1781 edition of the catalogue. M105 to M107 were added by Helen Sawyer Hogg in 1947, M108 and M109 by Owen Gingerich in 1960, and M110 by Kenneth Glyn Jones in 1967. M102 was observed by Méchain, who communicated his notes to Messier. Méchain later concluded that this object was simply a re-observation of M101, though some sources suggest that the object Méchain observed was the galaxy NGC 5866 and identify that as M102. Messier's final catalogue was included in the Connaissance des Temps pour l'Année 1784 (Knowledge of the Times for the Year 1784), the French official yearly publication of astronomical ephemerides. Messier lived and did his astronomical work at the Hôtel de Cluny (now the Musée national du Moyen Âge), in Paris, France. The list he compiled contains only objects found in the sky area he could observe: from the north celestial pole to a celestial latitude of about −35.7°. He did not observe or list objects visible only from farther south, such as the Large and Small Magellanic Clouds.
Observations The Messier catalogue comprises nearly all of the most spectacular examples of the five types of deep-sky object – diffuse nebulae, planetary nebulae, open clusters, globular clusters, and galaxies – visible from European latitudes. Furthermore, almost all of the Messier objects are among the closest to Earth in their respective classes, which makes them heavily studied with professional class instruments that today can resolve very small and visually significant details in them. A summary of the astrophysics of each Messier object can be found in the Concise Catalog of Deep-sky Objects. Since these objects could be observed visually with the relatively small-aperture refracting telescope (approximately 100 mm ≈ 4 inches) used by Messier to study the sky from downtown Paris, they are among the brightest and thus most attractive astronomical objects (popularly called deep-sky objects) observable from Earth, and are popular targets for visual study and astrophotography available to modern amateur astronomers using larger aperture equipment. In early spring, astronomers sometimes gather for "Messier marathons", when all of the objects can be viewed over a single night. Messier objects Star chart of Messier objects
Physical sciences
Surveys and Catalogs
Astronomy
20613
https://en.wikipedia.org/wiki/Morphine
Morphine
Morphine, formerly also called morphia, is an opiate that is found naturally in opium, a dark brown resin produced by drying the latex of opium poppies (Papaver somniferum). It is mainly used as an analgesic (pain medication). Morphine can be administered in numerous ways: by mouth; under the tongue; by inhalation; by injection into a vein, into a muscle, under the skin, or into the spinal cord area; transdermally; or rectally as a suppository. It acts directly on the central nervous system (CNS) to induce analgesia and alter perception and emotional response to pain. Physical and psychological dependence and tolerance may develop with repeated administration. It can be taken for both acute pain and chronic pain and is frequently used for pain from myocardial infarction, kidney stones, and during labor. Its maximum effect is reached after about 20 minutes when administered intravenously and 60 minutes when administered by mouth, while the duration of its effect is 3–7 hours. Long-acting formulations of morphine are sold under the brand names MS Contin and Kadian, among others. Generic long-acting formulations are also available. Common side effects of morphine include drowsiness, euphoria, nausea, dizziness, sweating, and constipation. Potentially serious side effects of morphine include decreased respiratory effort, vomiting, and low blood pressure. Morphine is highly addictive and prone to abuse. If one's dose is reduced after long-term use, opioid withdrawal symptoms may occur. Caution is advised for the use of morphine during pregnancy or breastfeeding, as it may affect the health of the baby. Morphine was first isolated in 1804 by German pharmacist Friedrich Sertürner. This is believed to be the first isolation of a medicinal alkaloid from a plant. Merck began marketing it commercially in 1827. Morphine was more widely used after the invention of the hypodermic syringe in 1853–1855. Sertürner originally named the substance morphium, after the Greek god of dreams, Morpheus, as it has a tendency to cause sleep. The primary source of morphine is isolation from poppy straw of the opium poppy. In 2013, approximately 523 tons of morphine were produced. Approximately 45 tons were used directly for pain, an increase of 400% over the preceding twenty years. Most use for this purpose was in the developed world. About 70% of morphine is used to make other opioids such as hydromorphone, oxymorphone, and heroin. It is a Schedule II drug in the United States, Class A in the United Kingdom, and Schedule I in Canada. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 139th most commonly prescribed medication in the United States, with more than 4 million prescriptions. It is available as a generic medication. Medical uses Pain Morphine is used primarily to treat both acute and chronic severe pain. Its duration of analgesia is about three to seven hours. Side effects of nausea and constipation are rarely severe enough to warrant stopping treatment. It is used for pain due to myocardial infarction and for labor pains. However, concerns exist that morphine may increase mortality in the event of non-ST-elevation myocardial infarction. Morphine has also traditionally been used in the treatment of acute pulmonary edema. However, a 2006 review found little evidence to support this practice. A 2016 Cochrane review concluded that morphine is effective in relieving cancer pain.
Shortness of breath Morphine is beneficial in reducing the symptom of shortness of breath due to both cancer and non-cancer causes. In the setting of breathlessness at rest or on minimal exertion from conditions such as advanced cancer or end-stage cardiorespiratory diseases, regular, low-dose sustained-release morphine significantly reduces breathlessness safely, with its benefits maintained over time. Opioid use disorder Morphine is also available as a slow-release formulation for opiate substitution therapy (OST) in Austria, Germany, Bulgaria, Slovenia, and Canada for persons with opioid addiction who cannot tolerate either methadone or buprenorphine. Contraindications Relative contraindications to morphine include respiratory depression when appropriate equipment is not available. Although it has previously been thought that morphine was contraindicated in acute pancreatitis, a review of the literature shows no evidence for this. Adverse effects Constipation Like loperamide and other opioids, morphine acts on the myenteric plexus in the intestinal tract, reducing gut motility and causing constipation. The gastrointestinal effects of morphine are mediated primarily by μ-opioid receptors in the bowel. By inhibiting gastric emptying and reducing propulsive peristalsis of the intestine, morphine decreases the rate of intestinal transit. Reduction in gut secretion and increased intestinal fluid absorption also contribute to the constipating effect. Opioids also may act on the gut indirectly through tonic gut spasms after inhibition of nitric oxide generation. This effect was shown in animals when a nitric oxide precursor, L-arginine, reversed morphine-induced changes in gut motility. Hormone imbalance Clinical studies consistently conclude that morphine, like other opioids, often causes hypogonadism and hormone imbalances in chronic users of both sexes. This side effect is dose-dependent and occurs in both therapeutic and recreational users. Morphine can interfere with menstruation by suppressing levels of luteinizing hormone. Many studies suggest the majority (perhaps as many as 90%) of chronic opioid users have opioid-induced hypogonadism. This effect may cause the increased likelihood of osteoporosis and bone fracture observed in chronic morphine users. Studies suggest the effect is temporary. The effect of low-dose or acute use of morphine on the endocrine system remains unclear. Effects on human performance Most reviews conclude that opioids produce minimal impairment of human performance on tests of sensory, motor, or attentional abilities. However, recent studies have been able to show some impairments caused by morphine, which is not surprising, given that morphine is a central nervous system depressant. Morphine has resulted in impaired functioning on critical flicker frequency (a measure of overall CNS arousal) and impaired performance on the Maddox wing test (a measure of the deviation of the visual axes of the eyes). Few studies have investigated the effects of morphine on motor abilities; a high dose of morphine can impair finger tapping and the ability to maintain a low constant level of isometric force (i.e. fine motor control is impaired), though no studies have shown a correlation between morphine and gross motor abilities. In terms of cognitive abilities, one study has shown that morphine may negatively impact anterograde and retrograde memory, but these effects are minimal and transient.
Overall, it seems that acute doses of opioids in non-tolerant subjects produce minor effects in some sensory and motor abilities, and perhaps also in attention and cognition. The effects of morphine will likely be more pronounced in opioid-naive subjects than in chronic opioid users. In chronic opioid users, such as those on Chronic Opioid Analgesic Therapy (COAT) for managing severe, chronic pain, behavioural testing has shown normal functioning on perception, cognition, coordination, and behaviour in most cases. One 2000 study analysed COAT patients to determine whether they were able to safely operate a motor vehicle. The findings from this study suggest that stable opioid use does not significantly impair abilities inherent in driving (this includes physical, cognitive, and perceptual skills). COAT patients showed rapid completion of tasks that require the speed of responding for successful performance (e.g., Rey Complex Figure Test) but made more errors than controls. COAT patients showed no deficits in visual-spatial perception and organization (as shown in the WAIS-R Block Design Test) but did show impaired immediate and short-term visual memory (as shown on the Rey Complex Figure Test – Recall). These patients showed no impairments in higher-order cognitive abilities (i.e., planning). COAT patients appeared to have difficulty following instructions and showed a propensity toward impulsive behaviour, yet this did not reach statistical significance. Notably, this study reveals that COAT patients have no domain-specific deficits, which supports the notion that chronic opioid use has minor effects on psychomotor, cognitive, or neuropsychological functioning. Reinforcement disorders Addiction Morphine is a highly addictive substance. Numerous studies, including one published in The Lancet, ranked morphine/heroin as the #1 most addictive substance, followed by cocaine at #2, nicotine at #3, barbiturates at #4, and ethanol at #5. In controlled studies comparing the physiological and subjective effects of heroin and morphine in individuals formerly addicted to opiates, subjects showed no preference for one drug over the other. Equipotent, injected doses had comparable action courses, with heroin crossing the blood–brain barrier slightly quicker. There was no difference in subjects' self-rated feelings of euphoria, ambition, nervousness, relaxation, or drowsiness. Short-term addiction studies by the same researchers demonstrated that tolerance developed at a similar rate to both heroin and morphine. When compared to the opioids hydromorphone, fentanyl, oxycodone, and pethidine, former addicts showed a strong preference for heroin and morphine, suggesting that heroin and morphine are particularly susceptible to abuse and addiction. Morphine and heroin also produced higher rates of euphoria and other positive subjective effects when compared to these other opioids. The choice of heroin and morphine over other opioids by former drug addicts may also be because heroin is an ester of morphine and a morphine prodrug, meaning they are essentially identical drugs in vivo. Heroin is converted to morphine before binding to the opioid receptors in the brain and spinal cord, where morphine causes the subjective effects that addicted individuals seek.
Tolerance Several hypotheses are given about how tolerance develops, including opioid receptor phosphorylation (which would change the receptor conformation), functional decoupling of receptors from G-proteins (leading to receptor desensitization), μ-opioid receptor internalization or receptor down-regulation (reducing the number of available receptors for morphine to act on), and upregulation of the cAMP pathway (a counterregulatory mechanism to opioid effects) (for a review of these processes, see Koch and Hollt). Dependence and withdrawal Cessation of dosing with morphine creates the prototypical opioid withdrawal syndrome, which, unlike that of barbiturates, benzodiazepines, alcohol, or sedative-hypnotics, is not fatal by itself in otherwise healthy people. Acute morphine withdrawal, along with that of any other opioid, proceeds through a number of stages. Other opioids differ in the intensity and length of each, and weak opioids and mixed agonist-antagonists may have acute withdrawal syndromes that do not reach the highest level. As commonly cited, they are:

Stage I, 6 h to 14 h after last dose: Drug craving, anxiety, irritability, perspiration, and mild to moderate dysphoria
Stage II, 14 h to 18 h after last dose: Yawning, heavy perspiration, mild depression, lacrimation, crying, headaches, runny nose, dysphoria, also intensification of the above symptoms, "yen sleep" (a waking trance-like state)
Stage III, 16 h to 24 h after last dose: Increase in all of the above, dilated pupils, piloerection (goose bumps), muscle twitches, hot flashes, cold flashes, aching bones and muscles, loss of appetite, and the beginning of intestinal cramping
Stage IV, 24 h to 36 h after last dose: Increase in all of the above including severe cramping, restless legs syndrome (RLS), loose stool, insomnia, elevation of blood pressure, fever, increase in frequency of breathing and tidal volume, tachycardia (elevated pulse), restlessness, nausea
Stage V, 36 h to 72 h after last dose: Increase in all of the above, fetal position, vomiting, free and frequent liquid diarrhea, weight loss of 2 kg to 5 kg per 24 h, increased white cell count, and other blood changes
Stage VI, after completion of above: Recovery of appetite and normal bowel function, beginning of transition to post-acute withdrawal symptoms that are mainly psychological, but may also include increased sensitivity to pain, hypertension, colitis or other gastrointestinal afflictions related to motility, and problems with weight control in either direction

In advanced stages of withdrawal, ultrasonographic evidence of pancreatitis has been demonstrated in some patients and is presumably attributed to spasm of the pancreatic sphincter of Oddi. The withdrawal symptoms associated with morphine addiction are usually experienced shortly before the time of the next scheduled dose, sometimes within as early as a few hours (usually 6 h to 12 h) after the last administration. Early symptoms include watery eyes, insomnia, diarrhea, runny nose, yawning, dysphoria, sweating, and, in some cases, a strong drug craving. Severe headache, restlessness, irritability, loss of appetite, body aches, severe abdominal pain, nausea and vomiting, tremors, and even stronger and more intense drug craving appear as the syndrome progresses. Severe depression and vomiting are very common.
During the acute withdrawal period, systolic and diastolic blood pressures increase, usually beyond pre-morphine levels, and heart rate increases; these changes have the potential to cause a heart attack, blood clot, or stroke. Chills or cold flashes with goose bumps alternating with flushing (hot flashes), kicking movements of the legs, and excessive sweating are also characteristic symptoms. Severe pains in the bones and muscles of the back and extremities occur, as do muscle spasms. At any point during this process, a suitable narcotic can be administered that will dramatically reverse the withdrawal symptoms. Major withdrawal symptoms peak between 48 h and 96 h after the last dose and subside after about 8 to 12 days. Sudden discontinuation of morphine by heavily dependent users who are in poor health is very rarely fatal. Morphine withdrawal is considered less dangerous than alcohol, barbiturate, or benzodiazepine withdrawal. The psychological dependence associated with morphine addiction is complex and protracted. Long after the physical need for morphine has passed, addicts will usually continue to think and talk about the use of morphine (or other drugs) and feel strange or overwhelmed coping with daily activities without being under the influence of morphine. Psychological withdrawal from morphine is usually a very long and painful process. Addicts often experience severe depression, anxiety, insomnia, mood swings, forgetfulness, low self-esteem, confusion, paranoia, and other psychological problems. Without intervention, the syndrome will run its course, and most of the overt physical symptoms will disappear within 7 to 10 days. A high probability of relapse exists after morphine withdrawal when neither the physical environment nor the behavioral motivators that contributed to the abuse have been altered. A testament to morphine's addictive and reinforcing nature is its relapse rate. Users of morphine have one of the highest relapse rates among all drug users, ranging up to 98% in the estimation of some medical experts. Toxicity A large overdose can cause asphyxia and death by respiratory depression if the person does not receive medical attention immediately. Overdose treatment includes the administration of naloxone, which completely reverses morphine's effects but may result in the immediate onset of withdrawal in opiate-addicted subjects. Multiple doses may be needed, as the duration of action of morphine is longer than that of naloxone. Pharmacology Pharmacodynamics Due to its long history and established use as a pain medication, this compound has become the benchmark to which all other opioids are compared. It interacts predominantly with the μ–δ-opioid (Mu-Delta) receptor heteromer. The μ-binding sites are discretely distributed in the human brain, with high densities in the posterior amygdala, hypothalamus, thalamus, nucleus caudatus, putamen, and certain cortical areas. They are also found on the terminal axons of primary afferents within laminae I and II (substantia gelatinosa) of the spinal cord and in the spinal nucleus of the trigeminal nerve. Morphine is a phenanthrene opioid receptor agonist – its main effect is binding to and activating the μ-opioid receptor (MOR) in the central nervous system. Its intrinsic activity at the MOR is heavily dependent on the assay and tissue being tested; in some situations it is a full agonist while in others it can be a partial agonist or even antagonist.
In clinical settings, morphine exerts its principal pharmacological effect on the central nervous system and gastrointestinal tract. Its primary actions of therapeutic value are analgesia and sedation. Activation of the MOR is associated with analgesia, sedation, euphoria, physical dependence, and respiratory depression. Morphine is also a κ-opioid receptor (KOR) and δ-opioid receptor (DOR) agonist. Activation of the KOR is associated with spinal analgesia, miosis (pinpoint pupils), and psychotomimetic effects. The DOR is thought to play a role in analgesia. Although morphine does not bind to the σ receptor, it has been shown that σ receptor agonists, such as (+)-pentazocine, inhibit morphine analgesia, and σ receptor antagonists enhance morphine analgesia, suggesting downstream involvement of the σ receptor in the actions of morphine. The effects of morphine can be countered with opioid receptor antagonists such as naloxone and naltrexone; the development of tolerance to morphine may be inhibited by NMDA receptor antagonists such as ketamine, dextromethorphan, and memantine. The rotation of morphine with chemically dissimilar opioids in the long-term treatment of pain will slow down the growth of tolerance in the longer run, particularly agents known to have significantly incomplete cross-tolerance with morphine such as levorphanol, ketobemidone, piritramide, and methadone and its derivatives; all of these drugs also have NMDA antagonist properties. It is believed that the strong opioid with the most incomplete cross-tolerance with morphine is either methadone or dextromoramide. Analgesia creation Morphine creates analgesia through the activation of a specific group of neurons in the rostral ventromedial medulla, called the "morphine ensemble." This ensemble includes glutamatergic neurons that project to the spinal cord, known as RVMBDNF neurons. These neurons connect to inhibitory neurons in the spinal cord, called SCGal neurons, which release the neurotransmitter GABA and the neuropeptide galanin. The inhibition of SCGal neurons is crucial for morphine's pain-relieving effects. Additionally, the neurotrophic factor BDNF, produced within the RVMBDNF neurons, is required for morphine's action. Increasing BDNF levels enhances morphine's analgesic effects, even at lower doses. Gene expression Studies have shown that morphine can alter the expression of several genes. A single injection of morphine has been shown to alter the expression of two major groups of genes, for proteins involved in mitochondrial respiration and for cytoskeleton-related proteins. Effects on the immune system Morphine has long been known to act on receptors expressed in cells of the central nervous system resulting in pain relief and analgesia. In the 1970s and '80s, evidence suggesting that people addicted to opioids show an increased risk of infection (such as increased pneumonia, tuberculosis, and HIV/AIDS) led scientists to believe that morphine may also affect the immune system. This possibility increased interest in the effect of chronic morphine use on the immune system. The first step in determining that morphine may affect the immune system was to establish that the opiate receptors known to be expressed on cells of the central nervous system are also expressed on cells of the immune system. One study successfully showed that dendritic cells, part of the innate immune system, display opiate receptors. 
Dendritic cells are responsible for producing cytokines, which are the tools for communication in the immune system. This same study showed that dendritic cells chronically treated with morphine during their differentiation produce more interleukin-12 (IL-12), a cytokine responsible for promoting the proliferation, growth, and differentiation of T-cells (another cell of the adaptive immune system) and less interleukin-10 (IL-10), a cytokine responsible for promoting a B-cell immune response (B cells produce antibodies to fight off infection). This regulation of cytokines appears to occur via the p38 MAPKs (mitogen-activated protein kinase)-dependent pathway. Usually, the p38 within the dendritic cell expresses TLR 4 (toll-like receptor 4), which is activated through the ligand LPS (lipopolysaccharide). This causes the p38 MAPK to be phosphorylated. This phosphorylation activates the p38 MAPK to begin producing IL-10 and IL-12. When the dendritic cells are chronically exposed to morphine during their differentiation process and then treated with LPS, the production of cytokines is different. Once treated with morphine, the p38 MAPK does not produce IL-10, instead favoring the production of IL-12. The exact mechanism through which the production of one cytokine is increased in favor over another is not known. Most likely, the morphine causes increased phosphorylation of the p38 MAPK. Transcriptional level interactions between IL-10 and IL-12 may further increase the production of IL-12 once IL-10 is not being produced. This increased production of IL-12 causes increased T-cell immune response. Further studies on the effects of morphine on the immune system have shown that morphine influences the production of neutrophils and other cytokines. Since cytokines are produced as part of the immediate immunological response (inflammation), it has been suggested that they may also influence pain. In this way, cytokines may be a logical target for analgesic development. Recently, one study has used an animal model (hind-paw incision) to observe the effects of morphine administration on the acute immunological response. Following the hind-paw incision, pain thresholds and cytokine production were measured. Normally, cytokine production in and around the wounded area increases to fight infection and control healing (and, possibly, to control pain), but pre-incisional morphine administration (0.1 mg/kg to 10.0 mg/kg) reduced the number of cytokines found around the wound in a dose-dependent manner. The authors suggest that morphine administration in the acute post-injury period may reduce resistance to infection and may impair the healing of the wound. Pharmacokinetics Absorption and metabolism Morphine can be taken orally, sublingually, bucally, rectally, subcutaneously, intranasally, intravenously, intrathecally or epidurally and inhaled via a nebulizer. As a recreational drug, it is becoming more common to inhale ("Chasing the Dragon"), but, for medical purposes, intravenous (IV) injection is the most common method of administration. Morphine is subject to extensive first-pass metabolism (a large proportion is broken down in the liver), so, if taken orally, only 40% to 50% of the dose reaches the central nervous system. Resultant plasma levels after subcutaneous (SC), intramuscular (IM), and IV injection are all comparable. After IM or SC injections, morphine plasma levels peak in approximately 20 min, and, after oral administration, levels peak in approximately 30 min. 
Morphine is metabolised primarily in the liver, and approximately 87% of a dose of morphine is excreted in the urine within 72 h of administration. Morphine is metabolized primarily into morphine-3-glucuronide (M3G) and morphine-6-glucuronide (M6G) via glucuronidation by the phase II metabolism enzyme UDP-glucuronosyl transferase-2B7 (UGT2B7). About 60% of morphine is converted to M3G, and 6% to 10% is converted to M6G. Not only does the metabolism occur in the liver but it may also take place in the brain and the kidneys. M3G does not undergo opioid receptor binding and has no analgesic effect. M6G binds to μ-receptors and is half as potent an analgesic as morphine in humans. Morphine may also be metabolized into small amounts of normorphine, codeine, and hydromorphone. Metabolism rate is determined by gender, age, diet, genetic makeup, disease state (if any), and use of other medications. The elimination half-life of morphine is approximately 120 min, though there may be slight differences between men and women (a small arithmetic sketch of this decay follows at the end of this section). Morphine can be stored in fat and thus can be detectable even after death. Morphine can cross the blood–brain barrier, but, because of poor lipid solubility, protein binding, rapid conjugation with glucuronic acid, and ionization, it does not cross easily. Heroin, which is derived from morphine, crosses the blood–brain barrier more easily, making it more potent. Extended-release There are extended-release formulations of orally administered morphine whose effect lasts longer, which can be given once per day. Brand names for this formulation of morphine include Avinza, Kadian, MS Contin, Dolcontin, and DepoDur. For constant pain, the relieving effect of extended-release morphine given once (for Kadian) or twice (for MS Contin) every 24 hours is roughly the same as multiple administrations of immediate-release (or "regular") morphine. Extended-release morphine can be administered together with "rescue doses" of immediate-release morphine as needed in case of breakthrough pain, each generally consisting of 5% to 15% of the 24-hour extended-release dosage. Detection in body fluids Morphine and its major metabolites, morphine-3-glucuronide and morphine-6-glucuronide, can be detected in blood, plasma, hair, and urine using an immunoassay. Chromatography can be used to test for each of these substances individually. Some testing procedures hydrolyze metabolic products into morphine before the immunoassay, which must be considered when comparing morphine levels in separately published results. Morphine can also be isolated from whole blood samples by solid phase extraction (SPE) and detected using liquid chromatography-mass spectrometry (LC-MS). Ingestion of codeine or food containing poppy seeds can cause false positives. A 1999 review estimated that relatively low doses of heroin (which metabolizes immediately into morphine) are detectable by standard urine tests for 1–1.5 days after use. A 2009 review determined that, when the analyte is morphine and the limit of detection is 1 ng/ml, a 20 mg intravenous (IV) dose of morphine is detectable for 12–24 hours. A limit of detection of 0.6 ng/ml had similar results. Chirality and biological activity Morphine is a pentacyclic tertiary amine (alkaloid) with five stereogenic centers, which in principle allows 32 stereoisomeric forms. The desired analgesic activity, however, resides exclusively in the natural product, the (−)-enantiomer with the configuration (5R,6S,9R,13S,14R).
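As a purely arithmetic sketch of the elimination half-life quoted above: under a first-order (exponential) decay assumption, the remaining amount halves every 120 minutes. The one-compartment picture and the starting concentration below are illustrative assumptions, not clinical values.

    # First-order elimination with the ~120-minute half-life quoted above.
    # The initial concentration is a hypothetical, illustrative value.
    HALF_LIFE_MIN = 120.0

    def remaining(c0, minutes):
        """Concentration left after exponential (first-order) decay."""
        return c0 * 0.5 ** (minutes / HALF_LIFE_MIN)

    c0 = 80.0  # hypothetical plasma concentration in ng/ml
    for t in (0, 120, 240, 360):
        print(t, remaining(c0, t))  # 80, 40, 20, 10: halves each half-life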
Natural occurrence Morphine is the most abundant opiate found in opium, the dried latex extracted by shallowly scoring the unripe seedpods of the Papaver somniferum poppy. Morphine is generally 8–14% of the dry weight of opium. The Przemko and Norman cultivars of the opium poppy are used to produce two other alkaloids, thebaine and oripavine, which are used in the manufacture of semi-synthetic and synthetic opioids like oxycodone and etorphine. P. bracteatum does not contain morphine, codeine, or other narcotic phenanthrene-type alkaloids; this species is rather a source of thebaine. Occurrence of morphine in other Papaverales and Papaveraceae, as well as in some species of hops and mulberry trees, has not been confirmed. Morphine is produced most predominantly early in the life cycle of the plant. Past the optimum point for extraction, various processes in the plant produce codeine, thebaine, and in some cases negligible amounts of hydromorphone, dihydromorphine, dihydrocodeine, tetrahydro-thebaine, and hydrocodone (these compounds are rather synthesized from thebaine and oripavine). In the brains of mammals, morphine is detectable in trace steady-state concentrations. The human body also produces endorphins, which are chemically related endogenous opioid peptides that function as neuropeptides and have similar effects to morphine. Human biosynthesis Morphine is an endogenous opioid in humans. Various human cells are capable of synthesizing and releasing it, including white blood cells. The primary biosynthetic pathway for morphine in humans consists of:

L-tyrosine → para-tyramine or L-DOPA → dopamine
L-tyrosine → L-DOPA → 3,4-dihydroxyphenylacetaldehyde (DOPAL)
dopamine + DOPAL → (S)-norlaudanosoline →→→ (S)-reticuline → 1,2-dehydroreticulinium → (R)-reticuline → salutaridine → salutaridinol → thebaine → neopinone → codeinone → codeine → morphine

The intermediate (S)-norlaudanosoline (also known as tetrahydropapaveroline) is synthesized through the addition of DOPAL and dopamine. CYP2D6, a cytochrome P450 isoenzyme, is involved in two steps along the biosynthetic pathway, catalyzing both the biosynthesis of dopamine from tyramine and of morphine from codeine. Urinary concentrations of endogenous codeine and morphine have been found to significantly increase in individuals taking L-DOPA for the treatment of Parkinson's disease. Biosynthesis in the opium poppy Biosynthesis of morphine in the opium poppy begins with two tyrosine derivatives, dopamine and 4-hydroxyphenylacetaldehyde. Condensation of these precursors yields the primary intermediate higenamine (norcoclaurine). Subsequent action of four enzymes yields the tetrahydroisoquinoline reticuline, which is converted into salutaridine, thebaine, and oripavine. The enzymes involved in this process are the salutaridine synthase, salutaridine:NADPH 7-oxidoreductase and the codeinone reductase. Researchers are attempting to reproduce the biosynthetic pathway that produces morphine in genetically engineered yeast. As of June 2015, (S)-reticuline could be produced from sugar and (R)-reticuline could be converted to morphine, but the intermediate conversion could not yet be performed. In August 2015, the first complete synthesis of thebaine and hydrocodone in yeast was reported, but the process would need to be 100,000 times more productive to be suitable for commercial use.
Chemistry Elements of the morphine structure have been used to create completely synthetic drugs such as the morphinan family (levorphanol, dextromethorphan and others) and other groups that have many members with morphine-like qualities. The modification of morphine and the aforementioned synthetics has also given rise to non-narcotic drugs with other uses such as emetics, stimulants, antitussives, anticholinergics, muscle relaxants, local anaesthetics, general anaesthetics, and others. Morphine-derived agonist–antagonist drugs have also been developed. Structure description Morphine is a benzylisoquinoline alkaloid with two additional ring closures. As Jack DeRuiter of the Department of Drug Discovery and Development (formerly, Pharmacal Sciences), Harrison School of Pharmacy, Auburn University, stated in his Fall 2000 course notes for that earlier department's "Principles of Drug Action 2" course, "Examination of the morphine molecule reveals the following structural features important to its pharmacological profile..." Morphine and most of its derivatives do not exhibit optical isomerism, although some more distant relatives like the morphinan series (levorphanol, dextrorphan, and the racemic parent chemical racemorphan) do, and, as noted above, stereoselectivity in vivo is an important issue. Uses and derivatives Most of the licit morphine produced is used to make codeine by methylation. It is also a precursor for many drugs including heroin (3,6-diacetylmorphine), hydromorphone (dihydromorphinone), and oxymorphone (14-hydroxydihydromorphinone). Most semi-synthetic opioids, both of the morphine and codeine subgroups, are created by modifying one or more of the following:

Halogenating or making other modifications at positions 1 or 2 on the morphine carbon skeleton.
The methyl group that makes morphine into codeine can be removed or added back, or replaced with another functional group like ethyl and others, to make codeine analogues of morphine-derived drugs and vice versa. Codeine analogues of morphine-based drugs often serve as prodrugs of the stronger drug, as in codeine and morphine, hydrocodone and hydromorphone, oxycodone and oxymorphone, nicocodeine and nicomorphine, and dihydrocodeine and dihydromorphine.
Saturating, opening, or other changes to the bond between positions 7 and 8, as well as adding, removing, or modifying functional groups at these positions; saturating, reducing, eliminating, or otherwise modifying the 7–8 bond and attaching a functional group at 14 yields hydromorphinol; the oxidation of the hydroxyl group to a carbonyl and changing the 7–8 bond from double to single changes codeine into oxycodone.
Attachment, removal, or modification of functional groups at positions 3 or 6 (dihydrocodeine and related, hydrocodone, nicomorphine); in the case of moving the methyl functional group from position 3 to 6, codeine becomes heterocodeine, which is 72 times stronger than codeine, and therefore six times stronger than morphine.
Attachment of functional groups or other modification at position 14 (oxymorphone, oxycodone, naloxone).
Modifications at positions 2, 4, 5, or 17, usually along with other changes to the molecule elsewhere on the morphine skeleton. Often this is done with drugs produced by catalytic reduction, hydrogenation, oxidation, or the like, producing strong derivatives of morphine and codeine.

Many morphine derivatives can also be manufactured using thebaine or codeine as a starting material.
Replacement of the N-methyl group of morphine with an N-phenylethyl group results in a product that is 18 times more powerful than morphine in its opiate agonist potency. Combining this modification with the replacement of the 6-hydroxyl with a 6-methylene group produces a compound some 1,443 times more potent than morphine, stronger than the Bentley compounds such as etorphine (M99, the Immobilon tranquilliser dart) by some measures. Closely related to morphine are the opioids morphine-N-oxide (genomorphine), which is a pharmaceutical that is no longer in common use, and pseudomorphine, an alkaloid that exists in opium; both form as degradation products of morphine. As a result of the extensive study and use of this molecule, more than 250 morphine derivatives (also counting codeine and related drugs) have been developed since the last quarter of the 19th century. These drugs range from 25% the analgesic strength of codeine (or slightly more than 2% of the strength of morphine) to several thousand times the strength of morphine, to powerful opioid antagonists, including naloxone (Narcan), naltrexone (Trexan), diprenorphine (M5050, the reversing agent for the Immobilon dart) and nalorphine (Nalline). Some opioid agonist-antagonists, partial agonists, and inverse agonists are also derived from morphine. The receptor-activation profile of the semi-synthetic morphine derivatives varies widely, and some, like apomorphine, are devoid of narcotic effects. Chemical salts of morphine Both morphine and its hydrated form are sparingly soluble in water. For this reason, pharmaceutical companies produce sulfate and hydrochloride salts of the drug, both of which are over 300 times more water-soluble than their parent molecule. Whereas the pH of a saturated morphine hydrate solution is 8.5, the salts are acidic. Since they derive from a strong acid but weak base, they are both at about pH = 5; as a consequence, the morphine salts are mixed with small amounts of NaOH to make them suitable for injection. Many salts of morphine are used, with the most common in current clinical use being the hydrochloride, sulfate, tartrate, and citrate; less common are the methobromide, hydrobromide, hydroiodide, lactate, chloride, and bitartrate, among others. Morphine diacetate (heroin) is not a salt, but rather a further derivative; see above. Morphine meconate is a major form of the alkaloid in the poppy, as is morphine pectinate, nitrate, sulfate, and some others. Like codeine, dihydrocodeine and other (especially older) opiates, morphine has been used as the salicylate salt by some suppliers and can be easily compounded, imparting the therapeutic advantage of both the opioid and the NSAID; multiple barbiturate salts of morphine were also used in the past, as was/is morphine valerate, the salt of the acid that is the active principle of valerian. Calcium morphenate is the intermediate in various latex and poppy-straw methods of morphine production; more rarely, sodium morphenate takes its place. Morphine ascorbate and other salts such as the tannate, citrate, acetate, phosphate, and valerate may be present in poppy tea depending on the method of preparation. A number of these salts, in addition to a few others, are listed by the United States Drug Enforcement Administration for reporting purposes. Production In the opium poppy, the alkaloids are bound to meconic acid.
The method is to extract from the crushed plant with diluted sulfuric acid, which is a stronger acid than meconic acid but not so strong as to react with the alkaloid molecules. The extraction is performed in many steps (one batch of crushed plant is extracted at least six to ten times, so practically every alkaloid goes into the solution). From the solution obtained at the last extraction step, the alkaloids are precipitated by either ammonium hydroxide or sodium carbonate. The last step is purifying and separating morphine from other opium alkaloids. The somewhat similar Gregory process, developed in the United Kingdom during the Second World War, begins with stewing the entire plant, in most cases save the roots and leaves, in plain or mildly acidified water, then proceeds through steps of concentration, extraction, and purification of alkaloids. Other methods of processing "poppy straw" (i.e., dried pods and stalks) use steam, one or more of several types of alcohol, or other organic solvents. The poppy straw methods predominate in Continental Europe and the British Commonwealth, with the latex method in most common use in India. The latex method can involve either vertical or horizontal slicing of the unripe pods with a two- to five-bladed knife with a guard developed specifically for this purpose, to the depth of a fraction of a millimetre, and scoring of the pods can be done up to five times. An alternative latex method sometimes used in China in the past is to cut off the poppy heads, run a large needle through them, and collect the dried latex 24 to 48 hours later. In India, opium harvested by licensed poppy farmers is dehydrated to uniform levels of hydration at government processing centers and then sold to pharmaceutical companies that extract morphine from the opium. However, in Turkey and Tasmania, morphine is obtained by harvesting and processing the fully mature dry seed pods with attached stalks, called poppy straw. In Turkey, a water extraction process is used, while in Tasmania, a solvent extraction process is used. Opium poppy contains at least 50 different alkaloids, but most of them are of very low concentration. Morphine is the principal alkaloid in raw opium and constitutes roughly 8–19% of opium by dry weight (depending on growing conditions). Some purpose-developed strains of poppy now produce opium that is up to 26% morphine by weight. A rough rule of thumb to determine the morphine content of pulverised dried poppy straw is to divide the percentage expected for the strain or crop via the latex method by eight, or by an empirically determined factor, which is often in the range of 5 to 15. The Norman strain of P. somniferum, also developed in Tasmania, produces as little as 0.04% morphine but much higher amounts of thebaine and oripavine, which can be used to synthesise semi-synthetic opioids as well as other drugs like stimulants, emetics, opioid antagonists, anticholinergics, and smooth-muscle agents. In the 1950s and 1960s, Hungary supplied nearly 60% of Europe's total medication-purpose morphine production. To this day, poppy farming is legal in Hungary, but poppy farms are limited in size by law. It is also legal to sell dried poppies in flower shops for use in floral arrangements. It was announced in 1973 that a team at the National Institutes of Health in the United States had developed a method for total synthesis of morphine, codeine, and thebaine using coal tar as a starting material.
A shortage in codeine-hydrocodone class cough suppressants (all of which can be made from morphine in one or more steps, as well as from codeine or thebaine) was the initial reason for the research. Most morphine produced for pharmaceutical use around the world is converted into codeine as the concentration of the latter in both raw opium and poppy straw is much lower than that of morphine; in most countries, the usage of codeine (both as end-product and precursor) is at least equal or greater than that of morphine on a weight basis. Chemical synthesis The first morphine total synthesis, devised by Marshall D. Gates, Jr. in 1952, remains a widely used example of total synthesis. Several other syntheses were reported, notably by the research groups of Rice, Evans, Fuchs, Parker, Overman, Mulzer-Trauner, White, Taber, Trost, Fukuyama, Guillou, and Stork. Because of the stereochemical complexity and consequent synthetic challenge presented by this polycyclic structure, Michael Freemantle has expressed the view that it is "highly unlikely" that a chemical synthesis will ever be cost-effective such that it could compete with the cost of producing morphine from the opium poppy. GMO synthesis Research Thebaine has been produced by GMO E. coli. Precursor to other opioids Pharmaceutical Morphine is a precursor in the manufacture of several opioids such as dihydromorphine, hydromorphone, hydrocodone, and oxycodone as well as codeine, which itself has a large family of semi-synthetic derivatives. Illicit Illicit morphine is produced, though rarely, from codeine found in over-the-counter cough and pain medicines. Another illicit source is morphine extracted from extended-release morphine products. Chemical reactions can then be used to convert morphine, dihydromorphine, and hydrocodone into heroin or other opioids [e.g., diacetyldihydromorphine (Paralaudin), and thebacon]. Other clandestine conversions—of morphine, into ketones of the hydromorphone class, or other derivatives like dihydromorphine (Paramorfan), desomorphine (Permonid), metopon, etc., and of codeine into hydrocodone (Dicodid), dihydrocodeine (Paracodin), etc. —require greater expertise, and types and quantities of chemicals and equipment that are more difficult to source, and so are more rarely used, illicitly (but cases have been recorded). History The earliest known reference to morphine can be traced back to Theophrastus in the 3rd century BC, however, possible references to morphine may go as far back as 2100 BC as Sumerian clay tablets which records lists of medical prescriptions including opium-based cures. An opium-based elixir has been ascribed to alchemists of Byzantine times, but the specific formula was lost during the Ottoman conquest of Constantinople (Istanbul). Around 1522, Paracelsus made reference to an opium-based elixir that he called laudanum from the Latin word laudāre, meaning "to praise". He described it as a potent painkiller but recommended that it be used sparingly. The recipe given differs substantially from that of modern-day laudanum. Morphine was discovered as the first active alkaloid extracted from the opium poppy plant in December 1804 in Paderborn by German pharmacist Friedrich Sertürner. In 1817, Sertürner reported experiments in which he administered morphine to himself, three young boys, three dogs, and a mouse; all four people almost died. Sertürner originally named the substance morphium after the Greek god of dreams, Morpheus, as it has a tendency to cause sleep. 
Sertürner's morphium was six times stronger than opium. He hypothesized that, because lower doses of the drug were needed, it would be less addictive. However, Sertürner became addicted to the drug, warning that "I consider it my duty to attract attention to the terrible effects of this new substance I called morphium in order that calamity may be averted." The drug was first marketed to the general public by Sertürner and Company in 1817 as a pain medication, and also as a treatment for opium and alcohol addiction. It was first used as a poison in 1822 when Edme Castaing of France was convicted of murdering a patient. Commercial production began in Darmstadt, Germany, in 1827 by the pharmacy that became the pharmaceutical company Merck, with morphine sales being a large part of their early growth. In the 1850s, Alexander Wood reported that he had injected morphine into his wife Rebecca as an experiment; the myth goes that this killed her because of respiratory depression, but she outlived her husband by ten years. Later it was found that morphine was more addictive than either alcohol or opium, and its extensive use during the American Civil War allegedly resulted in over 400,000 people with the "soldier's disease" of morphine addiction. This idea has been a subject of controversy, as there have been suggestions that such a disease was in fact a fabrication; the first documented use of the phrase "soldier's disease" was in 1915. Diacetylmorphine (better known as heroin) was synthesized from morphine in 1874 and brought to market by Bayer in 1898. Heroin is approximately 1.5 to 2 times more potent than morphine weight for weight. Due to the lipid solubility of diacetylmorphine, it can cross the blood–brain barrier faster than morphine, subsequently increasing the reinforcing component of addiction. Using a variety of subjective and objective measures, one study estimated the relative potency of heroin to morphine administered intravenously to post-addicts to be 1.80–2.66 mg of morphine sulfate to 1 mg of diamorphine hydrochloride (heroin). Morphine became a controlled substance in the US under the Harrison Narcotics Tax Act of 1914, and possession without a prescription in the US is a criminal offense. Morphine was the most commonly abused narcotic analgesic in the world until heroin was synthesized and came into use. In general, until the synthesis of dihydromorphine (), the dihydromorphinone class of opioids (1920s), and oxycodone (1916) and similar drugs, there were no other drugs in the same efficacy range as opium, morphine, and heroin, with synthetics still several years away (pethidine was invented in Germany in 1937) and opioid agonists among the semi-synthetics were analogues and derivatives of codeine such as dihydrocodeine (Paracodin), ethylmorphine (Dionine), and benzylmorphine (Peronine). Even today, morphine is the most sought-after prescription narcotic by heroin addicts when heroin is scarce, all other things being equal; local conditions and user preference may cause hydromorphone, oxymorphone, high-dose oxycodone, or methadone as well as dextromoramide in specific instances such as 1970s Australia, to top that particular list. The stop-gap drugs used by the largest absolute number of heroin addicts is probably codeine, with significant use also of dihydrocodeine, poppy straw derivatives like poppy pod and poppy seed tea, propoxyphene, and tramadol. The structural formula of morphine was determined by 1925 by Robert Robinson. 
At least three methods of total synthesis of morphine from starting materials such as coal tar and petroleum distillates have been patented, the first of which was announced in 1952, by Marshall D. Gates, Jr. at the University of Rochester. Still, the vast majority of morphine is derived from the opium poppy by either the traditional method of gathering latex from the scored, unripe pods of the poppy, or processes using poppy straw, the dried pods and stems of the plant, the most widespread of which was invented in Hungary in 1925 and announced in 1930 by Hungarian pharmacologist János Kabay. In 2003, there was a discovery of endogenous morphine occurring naturally in the human body. Thirty years of speculation were made on this subject because there was a receptor that, it appeared, reacted only to morphine: the μ3-opioid receptor in human tissue. Human cells that form in reaction to cancerous neuroblastoma cells have been found to contain trace amounts of endogenous morphine. Society and culture Legal status In Australia, morphine is classified as a Schedule 8 drug under the variously titled State and Territory Poisons Acts. In Canada, morphine is classified as a Schedule I drug under the Controlled Drugs and Substances Act. In France, morphine is in the strictest schedule of controlled substances, based upon the December 1970 French controlled substances law. In Germany, morphine is a verkehrsfähiges und verschreibungsfähiges Betäubungsmittel listed under Anlage III (the equivalent of CSA Schedule II) of the Betäubungsmittelgesetz. In Switzerland, morphine is scheduled similarly to Germany's legal classification of the drug. In Japan, morphine is classified as a narcotic under the Narcotics and Psychotropics Control Act (, mayaku oyobi kōseishinyaku torishimarihō). In the Netherlands, morphine is classified as a List 1 drug under the Opium Law. In New Zealand, morphine is classified as a Class B drug under the Misuse of Drugs Act 1975. In the United Kingdom, morphine is listed as a Class A drug under the Misuse of Drugs Act 1971 and a Schedule 2 Controlled Drug under the Misuse of Drugs Regulations 2001. In the United States, morphine is classified as a Schedule II controlled substance under the Controlled Substances Act under main Administrative Controlled Substances Code Number 9300. Morphine pharmaceuticals are subject to annual manufacturing quotas; in 2017 these quotas were 35.0 tonnes of production for sale, and 27.3 tonnes of production as an intermediate, or chemical precursor, for conversion into other drugs. Morphine produced for use in extremely dilute formulations is excluded from the manufacturing quota. Internationally (UN), morphine is a Schedule I drug under the Single Convention on Narcotic Drugs. Non-medical use The euphoria, comprehensive alleviation of distress and therefore all aspects of suffering, promotion of sociability and empathy, "body high", and anxiolysis provided by narcotic drugs including opioids can cause the use of high doses in the absence of pain for a protracted period, which can impart a craving for the drug in the user. As the prototype of the entire opioid class of drugs, morphine has properties that may lead to its misuse. Morphine addiction is the model upon which the current perception of addiction is based. 
Animal and human studies and clinical experience back up the contention that morphine is one of the most euphoric drugs known, and via all but the IV route heroin and morphine cannot be distinguished according to studies because heroin is a prodrug for the delivery of systemic morphine. Chemical changes to the morphine molecule yield other euphorigenics such as dihydromorphine, hydromorphone (Dilaudid, Hydal), and oxymorphone (Numorphan, Opana), as well as the latter three's methylated equivalents dihydrocodeine, hydrocodone, and oxycodone, respectively; in addition to heroin, there are dipropanoylmorphine, diacetyldihydromorphine, and other members of the 3,6 morphine diester category like nicomorphine and other similar semi-synthetic opiates like desomorphine, hydromorphinol, etc. used clinically in many countries of the world but also produced illicitly in rare instances. In general, non-medical use of morphine entails taking more than prescribed or outside of medical supervision, injecting oral formulations, mixing it with unapproved potentiators such as alcohol, cocaine, and the like, or defeating the extended-release mechanism by chewing the tablets or turning into a powder for snorting or preparing injectables. The latter method can be as time-consuming and involved as traditional methods of smoking opium. This and the fact that the liver destroys a large percentage of the drug on the first pass impacts the demand side of the equation for clandestine re-sellers, as many customers are not needle users and may have been disappointed with ingesting the drug orally. As morphine is generally as hard or harder to divert than oxycodone in a lot of cases, morphine in any form is uncommon on the street, although ampoules and phials of morphine injection, pure pharmaceutical morphine powder, and soluble multi-purpose tablets are very popular where available. Morphine is also available in a paste that is used in the production of heroin, which can be smoked by itself or turned into a soluble salt and injected; the same goes for the penultimate products of the Kompot (Polish Heroin) and black tar processes. Poppy straw as well as opium can yield morphine of purity levels ranging from poppy tea to near-pharmaceutical-grade morphine by itself or with all of the more than 50 other alkaloids. It also is the active narcotic ingredient in opium and all of its forms, derivatives, and analogues as well as forming from the breakdown of heroin and otherwise present in many batches of illicit heroin as the result of incomplete acetylation. Names Morphine is marketed under many different brand names in various parts of the world. It was formerly called Morphia in British English. Informal names for morphine include: Cube Juice, Dope, Dreamer, Emsel, First Line, God's Drug, Hard Stuff, Hocus, Hows, Lydia, Lydic, M, Miss Emma, Mister Blue, Monkey, Morf, Morph, Morphide, Morphie, Morpho, Mother, MS, Ms. Emma, Mud, New Jack Swing (if mixed with heroin), Sister, Tab, Unkie, Unkie White, and Stuff. MS Contin tablets are known as misties, and the 100 mg extended-release tablets as greys and blockbusters. The "speedball" can use morphine as the opioid component, which is combined with cocaine, amphetamines, methylphenidate, or similar drugs. "Blue Velvet" is a combination of morphine with the antihistamine tripelennamine (Pyrabenzamine, PBZ, Pelamine) taken by injection. Access in developing countries Although morphine is cheap, people in poorer countries often do not have access to it. 
According to a 2005 estimate by the International Narcotics Control Board, six countries (Australia, Canada, France, Germany, the United Kingdom, and the United States) consume 79% of the world's morphine. The less affluent countries, accounting for 80% of the world's population, consumed only about 6% of the global morphine supply. Some countries import virtually no morphine, and in others the drug is rarely available even for relieving severe pain while dying. Experts in pain management attribute the under-distribution of morphine to an unwarranted fear of the drug's potential for addiction and abuse. While morphine is clearly addictive, Western doctors believe it is worthwhile to use the drug and then wean the patient off when the treatment is over.
20616
https://en.wikipedia.org/wiki/Mechanical%20advantage
Mechanical advantage
Mechanical advantage is a measure of the force amplification achieved by using a tool, mechanical device or machine system. The device trades off input forces against movement to obtain a desired amplification in the output force. The model for this is the law of the lever. Machine components designed to manage forces and movement in this way are called mechanisms. An ideal mechanism transmits power without adding to or subtracting from it. This means the ideal machine does not include a power source, is frictionless, and is constructed from rigid bodies that do not deflect or wear. The performance of a real system relative to this ideal is expressed in terms of efficiency factors that take into account departures from the ideal. Levers The lever is a movable bar that pivots on a fulcrum attached to or positioned on or across a fixed point. The lever operates by applying forces at different distances from the fulcrum, or pivot. The location of the fulcrum determines a lever's class. Where a lever rotates continuously, it functions as a rotary second-class lever. The motion of the lever's end-point describes a fixed orbit, where mechanical energy can be exchanged (see a hand crank as an example). In modern times, this kind of rotary leverage is widely used; see a (rotary) second-class lever; see gears, pulleys or friction drive, used in a mechanical power transmission scheme. It is common for mechanical advantage to be manipulated in a 'collapsed' form, via the use of more than one gear (a gearset). In such a gearset, gears having smaller radii and less inherent mechanical advantage are used. In order to make use of non-collapsed mechanical advantage, it is necessary to use a 'true length' rotary lever. See also the incorporation of mechanical advantage into the design of certain types of electric motors; one design is an 'outrunner'. As the lever pivots on the fulcrum, points farther from this pivot move faster than points closer to the pivot. Because the ideal lever neither stores nor dissipates energy, the power into the lever equals the power out. Power is the product of force and velocity, so a force applied to a point farther from the pivot must be smaller than the force applied to a point closer in. If a and b are distances from the fulcrum to points A and B and if force FA applied to A is the input force and FB exerted at B is the output, the ratio of the velocities of points A and B is given by VA / VB = a / b, so the ratio of the output force to the input force, or mechanical advantage, is given by MA = FB / FA = VA / VB = a / b. This is the law of the lever, which Archimedes formulated using geometric reasoning. It shows that if the distance a from the fulcrum to where the input force is applied (point A) is greater than the distance b from the fulcrum to where the output force is applied (point B), then the lever amplifies the input force. If the distance from the fulcrum to the input force is less than the distance from the fulcrum to the output force, then the lever reduces the input force. To Archimedes, who recognized the profound implications and practicalities of the law of the lever, has been attributed the famous claim, "Give me a place to stand and with a lever I will move the whole world." The use of velocity in the static analysis of a lever is an application of the principle of virtual work. Speed ratio The requirement for power input to an ideal mechanism to equal power output provides a simple way to compute mechanical advantage from the input-output speed ratio of the system. 
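The law of the lever above lends itself to a short numerical illustration. The following is a minimal sketch in Python, assuming an ideal (rigid, frictionless) lever; the function names and the example numbers are illustrative and not taken from the article:

def lever_mechanical_advantage(a, b):
    # a: distance from the fulcrum to the input force (point A)
    # b: distance from the fulcrum to the output force (point B)
    # For an ideal lever, MA = FB / FA = VA / VB = a / b.
    return a / b

def output_force(input_force, a, b):
    # Power balance FA * VA = FB * VB gives FB = FA * (a / b).
    return input_force * lever_mechanical_advantage(a, b)

# Example: input applied 0.9 m from the fulcrum, load 0.1 m from it.
print(lever_mechanical_advantage(0.9, 0.1))  # 9.0
print(output_force(100.0, 0.9, 0.1))         # 900.0

With a greater than b the lever amplifies the input force, matching the statement of the law above.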
The power input to a gear train with a torque TA applied to the drive pulley which rotates at an angular velocity of ωA is P = TAωA. Because the power flow is constant, the torque TB and angular velocity ωB of the output gear must satisfy the relation TAωA = TBωB, which yields MA = TB / TA = ωA / ωB. This shows that for an ideal mechanism the input-output speed ratio equals the mechanical advantage of the system. This applies to all mechanical systems ranging from robots to linkages. Gear trains Gear teeth are designed so that the number of teeth on a gear is proportional to the radius of its pitch circle, and so that the pitch circles of meshing gears roll on each other without slipping. The speed ratio for a pair of meshing gears can be computed from the ratio of the radii of the pitch circles and the ratio of the number of teeth on each gear, its gear ratio. The velocity v of the point of contact on the pitch circles is the same on both gears, and is given by v = rAωA = rBωB, where input gear A has radius rA and meshes with output gear B of radius rB; therefore, ωA / ωB = rB / rA = NB / NA, where NA is the number of teeth on the input gear and NB is the number of teeth on the output gear. The mechanical advantage of a pair of meshing gears for which the input gear has NA teeth and the output gear has NB teeth is given by MA = TB / TA = NB / NA. This shows that if the output gear GB has more teeth than the input gear GA, then the gear train amplifies the input torque. And, if the output gear has fewer teeth than the input gear, then the gear train reduces the input torque. If the output gear of a gear train rotates more slowly than the input gear, then the gear train is called a speed reducer (force multiplier). In this case, because the output gear must have more teeth than the input gear, the speed reducer will amplify the input torque. Chain and belt drives Mechanisms consisting of two sprockets connected by a chain, or two pulleys connected by a belt, are designed to provide a specific mechanical advantage in power transmission systems. The velocity v of the chain or belt is the same when in contact with the two sprockets or pulleys: v = rAωA = rBωB, where the input sprocket or pulley A meshes with the chain or belt along the pitch radius rA and the output sprocket or pulley B meshes with this chain or belt along the pitch radius rB; therefore ωA / ωB = rB / rA = NB / NA, where NA is the number of teeth on the input sprocket and NB is the number of teeth on the output sprocket. For a toothed belt drive, the number of teeth on the sprocket can be used. For friction belt drives the pitch radius of the input and output pulleys must be used. The mechanical advantage of a chain drive or toothed belt drive with an input sprocket of NA teeth and an output sprocket of NB teeth is given by MA = TB / TA = NB / NA. The mechanical advantage for friction belt drives is given by MA = TB / TA = rB / rA. Chains and belts dissipate power through friction, stretch and wear, so the power output is actually less than the power input, and the mechanical advantage of the real system will be less than that calculated for an ideal mechanism. A chain or belt drive can lose as much as 5% of the power through the system in friction heat, deformation and wear, in which case the efficiency of the drive is 95%. Example: bicycle chain drive Consider the 18-speed bicycle with 7 in (radius) cranks and 26 in (diameter) wheels. 
If the sprockets at the crank and at the rear drive wheel are the same size, then the ratio of the output force on the tire to the input force on the pedal can be calculated from the law of the lever to be Now, assume that the front sprockets have a choice of 28 and 52 teeth, and that the rear sprockets have a choice of 16 and 32 teeth. Using different combinations, we can compute the following speed ratios between the front and rear sprockets The ratio of the force driving the bicycle to the force on the pedal, which is the total mechanical advantage of the bicycle, is the product of the speed ratio (or teeth ratio of output sprocket/input sprocket) and the crank-wheel lever ratio. Notice that in every case the force on the pedals is greater than the force driving the bicycle forward (in the illustration above, the corresponding backward-directed reaction force on the ground is indicated). Block and tackle A block and tackle is an assembly of a rope and pulleys that is used to lift loads. A number of pulleys are assembled together to form the blocks, one that is fixed and one that moves with the load. The rope is threaded through the pulleys to provide mechanical advantage that amplifies that force applied to the rope. In order to determine the mechanical advantage of a block and tackle system consider the simple case of a gun tackle, which has a single mounted, or fixed, pulley and a single movable pulley. The rope is threaded around the fixed block and falls down to the moving block where it is threaded around the pulley and brought back up to be knotted to the fixed block. Let S be the distance from the axle of the fixed block to the end of the rope, which is A where the input force is applied. Let R be the distance from the axle of the fixed block to the axle of the moving block, which is B where the load is applied. The total length of the rope L can be written as where K is the constant length of rope that passes over the pulleys and does not change as the block and tackle moves. The velocities VA and VB of the points A and B are related by the constant length of the rope, that is or The negative sign shows that the velocity of the load is opposite to the velocity of the applied force, which means as we pull down on the rope the load moves up. Let VA be positive downwards and VB be positive upwards, so this relationship can be written as the speed ratio where 2 is the number of rope sections supporting the moving block. Let FA be the input force applied at A the end of the rope, and let FB be the force at B on the moving block. Like the velocities FA is directed downwards and FB is directed upwards. For an ideal block and tackle system there is no friction in the pulleys and no deflection or wear in the rope, which means the power input by the applied force FAVA must equal the power out acting on the load FBVB, that is The ratio of the output force to the input force is the mechanical advantage of an ideal gun tackle system, This analysis generalizes to an ideal block and tackle with a moving block supported by n rope sections, This shows that the force exerted by an ideal block and tackle is n times the input force, where n is the number of sections of rope that support the moving block. Efficiency Mechanical advantage that is computed using the assumption that no power is lost through deflection, friction and wear of a machine is the maximum performance that can be achieved. For this reason, it is often called the ideal mechanical advantage (IMA). 
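The gear, chain-drive, bicycle and block-and-tackle relations above can be collected into one small sketch. This is an illustrative Python example for the ideal (lossless) case only; the crank, wheel and sprocket numbers are those quoted in the bicycle example, and the function names are placeholders of my own:

def gear_or_chain_advantage(teeth_in, teeth_out):
    # Ideal meshing gears or chain drive: MA = TB / TA = NB / NA.
    return teeth_out / teeth_in

def bicycle_advantage(crank_length, wheel_radius, front_teeth, rear_teeth):
    # Total MA = (rear teeth / front teeth) * (crank length / wheel radius),
    # i.e. the sprocket teeth ratio times the crank-wheel lever ratio.
    return (rear_teeth / front_teeth) * (crank_length / wheel_radius)

def block_and_tackle_advantage(n):
    # Ideal block and tackle: MA = n, the number of rope sections
    # supporting the moving block (n = 2 for the gun tackle).
    return n

print(gear_or_chain_advantage(16, 32))             # 2.0
print(round(bicycle_advantage(7, 13, 52, 16), 3))  # about 0.166
print(block_and_tackle_advantage(2))               # 2

The bicycle value below 1 reflects the observation above that the force on the pedals is greater than the force driving the bicycle forward.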
In operation, deflection, friction and wear will reduce the mechanical advantage. The amount of this reduction from the ideal to the actual mechanical advantage (AMA) is defined by a factor called efficiency, a quantity which is determined by experimentation. As an example, using a block and tackle with six rope sections to lift a 600-pound load, the operator of an ideal system would be required to pull the rope six feet and exert 100 pounds of force to lift the load one foot. Both the ratios Fout / Fin and Vin / Vout show that the IMA is six. For the first ratio, 100 pounds of force input results in 600 pounds of force out. In an actual system, the force out would be less than 600 pounds due to friction in the pulleys. The second ratio also yields an MA of 6 in the ideal case but a smaller value in the practical scenario; it does not properly account for energy losses such as rope stretch. Subtracting those losses from the IMA or using the first ratio yields the AMA. Ideal mechanical advantage The ideal mechanical advantage (IMA), or theoretical mechanical advantage, is the mechanical advantage of a device with the assumption that its components do not flex, there is no friction, and there is no wear. It is calculated using the physical dimensions of the device and defines the maximum performance the device can achieve. The assumptions of an ideal machine are equivalent to the requirement that the machine does not store or dissipate energy; the power into the machine thus equals the power out. Therefore, the power P is constant through the machine, and force times velocity into the machine equals the force times velocity out; that is, Fin Vin = P = Fout Vout. The ideal mechanical advantage is the ratio of the force out of the machine (load) to the force into the machine (effort), or IMA = Fout / Fin. Applying the constant power relationship yields a formula for this ideal mechanical advantage in terms of the speed ratio: IMA = Fout / Fin = Vin / Vout. The speed ratio of a machine can be calculated from its physical dimensions. The assumption of constant power thus allows use of the speed ratio to determine the maximum value for the mechanical advantage. Actual mechanical advantage The actual mechanical advantage (AMA) is the mechanical advantage determined by physical measurement of the input and output forces. Actual mechanical advantage takes into account energy loss due to deflection, friction, and wear. The AMA of a machine is calculated as the ratio of the measured force output to the measured force input, AMA = Fout / Fin, where the input and output forces are determined experimentally. The ratio of the experimentally determined mechanical advantage to the ideal mechanical advantage is the mechanical efficiency η of the machine, η = AMA / IMA.
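As a worked illustration of the efficiency relation just given, here is a small Python sketch; the measured output figure of 520 pounds is an invented value used only to show the calculation:

def ideal_mechanical_advantage(v_in, v_out):
    # IMA = Fout / Fin = Vin / Vout for an ideal machine.
    return v_in / v_out

def actual_mechanical_advantage(f_out, f_in):
    # AMA uses measured forces, so it already includes losses.
    return f_out / f_in

def efficiency(ama, ima):
    # Mechanical efficiency is the ratio of actual to ideal advantage.
    return ama / ima

ima = ideal_mechanical_advantage(6.0, 1.0)       # six rope sections: IMA = 6
ama = actual_mechanical_advantage(520.0, 100.0)  # measured: 520 lb out per 100 lb in
print(ima, ama, round(efficiency(ama, ima), 3))  # 6.0 5.2 0.867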
20621
https://en.wikipedia.org/wiki/Microtubule
Microtubule
Microtubules are polymers of tubulin that form part of the cytoskeleton and provide structure and shape to eukaryotic cells. Microtubules can be as long as 50 micrometres, as wide as 23 to 27 nm and have an inner diameter between 11 and 15 nm. They are formed by the polymerization of a dimer of two globular proteins, alpha and beta tubulin into protofilaments that can then associate laterally to form a hollow tube, the microtubule. The most common form of a microtubule consists of 13 protofilaments in the tubular arrangement. Microtubules play an important role in a number of cellular processes. They are involved in maintaining the structure of the cell and, together with microfilaments and intermediate filaments, they form the cytoskeleton. They also make up the internal structure of cilia and flagella. They provide platforms for intracellular transport and are involved in a variety of cellular processes, including the movement of secretory vesicles, organelles, and intracellular macromolecular assemblies. They are also involved in cell division (by mitosis and meiosis) and are the main constituents of mitotic spindles, which are used to pull eukaryotic chromosomes apart. Microtubules are nucleated and organized by microtubule-organizing centres, such as the centrosome found in the center of many animal cells or the basal bodies of cilia and flagella, or the spindle pole bodies found in most fungi. There are many proteins that bind to microtubules, including the motor proteins dynein and kinesin, microtubule-severing proteins like katanin, and other proteins important for regulating microtubule dynamics. Recently an actin-like protein has been found in the gram-positive bacterium Bacillus thuringiensis, which forms a microtubule-like structure called a nanotubule, involved in plasmid segregation. Other bacterial microtubules have a ring of five protofilaments. History Tubulin and microtubule-mediated processes, like cell locomotion, were seen by early microscopists, like Leeuwenhoek (1677). However, the fibrous nature of flagella and other structures were discovered two centuries later, with improved light microscopes, and confirmed in the 20th century with the electron microscope and biochemical studies. In vitro assays for microtubule motor proteins such as dynein and kinesin are researched by fluorescently tagging a microtubule and fixing either the microtubule or motor proteins to a microscope slide, then visualizing the slide with video-enhanced microscopy to record the travel of the motor proteins. This allows the movement of the motor proteins along the microtubule or the microtubule moving across the motor proteins. Consequently, some microtubule processes can be determined by kymograph. Structure In eukaryotes, microtubules are long, hollow cylinders made up of polymerized α- and β-tubulin dimers. The inner space of the hollow microtubule cylinders is referred to as the lumen. The α and β-tubulin subunits are ~50% identical at the amino acid level, and both have a molecular weight of approximately 50 kDa. These α/β-tubulin dimers polymerize end-to-end into linear protofilaments that associate laterally to form a single microtubule, which can then be extended by the addition of more α/β-tubulin dimers. Typically, microtubules are formed by the parallel association of thirteen protofilaments, although microtubules composed of fewer or more protofilaments have been observed in various species  as well as in vitro. 
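The dimensions quoted above allow a rough back-of-the-envelope estimate of how many tubulin dimers a microtubule contains. The Python sketch below assumes 13 protofilaments and an axial repeat of about 8 nm per dimer; the 8 nm figure is a commonly cited approximation and is not stated in the text above:

def tubulin_dimers(length_um, protofilaments=13, dimer_repeat_nm=8.0):
    # Dimers per protofilament times the number of protofilaments.
    length_nm = length_um * 1000.0
    return protofilaments * (length_nm / dimer_repeat_nm)

# A 10-micrometre microtubule with 13 protofilaments:
print(int(tubulin_dimers(10)))  # 16250 dimers, i.e. about 32500 tubulin monomers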
Microtubules have a distinct polarity that is critical for their biological function. Tubulin polymerizes end to end, with the β-subunits of one tubulin dimer contacting the α-subunits of the next dimer. Therefore, in a protofilament, one end will have the α-subunits exposed while the other end will have the β-subunits exposed. These ends are designated the (−) and (+) ends, respectively. The protofilaments bundle parallel to one another with the same polarity, so, in a microtubule, there is one end, the (+) end, with only β-subunits exposed, while the other end, the (−) end, has only α-subunits exposed. While microtubule elongation can occur at both the (+) and (−) ends, it is significantly more rapid at the (+) end. The lateral association of the protofilaments generates a pseudo-helical structure, with one turn of the helix containing 13 tubulin dimers, each from a different protofilament. In the most common "13-3" architecture, the 13th tubulin dimer interacts with the next tubulin dimer with a vertical offset of 3 tubulin monomers due to the helicity of the turn. There are other alternative architectures, such as 11-3, 12-3, 14-3, 15-4, or 16-4, that have been detected at a much lower occurrence. Microtubules can also morph into other forms such as helical filaments, which are observed in protist organisms like foraminifera. There are two distinct types of interactions that can occur between the subunits of lateral protofilaments within the microtubule, called the A-type and B-type lattices. In the A-type lattice, the lateral associations of protofilaments occur between adjacent α and β-tubulin subunits (i.e. an α-tubulin subunit from one protofilament interacts with a β-tubulin subunit from an adjacent protofilament). In the B-type lattice, the α and β-tubulin subunits from one protofilament interact with the α and β-tubulin subunits from an adjacent protofilament, respectively. Experimental studies have shown that the B-type lattice is the primary arrangement within microtubules. However, in most microtubules there is a seam in which tubulin subunits interact α-β. The sequence of events during microtubule formation can thus be summarised as follows: a β-tubulin associates non-covalently with an α-tubulin, and the connected pair is a heterodimer, since it consists of two different polypeptides (β-tubulin and α-tubulin). Once formed, the heterodimers join end to end, all oriented in the same direction, into long chains called protofilaments. The protofilaments then gradually accumulate next to each other so that a tube-like structure with a central lumen is formed; most commonly, 13 protofilaments make up the outer wall of the microtubule. Each heterodimer has a negative and a positive end, with alpha-tubulin forming the negative end and beta-tubulin the positive end; because the heterodimers are stacked head to tail, the microtubule as a whole likewise has a negative and a positive end. Microtubules grow by the addition of heterodimers at the plus end. Some species of Prosthecobacter also contain microtubules. The structure of these bacterial microtubules is similar to that of eukaryotic microtubules, consisting of a hollow tube of protofilaments assembled from heterodimers of bacterial tubulin A (BtubA) and bacterial tubulin B (BtubB). Both BtubA and BtubB share features of both α- and β-tubulin. 
Unlike eukaryotic microtubules, bacterial microtubules do not require chaperones to fold. In contrast to the 13 protofilaments of eukaryotic microtubules, bacterial microtubules comprise only five. Intracellular organization Microtubules are part of the cytoskeleton, a structural network within the cell's cytoplasm. The roles of the microtubule cytoskeleton include mechanical support, organization of the cytoplasm, transport, motility and chromosome segregation. In developing neurons microtubules are known as neurotubules, and they can modulate the dynamics of actin, another component of the cytoskeleton. A microtubule is capable of growing and shrinking in order to generate force, and there are motor proteins such as kinesins and dynein that allow organelles and other cellular components (such as mRNA) to be carried along a microtubule, using specific adaptor proteins . This combination of roles makes microtubules important for organizing and moving intracellular constituents/cargo. The organization of microtubules in the cell is cell-type specific. In epithelia, the minus-ends of the microtubule polymer are anchored near the site of cell-cell contact and organized along the apical-basal axis. After nucleation, the minus-ends are released and then re-anchored in the periphery by factors such as ninein and PLEKHA7. In this manner, they can facilitate the transport of proteins, vesicles and organelles along the apical-basal axis of the cell. In fibroblasts and other mesenchymal cell-types, microtubules are anchored at the centrosome and radiate with their plus-ends outwards towards the cell periphery (as shown in the first figure). In these cells, the microtubules play important roles in cell migration. Moreover, the polarity of microtubules is acted upon by motor proteins, which organize many components of the cell, including the endoplasmic reticulum and the Golgi apparatus. Microtubule polymerization Nucleation Nucleation is the event that initiates the formation of microtubules from the tubulin dimer. Microtubules are typically nucleated and organized by organelles called microtubule-organizing centers (MTOCs). Contained within the MTOC is another type of tubulin, γ-tubulin, which is distinct from the α- and β-subunits of the microtubules themselves. The γ-tubulin combines with several other associated proteins to form a lock washer-like structure known as the "γ-tubulin ring complex" (γ-TuRC). This complex acts as a template for α/β-tubulin dimers to begin polymerization; it acts as a cap of the (−) end while microtubule growth continues away from the MTOC in the (+) direction. The centrosome is the primary MTOC of most cell types. However, microtubules can be nucleated from other sites as well. For example, cilia and flagella have MTOCs at their base termed basal bodies. In addition, work from the Kaverina group at Vanderbilt, as well as others, suggests that the Golgi apparatus can serve as an important platform for the nucleation of microtubules. Because nucleation from the centrosome is inherently symmetrical, Golgi-associated microtubule nucleation may allow the cell to establish asymmetry in the microtubule network. In recent studies, the Vale group at UCSF identified the protein complex augmin as a critical factor for centrosome-dependent, spindle-based microtubule generation. It that has been shown to interact with γ-TuRC and increase microtubule density around the mitotic spindle origin. Some cell types, such as plant cells, do not contain well defined MTOCs. 
In these cells, microtubules are nucleated from discrete sites in the cytoplasm. Other cell types, such as trypanosomatid parasites, have a MTOC but it is permanently found at the base of a flagellum. Here, nucleation of microtubules for structural roles and for generation of the mitotic spindle is not from a canonical centriole-like MTOC. Polymerization Following the initial nucleation event, tubulin monomers must be added to the growing polymer. The process of adding or removing monomers depends on the concentration of αβ-tubulin dimers in solution in relation to the critical concentration, which is the steady state concentration of dimers at which there is no longer any net assembly or disassembly at the end of the microtubule. If the dimer concentration is greater than the critical concentration, the microtubule will polymerize and grow. If the concentration is less than the critical concentration, the length of the microtubule will decrease. Microtubule dynamics Dynamic instability Dynamic instability refers to the coexistence of assembly and disassembly at the ends of a microtubule. The microtubule can dynamically switch between growing and shrinking phases in this region. Tubulin dimers can bind two molecules of GTP, one of which can be hydrolyzed subsequent to assembly. During polymerization, the tubulin dimers are in the GTP-bound state. The GTP bound to α-tubulin is stable and it plays a structural function in this bound state. However, the GTP bound to β-tubulin may be hydrolyzed to GDP shortly after assembly. The assembly properties of GDP-tubulin are different from those of GTP-tubulin, as GDP-tubulin is more prone to depolymerization. A GDP-bound tubulin subunit at the tip of a microtubule will tend to fall off, although a GDP-bound tubulin in the middle of a microtubule cannot spontaneously pop out of the polymer. Since tubulin adds onto the end of the microtubule in the GTP-bound state, a cap of GTP-bound tubulin is proposed to exist at the tip of the microtubule, protecting it from disassembly. When hydrolysis catches up to the tip of the microtubule, it begins a rapid depolymerization and shrinkage. This switch from growth to shrinking is called a catastrophe. GTP-bound tubulin can begin adding to the tip of the microtubule again, providing a new cap and protecting the microtubule from shrinking. This is referred to as "rescue". "Search and capture" model In 1986, Marc Kirschner and Tim Mitchison proposed that microtubules use their dynamic properties of growth and shrinkage at their plus ends to probe the three dimensional space of the cell. Plus ends that encounter kinetochores or sites of polarity become captured and no longer display growth or shrinkage. In contrast to normal dynamic microtubules, which have a half-life of 5–10 minutes, the captured microtubules can last for hours. This idea is commonly known as the "search and capture" model. Indeed, work since then has largely validated this idea. At the kinetochore, a variety of complexes have been shown to capture microtubule (+)-ends. Moreover, a (+)-end capping activity for interphase microtubules has also been described. This later activity is mediated by formins, the adenomatous polyposis coli protein, and EB1, a protein that tracks along the growing plus ends of microtubules. Regulation of microtubule dynamics Post-translational modifications Although most microtubules have a half-life of 5–10 minutes, certain microtubules can remain stable for hours. 
These stabilized microtubules accumulate post-translational modifications on their tubulin subunits by the action of microtubule-bound enzymes. However, once the microtubule depolymerizes, most of these modifications are rapidly reversed by soluble enzymes. Since most modification reactions are slow while their reverse reactions are rapid, modified tubulin is only detected on long-lived stable microtubules. Most of these modifications occur on the C-terminal region of alpha-tubulin. This region, which is rich in negatively charged glutamate, forms relatively unstructured tails that project out from the microtubule and form contacts with motors. Thus, it is believed that tubulin modifications regulate the interaction of motors with the microtubule. Since these stable modified microtubules are typically oriented towards the site of cell polarity in interphase cells, this subset of modified microtubules provide a specialized route that helps deliver vesicles to these polarized zones. These modifications include: Detyrosination: the removal of the C-terminal tyrosine from alpha-tubulin. This reaction exposes a glutamate at the new C-terminus. As a result, microtubules that accumulate this modification are often referred to as Glu-microtubules. Although the tubulin carboxypeptidase has yet to be identified, the tubulin—tyrosine ligase (TTL) is known. Delta2: the removal of the last two residues from the C-terminus of alpha-tubulin. Unlike detyrosination, this reaction is thought to be irreversible and has only been documented in neurons. Acetylation: the addition of an acetyl group to lysine 40 of alpha-tubulin. This modification occurs on a lysine that is accessible only from the inside of the microtubule, and it remains unclear how enzymes access the lysine residue. The nature of the tubulin acetyltransferase remains controversial, but it has been found that in mammals the major acetyltransferase is ATAT1. however, the reverse reaction is known to be catalyzed by HDAC6. The real impact of acetylation in the structure and function of microtubules remains elusive. Polyglutamylation: the addition of a glutamate polymer (typically 4-6 residues long) to the gamma-carboxyl group of any one of five glutamates found near the end of alpha-tubulin. Enzymes related to TTL add the initial branching glutamate (TTL4,5 and 7), while other enzymes that belong to the same family lengthen the polyglutamate chain (TTL6,11 and 13). Polyglycylation: the addition of a glycine polymer (2-10 residues long) to the gamma-carboxyl group of any one of five glutamates found near the end of beta-tubulin. TTL3 and 8 add the initial branching glycine, while TTL10 lengthens the polyglycine chain. Tubulin is also known to be phosphorylated, ubiquitinated, sumoylated, and palmitoylated. Tubulin-binding drugs and chemical effects A wide variety of drugs are able to bind to tubulin and modify its assembly properties. These drugs can have an effect at intracellular concentrations much lower than that of tubulin. This interference with microtubule dynamics can have the effect of stopping a cell's cell cycle and can lead to programmed cell death or apoptosis. However, there are data to suggest that interference of microtubule dynamics is insufficient to block the cells undergoing mitosis. These studies have demonstrated that suppression of dynamics occurs at concentrations lower than those needed to block mitosis. 
Suppression of microtubule dynamics by tubulin mutations or by drug treatment have been shown to inhibit cell migration. Both microtubule stabilizers and destabilizers can suppress microtubule dynamics. The drugs that can alter microtubule dynamics include: The cancer-fighting taxane class of drugs (paclitaxel (taxol) and docetaxel) block dynamic instability by stabilizing GDP-bound tubulin in the microtubule. Thus, even when hydrolysis of GTP reaches the tip of the microtubule, there is no depolymerization and the microtubule does not shrink back. Taxanes (alone or in combination with platinum derivatives (carboplatine) or gemcitabine) are used against breast and gynecological malignancies, squamous-cell carcinomas (head-and-neck cancers, some lung cancers), etc. The epothilones, e.g. Ixabepilone, work in a similar way to the taxanes. Vinorelbine, Nocodazole, vincristine, and colchicine have the opposite effect, blocking the polymerization of tubulin into microtubules. Eribulin binds to the (+) growing end of the microtubules. Eribulin exerts its anticancer effects by triggering apoptosis of cancer cells following prolonged and irreversible mitotic blockade. Expression of β3-tubulin has been reported to alter cellular responses to drug-induced suppression of microtubule dynamics. In general the dynamics are normally suppressed by low, subtoxic concentrations of microtubule drugs that also inhibit cell migration. However, incorporating β3-tubulin into microtubules increases the concentration of drug that is needed to suppress dynamics and inhibit cell migration. Thus, tumors that express β3-tubulin are not only resistant to the cytotoxic effects of microtubule targeted drugs, but also to their ability to suppress tumor metastasis. Moreover, expression of β3-tubulin also counteracts the ability of these drugs to inhibit angiogenesis which is normally another important facet of their action. Microtubule polymers are extremely sensitive to various environmental effects. Very low levels of free calcium can destabilize microtubules and this prevented early researchers from studying the polymer in vitro. Cold temperatures also cause rapid depolymerization of microtubules. In contrast, heavy water promotes microtubule polymer stability. Proteins that interact with microtubules Microtubule-associated proteins (MAPs) MAPs have been shown to play a crucial role in the regulation of microtubule dynamics in-vivo. The rates of microtubule polymerization, depolymerization, and catastrophe vary depending on which microtubule-associated proteins (MAPs) are present. The originally identified MAPs from brain tissue can be classified into two groups based on their molecular weight. This first class comprises MAPs with a molecular weight below 55-62 kDa, and are called τ (tau) proteins. In-vitro, tau proteins have been shown to directly bind microtubules, promote nucleation and prevent disassembly, and to induce the formation of parallel arrays. Additionally, tau proteins have also been shown to stabilize microtubules in axons and have been implicated in Alzheimer's disease. The second class is composed of MAPs with a molecular weight of 200-1000 kDa, of which there are four known types: MAP-1, MAP-2, MAP-3 and MAP-4. MAP-1 proteins consists of a set of three different proteins: A, B and C. The C protein plays an important role in the retrograde transport of vesicles and is also known as cytoplasmic dynein. 
MAP-2 proteins are located in the dendrites and in the body of neurons, where they bind with other cytoskeletal filaments. The MAP-4 proteins are found in the majority of cells and stabilize microtubules. In addition to MAPs that have a stabilizing effect on microtubule structure, other MAPs can have a destabilizing effect either by cleaving or by inducing depolymerization of microtubules. Three proteins called katanin, spastin, and fidgetin have been observed to regulate the number and length of microtubules via their destabilizing activities. Furthermore, CRACD-like protein is predicted to be localized to the microtubules. MAPs are determinants of different cytoskeletal forms of axons and dendrites, with microtubules being farther apart in the dendrites Plus-end tracking proteins (+TIPs) Plus end tracking proteins are MAP proteins which bind to the tips of growing microtubules and play an important role in regulating microtubule dynamics. For example, +TIPs have been observed to participate in the interactions of microtubules with chromosomes during mitosis. The first MAP to be identified as a +TIP was CLIP170 (cytoplasmic linker protein), which has been shown to play a role in microtubule depolymerization rescue events. Additional examples of +TIPs include EB1, EB2, EB3, p150Glued, Dynamitin, Lis1, CLIP115, CLASP1, and CLASP2. Motor proteins Microtubules can act as substrates for motor proteins that are involved in important cellular functions such as vesicle trafficking and cell division. Unlike other microtubule-associated proteins, motor proteins utilize the energy from ATP hydrolysis to generate mechanical work that moves the protein along the substrate. The major motor proteins that interact with microtubules are kinesin, which usually moves toward the (+) end of the microtubule, and dynein, which moves toward the (−) end. Dynein is composed of two identical heavy chains, which make up two large globular head domains, and a variable number of intermediate and light chains. Dynein-mediated transport takes place from the (+) end towards the (-) end of the microtubule. ATP hydrolysis occurs in the globular head domains, which share similarities with the AAA+ (ATPase associated with various cellular activities) protein family. ATP hydrolysis in these domains is coupled to movement along the microtubule via the microtubule-binding domains. Dynein transports vesicles and organelles throughout the cytoplasm. In order to do this, dynein molecules bind organelle membranes via a protein complex that contains a number of elements including dynactin. Kinesin has a similar structure to dynein. Kinesin is involved in the transport of a variety of intracellular cargoes, including vesicles, organelles, protein complexes, and mRNAs toward the microtubule's (+) end. Some viruses (including retroviruses, herpesviruses, parvoviruses, and adenoviruses) that require access to the nucleus to replicate their genomes attach to motor proteins. Mitosis Centrosomes The centrosome is the main MTOC (microtubule organizing center) of the cell during mitosis. Each centrosome is made up of two cylinders called centrioles, oriented at right angles to each other. The centriole is formed from 9 main microtubules, each having two partial microtubules attached to it. Each centriole is approximately 400 nm long and around 200 nm in circumference. The centrosome is critical to mitosis as most microtubules involved in the process originate from the centrosome. 
The minus ends of each microtubule begin at the centrosome, while the plus ends radiate out in all directions. Thus the centrosome is also important in maintaining the polarity of microtubules during mitosis. Most cells only have one centrosome for most of their cell cycle, however, right before mitosis, the centrosome duplicates, and the cell contains two centrosomes. Some of the microtubules that radiate from the centrosome grow directly away from the sister centrosome. These microtubules are called astral microtubules. With the help of these astral microtubules the centrosomes move away from each other towards opposite sides of the cell. Once there, other types of microtubules necessary for mitosis, including interpolar microtubules and K-fibers can begin to form. A final important note about the centrosomes and microtubules during mitosis is that while the centrosome is the MTOC for the microtubules necessary for mitosis, research has shown that once the microtubules themselves are formed and in the correct place the centrosomes themselves are not needed for mitosis to occur. Microtubule subclasses Astral microtubules are a subclass of microtubules which only exist during and around mitosis. They originate from the centrosome, but do not interact with the chromosomes, kinetochores, or with the microtubules originating from the other centrosome. Instead their microtubules radiate towards the cell membrane. Once there they interact with specific motor proteins which create force that pull the microtubules, and thus the entire centrosome towards the cell membrane. As stated above, this helps the centrosomes orient themselves away from each other in the cell. However these astral microtubules do not interact with the mitotic spindle itself. Experiments have shown that without these astral microtubules, the mitotic spindle can form, however its orientation in the cell is not always correct and thus mitosis does not occur as effectively. Another key function of the astral microtubules is to aid in cytokinesis. Astral microtubules interact with motor proteins at the cell membrane to pull the spindle and the entire cell apart once the chromosomes have been replicated. Interpolar/Polar microtubules are a class of microtubules which also radiate out from the centrosome during mitosis. These microtubules radiate towards the mitotic spindle, unlike astral microtubules. Interpolar microtubules are both the most abundant and dynamic subclass of microtubules during mitosis. Around 95 percent of microtubules in the mitotic spindle can be characterized as interpolar. Furthermore, the half life of these microtubules is extremely short as it is less than one minute. Interpolar microtubules that do not attach to the kinetochores can aid in chromosome congregation through lateral interaction with the kinetochores. K fibers/Kinetochore microtubules are the third important subclass of mitotic microtubules. These microtubules form direct connections with the kinetochores in the mitotic spindle. Each K fiber is composed of 20–40 parallel microtubules, forming a strong tube which is attached at one end to the centrosome and on the other to the kinetochore, located in the center of each chromosome. Since each centrosome has a K fiber connecting to each pair of chromosomes, the chromosomes become tethered in the middle of the mitotic spindle by the K fibers. K fibers have a much longer half life than interpolar microtubules, at between 4 and 8 minutes. 
Toward the end of mitosis, the microtubules forming each K fiber begin to dissociate, thus shortening the K fibers. As the K fibers shorten, the paired chromosomes are pulled apart right before cytokinesis. Previously, some researchers believed that K fibers form at their minus end, originating from the centrosome just like other microtubules; however, new research has pointed to a different mechanism. In this new mechanism, the K fibers are initially stabilized at their plus end by the kinetochores and grow out from there. The minus ends of these K fibers eventually connect to an existing interpolar microtubule and are thereby connected to the centrosome. Microtubule nucleation in the mitotic spindle Most of the microtubules that form the mitotic spindle originate from the centrosome. Originally it was thought that all of these microtubules originated from the centrosome via a method called search and capture, described in more detail in a section above; however, new research has shown that there are additional means of microtubule nucleation during mitosis. One of the most important of these additional means of microtubule nucleation is the RAN-GTP pathway. RAN-GTP associates with chromatin during mitosis to create a gradient that allows for local nucleation of microtubules near the chromosomes. Furthermore, a second pathway known as the augmin/HAUS complex (some organisms use the more studied augmin complex, while others such as humans use an analogous complex called HAUS) acts as an additional means of microtubule nucleation in the mitotic spindle. Functions Cell migration Microtubule plus ends are often localized to particular structures. In polarized interphase cells, microtubules are disproportionately oriented from the MTOC toward the site of polarity, such as the leading edge of migrating fibroblasts. This configuration is thought to help deliver microtubule-bound vesicles from the Golgi to the site of polarity. Dynamic instability of microtubules is also required for the migration of most mammalian cells that crawl. Dynamic microtubules regulate the levels of key G-proteins such as RhoA and Rac1, which regulate cell contractility and cell spreading. Dynamic microtubules are also required to trigger focal adhesion disassembly, which is necessary for migration. It has been found that microtubules act as "struts" that counteract the contractile forces that are needed for trailing edge retraction during cell movement. When microtubules in the trailing edge of the cell are dynamic, they are able to remodel to allow retraction. When dynamics are suppressed, microtubules cannot remodel and, therefore, oppose the contractile forces. The morphology of cells with suppressed microtubule dynamics indicates that cells can extend the front edge (polarized in the direction of movement), but have difficulty retracting their trailing edge. On the other hand, high drug concentrations, or microtubule mutations that depolymerize the microtubules, can restore cell migration but there is a loss of directionality. It can be concluded that microtubules act both to restrain cell movement and to establish directionality. Cilia and flagella Microtubules have a major structural role in eukaryotic cilia and flagella. Cilia and flagella always extend directly from an MTOC, in this case termed the basal body. 
The action of the dynein motor proteins on the various microtubule strands that run along a cilium or flagellum allows the organelle to bend and generate force for swimming, moving extracellular material, and other roles. Prokaryotes possess tubulin-like proteins, including FtsZ. However, prokaryotic flagella are entirely different in structure from eukaryotic flagella and do not contain microtubule-based structures.

Development

The cytoskeleton formed by microtubules is essential to the morphogenetic process of an organism's development. For example, a network of polarized microtubules is required within the oocyte of Drosophila melanogaster during its embryogenesis in order to establish the axis of the egg. Signals sent between the follicular cells and the oocyte (such as factors similar to epidermal growth factor) cause the reorganization of the microtubules so that their (-) ends are located in the lower part of the oocyte, polarizing the structure and leading to the appearance of an anterior-posterior axis. This involvement in the body's architecture is also seen in mammals. Another area where microtubules are essential is the development of the nervous system in higher vertebrates, where tubulin's dynamics and those of the associated proteins (such as the microtubule-associated proteins) are finely controlled during the development of the nervous system.

Gene regulation

The cellular cytoskeleton is a dynamic system that functions on many different levels: in addition to giving the cell a particular form and supporting the transport of vesicles and organelles, it can also influence gene expression. The signal transduction mechanisms involved in this communication are poorly understood. However, the relationship between the drug-mediated depolymerization of microtubules and the specific expression of transcription factors has been described, which has provided information on the differential expression of the genes depending on the presence of these factors. This communication between the cytoskeleton and the regulation of the cellular response is also related to the action of growth factors: for example, this relation exists for connective tissue growth factor.
Biology and health sciences
Cell parts
Biology
20627
https://en.wikipedia.org/wiki/Micrometre
Micrometre
The micrometre (Commonwealth English as used by the International Bureau of Weights and Measures; SI symbol: μm) or micrometer (American English), also commonly known by the non-SI term micron, is a unit of length in the International System of Units (SI) equalling 1×10⁻⁶ metre (SI standard prefix "micro-" = 10⁻⁶); that is, one millionth of a metre (or one thousandth of a millimetre, 0.001 mm, or about 0.00004 in). The nearest smaller common SI unit is the nanometre, equivalent to one thousandth of a micrometre, one millionth of a millimetre or one billionth of a metre (10⁻⁹ m). The micrometre is a common unit of measurement for wavelengths of infrared radiation as well as sizes of biological cells and bacteria, and for grading wool by the diameter of the fibres. The width of a single human hair ranges from approximately 20 to 200 μm.

Examples

Between 1 μm and 10 μm:
1–10 μm – length of a typical bacterium
3–8 μm – width of a strand of spider web silk
5 μm – length of a typical human spermatozoon's head
10 μm – size of fungal hyphae
about 10 μm – size of a fog, mist, or cloud water droplet

Between 10 μm and 100 μm:
about 10–12 μm – thickness of plastic wrap (cling wrap)
10 to 55 μm – width of wool fibre
17 to 181 μm – diameter of human hair
70 to 180 μm – thickness of paper

SI standardization

The term micron and the symbol μ were officially accepted for use in isolation to denote the micrometre in 1879, but were revoked by the International System of Units (SI) in 1967. This became necessary because the older usage was incompatible with the official adoption of the unit prefix micro-, denoted μ, during the creation of the SI in 1960. In the SI, the systematic name micrometre became the official name of the unit, and μm became the official unit symbol. In American English, the use of "micron" helps differentiate the unit from the micrometer, a measuring device, because the unit's name in mainstream American spelling is a homograph of the device's name. In spoken English, they may be distinguished by pronunciation, as the name of the measuring device is often stressed on the second syllable, whereas the systematic pronunciation of the unit name, in accordance with the convention for pronouncing SI units in English, places the stress on the first syllable. The plural of micron is normally microns, though micra was occasionally used before 1950.

Symbol

The official symbol for the SI prefix micro- is a Greek lowercase mu (μ). Unicode has inherited the micro sign (µ, U+00B5) from ISO/IEC 8859-1, distinct from the code point of the Greek small letter mu (μ, U+03BC). According to the Unicode Consortium, the Greek letter character is preferred, but implementations must recognize the micro sign as well for compatibility with legacy character sets. Most fonts use the same glyph for the two characters. Before desktop publishing became commonplace, it was customary to render the symbol μ in texts produced with mechanical typewriters by combining a slightly lowered slash with the letter u. This gave rise in early word processing to substituting just the letter u for the symbol if the Greek letter μ was not available, as in "15 um". The Unicode CJK Compatibility block contains square forms of some Japanese katakana measure and currency units; ㎛ (U+339B SQUARE MU M) corresponds to μm.
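The distinction between the two code points can be checked programmatically. The following is a minimal illustrative sketch (not drawn from any standard cited above) using Python's built-in unicodedata module; NFKC compatibility normalization folds the legacy micro sign and the squared ㎛ form into the preferred Greek-letter spelling:

    import unicodedata

    micro_sign = "\u00B5"  # MICRO SIGN, inherited from ISO/IEC 8859-1
    greek_mu = "\u03BC"    # GREEK SMALL LETTER MU, preferred by Unicode

    # NFKC compatibility normalization maps the legacy characters to the
    # preferred Greek-letter spelling:
    assert unicodedata.normalize("NFKC", micro_sign) == greek_mu
    assert unicodedata.normalize("NFKC", "\u339B") == greek_mu + "m"  # squared form -> "μm"

    # Unit arithmetic: 1 micrometre is one millionth of a metre, i.e. 1000 nanometres.
    micrometre = 1e-6  # in metres
    assert micrometre == 1000 * 1e-9
    print("OK: both legacy characters normalize to GREEK SMALL LETTER MU")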
Physical sciences
Metric
Basics and measurement
20640
https://en.wikipedia.org/wiki/MacOS
MacOS
macOS, originally Mac OS X and previously shortened to OS X, is a Unix-based operating system developed and marketed by Apple since 2001. It is the primary operating system for Apple's Mac computers. Within the market of desktop and laptop computers, it is the second most widely used desktop OS, after Microsoft Windows and ahead of all Linux distributions, including ChromeOS and SteamOS. The most recent release of macOS is macOS 15 Sequoia, the 21st major version of macOS. Mac OS X succeeded classic Mac OS, the primary Macintosh operating system from 1984 to 2001. Its underlying architecture came from NeXT's NeXTSTEP, as a result of Apple's acquisition of NeXT, which also brought Steve Jobs back to Apple. The first desktop version, Mac OS X 10.0, was released on March 24, 2001. Mac OS X Leopard and all later versions of macOS, other than OS X Lion, are UNIX 03 certified. The derivatives of macOS are Apple's other operating systems: iOS, iPadOS, watchOS, tvOS, and audioOS. macOS has supported three major processor architectures: originally PowerPC-based Macs in 1999, then Intel Core-based Macs from 2006, and self-designed 64-bit Arm Apple M series Macs since 2020. A prominent part of macOS's original brand identity was the use of the Roman numeral X, pronounced "ten", as well as code-naming each release after species of big cats and, later, places within California. Apple shortened the name to "OS X" in 2011 and then changed it to "macOS" in 2016 to align with the branding of Apple's other operating systems. After 16 distinct versions of macOS 10, macOS Big Sur was presented as version 11 in 2020, and every subsequent version has also incremented the major version number, similarly to classic Mac OS and iOS, while still being named after places within California.

History

Development

The heritage of what would become macOS originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed before being launched in 1989. The kernel of NeXTSTEP is based upon the Mach kernel, which was originally developed at Carnegie Mellon University, with additional kernel layers and low-level user space code derived from parts of FreeBSD and other BSD operating systems. Its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. Throughout the 1990s, Apple had tried to create a "next-generation" OS to succeed its classic Mac OS through the Taligent, Copland and Gershwin projects, but all were eventually abandoned. This led Apple to acquire NeXT in 1997, allowing NeXTSTEP, later called OPENSTEP, to serve as the basis for Apple's next-generation operating system. This purchase also led to Steve Jobs returning to Apple as interim and then permanent CEO, shepherding the transformation of the programmer-friendly OPENSTEP into a system that would be adopted by Apple's primary market of home users and creative professionals. The project was first code-named "Rhapsody" before officially being named Mac OS X.

Mac OS X

The letter "X" in Mac OS X's name refers to the number 10, a Roman numeral, and Apple has stated that it should be pronounced "ten" in this context. However, it is also commonly pronounced like the letter "X". The iPhone X, iPhone XR and iPhone XS all later followed this convention. Previous Macintosh operating systems (versions of the classic Mac OS) were named using Arabic numerals, as with Mac OS 8 and Mac OS 9.
Until macOS 11 Big Sur, all versions of the operating system were given version numbers of the form 10.x, running from 10.0 up to 10.15; starting with macOS 11 Big Sur, Apple switched to numbering major releases with numbers that increase by 1 with every major release. The first version of Mac OS X, Mac OS X Server 1.0, was a transitional product, featuring an interface resembling the classic Mac OS, though it was not compatible with software designed for the older system. Consumer releases of Mac OS X included more backward compatibility. Mac OS applications could be rewritten to run natively via the Carbon API; many could also be run directly through the Classic Environment with a reduction in performance. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with extensive praise for its sophisticated, glossy Aqua interface but criticism of its sluggish performance. With Apple's popularity at a low, the maker of FrameMaker, Adobe Inc., declined to develop new versions of it for Mac OS X. Ars Technica columnist John Siracusa, who reviewed every major OS X release up to 10.10, described the early releases in retrospect as "dog-slow, feature poor" and Aqua as "unbearably slow and a huge resource hog". Apple rapidly developed several new releases of Mac OS X. Siracusa's review of version 10.3, Panther, noted "It's strange to have gone from years of uncertainty and vaporware to a steady annual supply of major new operating system releases." Version 10.4, Tiger, reportedly shocked executives at Microsoft by offering a number of features, such as fast file searching and improved graphics processing, that Microsoft had spent several years struggling to add to Windows Vista with acceptable performance. As the operating system evolved, it moved away from the classic Mac OS, with applications being added and removed. Considering music to be a key market, Apple developed the iPod music player and music software for the Mac, including iTunes and GarageBand. Targeting the consumer and media markets, Apple emphasized its new "digital lifestyle" applications such as the iLife suite, integrated home entertainment through the Front Row media center, and the Safari web browser. With the increasing popularity of the internet, Apple offered additional online services, including the .Mac, MobileMe and most recently iCloud products. It later began selling third-party applications through the Mac App Store. Newer versions of Mac OS X also included modifications to the general interface, moving away from the striped gloss and transparency of the initial versions. Some applications began to use a brushed-metal appearance, or a non-pinstriped title bar appearance, in version 10.4. In Leopard, Apple announced a unification of the interface, with a standardized gray-gradient window style. In 2006, the first Intel Macs were released with a specialized version of Mac OS X 10.4 Tiger. A key development for the system was the announcement and release of the iPhone from 2007 onwards. While Apple's previous iPod media players used a minimal operating system, the iPhone used an operating system based on Mac OS X, which would later be called "iPhone OS" and then iOS. The simultaneous release of two operating systems based on the same frameworks placed tension on Apple, which cited the iPhone as forcing it to delay Mac OS X 10.5 Leopard.
However, after Apple opened the iPhone to third-party developers, its commercial success drew attention to Mac OS X, with many iPhone software developers showing interest in Mac development. In 2007, Mac OS X 10.5 Leopard was the sole release with universal binary components, allowing installation on both Intel Macs and select PowerPC Macs. It is also the final release with PowerPC Mac support. Mac OS X 10.6 Snow Leopard was the first version of Mac OS X to be built exclusively for Intel Macs, and the final release with 32-bit Intel Mac support. The name was intended to signal its status as an iteration of Leopard, focusing on technical and performance improvements rather than user-facing features; indeed, it was explicitly branded to developers as a "no new features" release. Since its release, several OS X or macOS releases (namely OS X Mountain Lion, OS X El Capitan, macOS High Sierra, and macOS Monterey) have followed this pattern, with a name derived from that of the predecessor, similar to the "tick–tock model" used by Intel. In two succeeding versions, Lion and Mountain Lion, Apple moved some applications to a highly skeuomorphic style of design inspired by contemporary versions of iOS, while simplifying some elements by making controls such as scroll bars fade out when not in use. This direction was, like brushed-metal interfaces, unpopular with some users, although it continued a trend of greater animation and variety in the interface previously seen in design aspects such as the Time Machine backup utility, which presented past file versions against a swirling nebula, and the glossy translucent dock of Leopard and Snow Leopard. In addition, with Mac OS X 10.7 Lion, Apple ceased to release separate server versions of Mac OS X, selling server tools as a separate downloadable application through the Mac App Store. A review described the trend in the server products as becoming "cheaper and simpler... shifting its focus from large businesses to small ones."

OS X

In 2012, with the release of OS X 10.8 Mountain Lion, the name of the system was officially shortened from Mac OS X to OS X, after the previous version had shortened the system name in a similar fashion a year prior. That year, Apple removed the head of OS X development, Scott Forstall, and design moved in a more minimal direction. Apple's new user interface design, using deep color saturation, text-only buttons and a minimal, "flat" interface, debuted with iOS 7 in 2013. With OS X engineers reportedly working on iOS 7, the version released in 2013, OS X 10.9 Mavericks, was something of a transitional release, with some of the skeuomorphic design removed, while most of the general interface of Mavericks remained unchanged. The next version, OS X 10.10 Yosemite, adopted a design similar to iOS 7 but with greater complexity suitable for an interface controlled with a mouse. From 2012 onwards, the system shifted to an annual release schedule, similar to that of iOS and of Mac OS X releases prior to 10.4 Tiger. Apple also steadily cut the cost of updates from Snow Leopard onwards, before removing upgrade fees altogether with OS X Mavericks. Some journalists and third-party software developers have suggested that this decision, while allowing more rapid feature releases, meant less opportunity to focus on stability, with no version of OS X recommendable for users requiring stability and performance above new features.
Apple's 2015 update, OS X 10.11 El Capitan, was announced to focus specifically on stability and performance improvements.

macOS

In 2016, with the release of macOS 10.12 Sierra, the name was changed from OS X to macOS to align it with the branding of Apple's other primary operating systems: iOS, watchOS, and tvOS. macOS Sierra added Siri, iCloud Drive, picture-in-picture support, a Night Shift mode that switches the display to warmer colors at night, and two Continuity features: Universal Clipboard, which syncs a user's clipboard across their Apple devices, and Auto Unlock, which can unlock a user's Mac with their Apple Watch. macOS Sierra also added support for the Apple File System (APFS), Apple's successor to the dated HFS+ file system. macOS 10.13 High Sierra, released in 2017, included performance improvements, Metal 2 and HEVC support, and made APFS the default file system for SSD boot drives. Its successor, macOS 10.14 Mojave, was released in 2018, adding a dark mode option and a dynamic wallpaper setting. It was succeeded by macOS 10.15 Catalina in 2019, which replaced iTunes with separate apps for different types of media and introduced the Catalyst system for porting iOS apps. In 2020, Apple announced macOS 11 Big Sur at that year's WWDC. This was the first increment in the primary version number of macOS since the release of Mac OS X Public Beta in 2000; updates to macOS 11 were given 11.x numbers, matching the version numbering scheme used by Apple's other operating systems. Big Sur brought major changes to the user interface and was the first version to run on Apple silicon, based on the ARM architecture. The numbering system started with Big Sur continued in 2021 with macOS 12 Monterey, in 2022 with macOS 13 Ventura, in 2023 with macOS 14 Sonoma, and in 2024 with macOS 15 Sequoia.

Timeline of releases

Architecture

At macOS's core is a POSIX-compliant operating system built on top of the XNU kernel (which incorporated large parts of the FreeBSD kernel), with a FreeBSD userland providing the standard Unix facilities available from the command line interface. Apple has released this family of software as a free and open source operating system named Darwin. On top of Darwin, Apple layered a number of components, including the Aqua interface and the Finder, to complete the GUI-based operating system which is macOS. With its original introduction as Mac OS X, the system brought a number of new capabilities to provide a more stable and reliable platform than its predecessor, the classic Mac OS. For example, pre-emptive multitasking and memory protection improved the system's ability to run multiple applications simultaneously without them interrupting or corrupting each other. Many aspects of macOS's architecture are derived from OPENSTEP, which was designed to be portable, to ease the transition from one platform to another. For example, NeXTSTEP was ported from the original 68k-based NeXT workstations to x86 and other architectures before NeXT was purchased by Apple, and OPENSTEP was later ported to the PowerPC architecture as part of the Rhapsody project. Prior to macOS High Sierra, and on drives other than solid-state drives (SSDs), the default file system was HFS+, which macOS inherited from the classic Mac OS. Operating system designer Linus Torvalds has criticized HFS+, saying it is "probably the worst file system ever", whose design is "actively corrupting user data".
He criticized the case insensitivity of file names, a design made worse when Apple extended the file system to support Unicode. The Darwin subsystem in macOS manages the file system, which includes the Unix permissions layer. In 2003 and 2005, two Macworld editors expressed criticism of the permission scheme; Ted Landau called misconfigured permissions "the most common frustration" in macOS, while Rob Griffiths suggested that some users may even have to reset permissions every day, a process which can take up to 15 minutes. More recently, another Macworld editor, Dan Frakes, called the procedure of repairing permissions vastly overused. He argued that macOS typically handles permissions properly without user intervention, and that resetting permissions should only be tried when problems emerge. The architecture of macOS incorporates a layered design: the layered frameworks aid rapid development of applications by providing existing code for common tasks. Apple provides its own software development tools, most prominently an integrated development environment called Xcode. Xcode provides interfaces to compilers that support several programming languages, including C, C++, Objective-C, and Swift. For the Mac transition to Intel processors, Xcode was modified so that developers could build their applications as a universal binary, which provides compatibility with both the Intel-based and PowerPC-based Macintosh lines. First- and third-party applications can be controlled programmatically using the AppleScript framework, retained from the classic Mac OS, or using the newer Automator application, which offers pre-written tasks that do not require programming knowledge.

Software compatibility

Apple offered two main APIs to develop software natively for macOS: Cocoa and Carbon. Cocoa was a descendant of APIs inherited from OPENSTEP with no ancestry from the classic Mac OS, while Carbon was an adaptation of classic Mac OS APIs, allowing Mac software to be minimally rewritten to run natively on Mac OS X. The Cocoa API was created as the result of a 1993 collaboration between NeXT Computer and Sun Microsystems. This heritage is highly visible for Cocoa developers, since the "NS" prefix is ubiquitous in the framework, standing variously for NeXTSTEP or NeXT/Sun. The official OPENSTEP API, published in September 1994, was the first to split the API between Foundation and ApplicationKit and the first to use the "NS" prefix. Traditionally, Cocoa programs have been mostly written in Objective-C, with Java as an alternative. However, on July 11, 2005, Apple announced that "features added to Cocoa in Mac OS X versions later than 10.4 will not be added to the Cocoa-Java programming interface." macOS also used to support the Java Platform as a "preferred software package": in practice this meant that applications written in Java fit as neatly into the operating system as possible while still being cross-platform compatible, and that graphical user interfaces written in Swing looked almost exactly like native Cocoa interfaces. Since 2014, Apple has promoted its new programming language Swift as the preferred language for software development on Apple platforms. Apple's original plan with macOS was to require all developers to rewrite their software into the Cocoa APIs. This caused much outcry among existing Mac developers, who threatened to abandon the platform rather than invest in a costly rewrite, and the idea was shelved.
To permit a smooth transition from Mac OS 9 to Mac OS X, the Carbon Application Programming Interface (API) was created. Applications written with Carbon were initially able to run natively on both classic Mac OS and Mac OS X, although this ability was later dropped as Mac OS X developed. Carbon was not included in the first product sold as Mac OS X: the little-used original release of Mac OS X Server 1.0, which also did not include the Aqua interface. Apple limited further development of Carbon from the release of Leopard onwards and announced that Carbon applications would not run in 64-bit mode. A number of macOS applications continued to use Carbon for some time afterwards, especially ones with heritage dating back to the classic Mac OS and for which updates would be difficult, uneconomic or unnecessary. This included Microsoft Office up to Office 2016, and Photoshop up to CS5. Early versions of macOS could also run some classic Mac OS applications through the Classic Environment, with performance limitations; this feature was removed from 10.5 onwards and is absent from all Macs using Intel processors. Because macOS is POSIX-compliant, many software packages written for other Unix-like systems, including Linux, can be recompiled to run on it, including many scientific and technical programs. Third-party projects such as Homebrew, Fink, MacPorts and pkgsrc provide pre-compiled or pre-formatted packages. Apple and others have provided versions of the X Window System graphical interface which can allow these applications to run with an approximation of the macOS look and feel. The current Apple-endorsed method is the open-source XQuartz project; earlier versions could use the X11 application provided by Apple, or before that the XDarwin project. Applications can be distributed to Macs and installed by the user from any source and by any method, such as downloading (with or without code signing, available via an Apple developer account), or through the Mac App Store, a marketplace of software maintained by Apple through a process requiring the company's approval. Apps installed through the Mac App Store run within a sandbox, restricting their ability to exchange information with other applications or modify the core operating system and its features. This has been cited as an advantage, by allowing users to install apps with confidence that they should not be able to damage their system, but also as a disadvantage, because the sandbox blocks the Mac App Store's use for professional applications that require elevated privileges. Applications without any code signature cannot be run by default except from a computer's administrator account. Apple produces macOS applications, some included with macOS and some sold separately; these include iWork, Final Cut Pro, Logic Pro, iLife, and the database application FileMaker. Numerous other developers also offer software for macOS. In 2018, Apple introduced an application layer, codenamed Marzipan, to port iOS apps to macOS. macOS Mojave included ports of four first-party iOS apps, including Home and News, and it was announced that the API would be available for third-party developers to use from 2019. With macOS Catalina in 2019, the application layer was made available to third-party developers as Mac Catalyst.
Hardware compatibility

List of macOS versions, the supported systems on which they run, and their RAM requirements

Tools such as XPostFacto, and patches applied to the installation media, have been developed by third parties to enable installation of newer versions of macOS on systems not officially supported by Apple. These include a number of pre-G3 Power Macintosh systems, which can be made to run up to and including Mac OS X 10.2 Jaguar; all G3-based Macs, which can run up to and including Tiger; and sub-867 MHz G4 Macs, which can run Leopard by removing the restriction from the installation DVD or by entering a command in the Mac's Open Firmware interface to tell the Leopard installer that the machine has a clock rate of 867 MHz or greater. Except for features requiring specific hardware, such as graphics acceleration or DVD writing, the operating system offers the same functionality on all supported hardware. As most Mac hardware components, or components similar to them, have been available for purchase since the Intel transition, some technology-capable groups have developed software to install macOS on non-Apple computers. These are referred to as Hackintoshes, a portmanteau of the words "hack" and "Macintosh". This violates Apple's EULA (and is therefore unsupported by Apple technical support, warranties, etc.), but communities that cater to personal users, who do not install for resale and profit, have generally been ignored by Apple. These self-made computers allow more flexibility and customization of hardware, but at the cost of leaving the user more responsible for their own machine, such as on matters of data integrity or security. Psystar, a business that attempted to profit from selling macOS on non-Apple certified hardware, was sued by Apple in 2008.

PowerPC–Intel transition

In April 2002, eWeek reported a rumor that Apple had a version of Mac OS X code-named Marklar, which ran on Intel x86 processors. The idea behind Marklar was to keep Mac OS X running on an alternative platform should Apple become dissatisfied with the progress of the PowerPC platform. These rumors subsided until late in May 2005, when various media outlets, such as The Wall Street Journal and CNET, reported that Apple would unveil Marklar in the coming months. On June 6, 2005, Steve Jobs announced in his keynote address at WWDC that Apple would be making the transition from PowerPC to Intel processors over the following two years, and that Mac OS X would support both platforms during the transition. Jobs also confirmed rumors that Apple had maintained versions of Mac OS X running on Intel processors for most of its developmental life. Intel-based Macs would run a new recompiled version of OS X along with Rosetta, a binary translation layer which enables software compiled for PowerPC Mac OS X to run on Intel Mac OS X machines. The system was included with Mac OS X versions up to version 10.6.8. Apple dropped support for Classic mode on the new Intel Macs. Third-party emulation software such as Mini vMac, Basilisk II and SheepShaver provided support for some early versions of Mac OS. A new version of Xcode and the underlying command-line compilers supported building universal binaries that would run on either architecture. PowerPC-only software was supported with Apple's official binary translation software, Rosetta, though applications eventually had to be rewritten to run properly on the newer versions released for Intel processors. Apple initially encouraged developers to produce universal binaries with support for both PowerPC and Intel.
PowerPC binaries suffered a performance penalty when run on Intel Macs through Rosetta. Moreover, some PowerPC software, such as kernel extensions and System Preferences plugins, was not supported on Intel Macs at all. Plugins for Safari needed to be compiled for the same platform as Safari, so when Safari ran on Intel it required plug-ins that had been compiled as Intel-only or universal binaries; PowerPC-only plug-ins would not work. While Intel Macs could run PowerPC, Intel, and universal binaries, PowerPC Macs supported only universal and PowerPC builds. Support for the PowerPC platform was dropped following the transition. In 2009, Apple announced at WWDC that Mac OS X 10.6 Snow Leopard would drop support for PowerPC processors and be Intel-only. Rosetta continued to be offered as an optional download or installation choice in Snow Leopard before it was discontinued with Mac OS X 10.7 Lion. In addition, new versions of Mac OS X first- and third-party software increasingly required Intel processors, including new versions of iLife, iWork, Aperture and Logic Pro.

Intel–Apple silicon transition

Rumors of Apple shifting Macs from Intel to the in-house ARM processors used by iOS devices began circulating as early as 2011, and ebbed and flowed throughout the 2010s. Rumors intensified in 2020, when numerous reports stated that the company would announce its shift to its custom processors at WWDC. Apple officially announced its shift to processors designed in-house on June 22, 2020, at WWDC 2020, with the transition planned to last approximately two years. The first release of macOS to support ARM was macOS Big Sur. Big Sur and later versions support Universal 2 binaries, which are applications consisting of both Intel (x86-64) and Apple silicon (AArch64) binaries; when launched, only the appropriate binary is run. Additionally, Intel binaries can be run on Apple silicon-based Macs using the Rosetta 2 binary translation software. The transition was completed at WWDC 2023 with the announcement of the Apple silicon Mac Pro, after three years, slightly behind schedule. The change in processor architecture allows Macs with ARM processors to run iOS and iPadOS apps natively.

Features

Aqua user interface

One of the major differences between the classic Mac OS and the current macOS was the addition of Aqua, a graphical user interface with water-like elements, in the first major release of Mac OS X. Every window element, text, graphic, or widget is drawn on-screen using spatial anti-aliasing technology. ColorSync, a technology introduced many years before, was improved and built into the core drawing engine to provide color matching for printing and multimedia professionals. Also, drop shadows were added around windows and isolated text elements to provide a sense of depth. New interface elements were integrated, including sheets (dialog boxes attached to specific windows) and drawers, which slide out and provide options. The use of soft edges, translucent colors, and pinstripes, similar to the hardware design of the first iMacs, brought more texture and color to the user interface when compared to what Mac OS 9 and Mac OS X Server 1.0's "Platinum" appearance had offered. According to Siracusa, the introduction of Aqua and its departure from the then-conventional look "hit like a ton of bricks."
Bruce Tognazzini (who founded the original Apple Human Interface Group) said that the Aqua interface in Mac OS X 10.0 represented a step backwards in usability compared with the original Mac OS interface. Third-party developers started producing skins for customizable applications and other operating systems which mimicked the Aqua appearance. To some extent, Apple has used the successful transition to this new design as leverage, at various times threatening legal action against people who make or distribute software with an interface the company says is derived from its copyrighted design. Apple has continued to change aspects of the macOS appearance and design, particularly with tweaks to the appearance of windows and the menu bar. Since 2012, Apple has sold almost all of its Mac models with high-resolution Retina displays, and macOS and its APIs have extensive support for resolution-independent development on high-resolution displays. Reviewers have described Apple's support for the technology as superior to that on Windows. The human interface guidelines published by Apple for macOS are followed by many applications, giving them a consistent user interface and keyboard shortcuts. In addition, new services for applications are included, among them spelling and grammar checkers, a special characters palette, a color picker, a font chooser and a dictionary; these global features are present in every Cocoa application, adding consistency. The graphics system OpenGL composites windows onto the screen to allow hardware-accelerated drawing. This technology, introduced in version 10.2, is called Quartz Extreme, a component of Quartz. Quartz's internal imaging model correlates well with the Portable Document Format (PDF) imaging model, making it easy to output PDF to multiple devices. As a side result, PDF viewing, and creating PDF documents from any application, are built-in features. Reflecting its popularity with design users, macOS also has system support for a variety of professional video and image formats and includes an extensive pre-installed font library, featuring many prominent brand-name designs.

Built-in components

The Finder is a file browser allowing quick access to all areas of the computer, and has been modified throughout subsequent releases of macOS. Quick Look has been part of the Finder since version 10.5. It allows for dynamic previews of files, including videos and multi-page documents, without opening any other applications. Spotlight, a file-searching technology which has been integrated into the Finder since version 10.4, allows rapid real-time searches of data files, mail messages, photos, and other information based on item properties (metadata) or content. macOS makes use of a Dock, which holds file and folder shortcuts as well as minimized windows. Apple added Exposé in version 10.3 (called Mission Control since version 10.7), a feature which includes three functions to aid navigation between windows and the desktop. Its functions are to instantly reveal all open windows as thumbnails for easy navigation to different tasks, to display all open windows as thumbnails from the current application, and to hide all windows to access the desktop. FileVault provides optional encryption of the user's files with the 128-bit Advanced Encryption Standard (AES-128).
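AES-128 is a standard cryptographic primitive rather than anything Apple-specific. Purely as a generic illustration of 128-bit AES encryption, here is a minimal sketch using the third-party Python "cryptography" package (a recent version is assumed); this shows the primitive only, not FileVault's actual implementation, which adds key derivation, XTS mode on newer systems, and recovery-key handling:

    # Generic AES-128 encryption/decryption sketch -- not Apple's
    # FileVault implementation, only the underlying cipher primitive.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)    # 16 bytes = 128 bits, hence "AES-128"
    nonce = os.urandom(16)  # per-message counter block for CTR mode

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(b"contents of a user file") + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == b"contents of a user file"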
Features introduced in version 10.4 include Automator, an application designed to create automatic workflows for different tasks; Dashboard, a full-screen group of small applications called desktop widgets that can be called up and dismissed in one keystroke; and Front Row, a media viewer interface accessed by the Apple Remote. Sync Services allowed applications to access a centralized extensible database for various elements of user data, including calendar and contact items; the operating system then managed conflicting edits and data consistency. All system icons are scalable up to 512×512 pixels as of version 10.5 to accommodate the various places where they appear in larger size, including for example the Cover Flow view, a three-dimensional graphical user interface included with iTunes, the Finder, and other Apple products for visually skimming through files and digital media libraries via cover artwork. That version also introduced Spaces, a virtual desktop implementation which enables the user to have more than one desktop and display them in an Exposé-like interface; an automatic backup technology called Time Machine, which allows users to view and restore previous versions of files and application data; and, for the first time, built-in Screen Sharing. In more recent releases, Apple has developed support for emoji characters by including the proprietary Apple Color Emoji font. Apple has also connected macOS with social networks such as Twitter and Facebook through the addition of share buttons for content such as pictures and text. Apple has brought several applications and features that originally debuted in iOS, its mobile operating system, to macOS in recent releases, notably the intelligent personal assistant Siri, which was introduced in version 10.12 of macOS.

Multilingual support

There are 47 system languages available in macOS for the user at the time of installation; the system language is used throughout the entire operating system environment. Input methods for typing in dozens of scripts can be chosen independently of the system language. Recent updates have added increased support for Chinese characters and interconnections with popular social networks in China.

Updating methods

macOS can be updated using the Software Update settings pane in System Settings or the softwareupdate command-line utility. Until OS X 10.8 Mountain Lion, a separate Software Update application performed this functionality. In Mountain Lion and later, this was merged into the Mac App Store application, although the underlying update mechanism remained unchanged and fundamentally different from the download mechanism used when purchasing an App Store application. In macOS 10.14 Mojave, the updating function was moved again, to the Software Update settings pane. Most Macs receive six or seven years of macOS updates. After a new major release of macOS, the previous two releases still receive occasional updates, but many security vulnerabilities are only patched in the latest macOS release.

Release history

Timeline of versions

Mac OS X versions were named after big cats, with the exception of Mac OS X Server 1.0 and the original public beta, from Mac OS X 10.0 until OS X 10.9 Mavericks, when Apple switched to using California locations. Prior to its release, version 10.0 was code-named internally at Apple as "Cheetah", and Mac OS X 10.1 was code-named internally as "Puma".
After the immense buzz surrounding Mac OS X 10.2, codenamed "Jaguar", Apple's product marketing began openly using the code names to promote the operating system. Mac OS X 10.3 was marketed as "Panther", Mac OS X 10.4 as "Tiger", Mac OS X 10.5 as "Leopard", Mac OS X 10.6 as "Snow Leopard", Mac OS X 10.7 as "Lion", OS X 10.8 as "Mountain Lion", and OS X 10.9 as "Mavericks". "Panther", "Tiger" and "Leopard" are registered as trademarks of Apple, but "Cheetah", "Puma" and "Jaguar" have never been registered. Apple has also registered "Lynx" and "Cougar" as trademarks, though these were allowed to lapse. Computer retailer Tiger Direct sued Apple for its use of the name "Tiger". On May 16, 2005, a US federal court in the Southern District of Florida ruled that Apple's use did not infringe on Tiger Direct's trademark.

Mac OS X Public Beta

On September 13, 2000, Apple released a US$29.95 "preview" version of Mac OS X, internally codenamed Kodiak, to gain feedback from users. The "PB", as it was known, marked the first public availability of the Aqua interface, and Apple made many changes to the UI based on customer feedback. Mac OS X Public Beta expired and ceased to function in spring 2001.

Mac OS X 10.0

On March 24, 2001, Apple released Mac OS X 10.0 (internally codenamed Cheetah). The initial version was slow, incomplete, and had very few applications available at launch, mostly from independent developers. While many critics suggested that the operating system was not ready for mainstream adoption, they recognized the importance of its initial launch as a base on which to improve. Simply releasing Mac OS X was received by the Macintosh community as a great accomplishment, for attempts to overhaul the Mac OS had been underway since 1996, delayed by countless setbacks.

Mac OS X 10.1

Later that year, on September 25, 2001, Mac OS X 10.1 (internally codenamed Puma) was released. It featured increased performance and provided missing features, such as DVD playback. Apple released 10.1 as a free upgrade CD for 10.0 users, in addition to the US$129 boxed version for people running Mac OS 9. It was discovered that the upgrade CDs were full install CDs that could be used with Mac OS 9 systems by removing a specific file; Apple later re-released the CDs in an actual stripped-down format that did not facilitate installation on such systems. On January 7, 2002, Apple announced that Mac OS X was to be the default operating system for all Macintosh products by the end of that month.

Mac OS X 10.2 Jaguar

On August 23, 2002, Apple followed up with Mac OS X 10.2 Jaguar, the first release to use its code name as part of the branding. It brought significant performance improvements and an updated version of Aqua's visual design. Jaguar also included over 150 new user-facing features, including Quartz Extreme for compositing graphics directly on an ATI Radeon or Nvidia GeForce2 MX AGP-based video card with at least 16 MB of VRAM, a system-wide repository for contact information in the new Address Book, and the iChat instant messaging client. The Happy Mac icon, which had appeared during the Mac OS startup sequence since the original Macintosh, was replaced with a grey Apple logo.

Mac OS X 10.3 Panther

Mac OS X v10.3 Panther was released on October 24, 2003. It significantly improved performance and incorporated the most extensive update yet to the user interface.
Panther included as many or more new features as Jaguar had the year before, including an updated Finder incorporating a brushed-metal interface, fast user switching, Exposé (a window manager), FileVault, Safari, iChat AV (which added video conferencing features to iChat), improved Portable Document Format (PDF) rendering, and much greater Microsoft Windows interoperability. Support for some early G3 computers, such as the "beige" Power Macs and "WallStreet" PowerBooks, was discontinued.

Mac OS X 10.4 Tiger

Mac OS X 10.4 Tiger was released on April 29, 2005. Apple stated that Tiger contained more than 200 new features. As with Panther, certain older machines were no longer supported; Tiger requires a Mac with 256 MB of RAM and a built-in FireWire port. Among the new features, Tiger introduced Spotlight, Dashboard, Smart Folders, an updated Mail program with Smart Mailboxes, QuickTime 7, Safari 2, Automator, VoiceOver, Core Image and Core Video. The initial release of the Apple TV used a modified version of Tiger with a different graphical interface and fewer applications and services. On January 10, 2006, Apple released the first Intel-based Macs along with the 10.4.4 update to Tiger. This operating system functioned identically on the PowerPC-based Macs and the new Intel-based machines, with the exception that the Intel release lacked support for the Classic environment.

Mac OS X 10.5 Leopard

Mac OS X 10.5 Leopard was released on October 26, 2007. It was called by Apple "the largest update of Mac OS X" and brought more than 300 new features. Leopard supports both PowerPC- and Intel x86-based Macintosh computers; support for the G3 processor was dropped, and G4 processors required a minimum clock rate of 867 MHz and at least 512 MB of RAM. The single install DVD worked for all supported Macs (including 64-bit machines). New features included a new look, an updated Finder, Time Machine, Spaces, Boot Camp pre-installed, full support for 64-bit applications (including graphical applications), new features in Mail and iChat, and a number of new security features. Leopard is an Open Brand UNIX 03 registered product on the Intel platform. It was also the first BSD-based OS to receive UNIX 03 certification. Leopard dropped support for the Classic Environment and all Classic applications. It was the final version of Mac OS X to support the PowerPC architecture.

Mac OS X 10.6 Snow Leopard

Mac OS X 10.6 Snow Leopard was released on August 28, 2009. Rather than delivering big changes to the appearance and end-user functionality like the previous releases of Mac OS X, Snow Leopard focused on "under the hood" changes, increasing the performance, efficiency, and stability of the operating system. For most users, the most noticeable changes were the disk space that the operating system freed up after a clean install compared to Mac OS X 10.5 Leopard, a more responsive Finder rewritten in Cocoa, faster Time Machine backups, more reliable and user-friendly disk ejects, a more powerful version of the Preview application, and a faster Safari web browser. Snow Leopard only supported machines with Intel CPUs, required at least 1 GB of RAM, and dropped default support for applications built for the PowerPC architecture (Rosetta could be installed as an additional component to retain support for PowerPC-only applications).
Snow Leopard also featured new 64-bit technology capable of supporting greater amounts of RAM, improved support for multi-core processors through Grand Central Dispatch, and advanced GPU performance with OpenCL. The 10.6.6 update introduced support for the Mac App Store, Apple's digital distribution platform for macOS applications.

OS X 10.7 Lion

OS X 10.7 Lion was released on July 20, 2011. It brought developments made in Apple's iOS, such as an easily navigable display of installed applications called Launchpad and a greater use of multi-touch gestures, to the Mac. This release removed Rosetta, making it incompatible with PowerPC applications. Changes made to the GUI included auto-hiding scrollbars that only appear when they are used, and Mission Control, which unifies Exposé, Spaces, Dashboard, and full-screen applications within a single interface. Apple also made changes to applications: they resume in the same state as they were before they were closed, similar to iOS, and documents auto-save by default.

OS X 10.8 Mountain Lion

OS X 10.8 Mountain Lion was released on July 25, 2012. Following the release of Lion the previous year, it was the first of the annual, rather than two-yearly, updates to OS X (and later macOS), which also closely aligned with the annual iOS operating system updates. It incorporates some features seen in iOS 5, including Game Center, support for iMessage in the new Messages messaging application, and Reminders as a to-do list app separate from iCal (which was renamed Calendar, like the iOS app). It also includes support for storing iWork documents in iCloud. Notification Center, which made its debut in Mountain Lion, is a desktop version similar to the one in iOS 5.0 and higher: application pop-ups are concentrated in the corner of the screen, and the Center itself is pulled out from the right side of the screen. Mountain Lion also included more features for Chinese users, including support for Baidu as an option for the Safari search engine; QQ, 163.com and 126.com services for Mail, Contacts and Calendar; and integration of Youku, Tudou and Sina Weibo into share sheets. Starting with Mountain Lion, Apple software updates (including the OS) are distributed via the App Store. This updating mechanism replaced the Apple Software Update utility.

OS X 10.9 Mavericks

OS X 10.9 Mavericks was released on October 22, 2013. It was a free upgrade for all users running Snow Leopard or later with a 64-bit Intel processor. Its changes included the addition of the previously iOS-only Maps and iBooks applications, improvements to the Notification Center, enhancements to several applications, and many under-the-hood improvements.

OS X 10.10 Yosemite

OS X 10.10 Yosemite was released on October 16, 2014. It featured a redesigned user interface similar to that of iOS 7, intended to present a more minimal, text-based "flat" design, with translucency effects and intensely saturated colors. Apple's showcase new feature in Yosemite was Handoff, which enables users with iPhones running iOS 8.1 or later to answer phone calls, receive and send SMS messages, and complete unfinished iPhone emails on their Mac. As of OS X 10.10.3, Photos replaced iPhoto and Aperture.

OS X 10.11 El Capitan

OS X 10.11 El Capitan was released on September 30, 2015. Similar to Mac OS X 10.6 Snow Leopard, Apple described this release as emphasizing "refinements to the Mac experience" and "improvements to system performance". Refinements include public transport built into the Maps application, GUI improvements to the
Technology
Operating Systems
null
20648
https://en.wikipedia.org/wiki/Melting
Melting
Melting, or fusion, is a physical process that results in the phase transition of a substance from a solid to a liquid. This occurs when the internal energy of the solid increases, typically by the application of heat or pressure, which raises the substance's temperature to the melting point. At the melting point, the ordering of ions or molecules in the solid breaks down to a less ordered state, and the solid melts to become a liquid. Substances in the molten state generally have reduced viscosity as the temperature increases. An exception to this principle is elemental sulfur, whose viscosity increases in the range of 130 °C to 190 °C due to polymerization. Some organic compounds melt through mesophases, states of partial order between solid and liquid.

First-order phase transition

From a thermodynamic point of view, at the melting point the change in Gibbs free energy ΔG of the substance is zero, but there are non-zero changes in the enthalpy (H) and the entropy (S), known respectively as the enthalpy of fusion (or latent heat of fusion) and the entropy of fusion. Melting is therefore classified as a first-order phase transition. Melting occurs when the Gibbs free energy of the liquid becomes lower than that of the solid for that material. The temperature at which this occurs is dependent on the ambient pressure. Low-temperature helium is the only known exception to the general rule. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.8 K. This means that, at appropriate constant pressures, heat must be removed from these substances in order to melt them.

Criteria

Among the theoretical criteria for melting, the Lindemann and Born criteria are those most frequently used as a basis to analyse the melting conditions. The Lindemann criterion states that melting occurs because of "vibrational instability": crystals melt when the average amplitude of the thermal vibrations of atoms is relatively high compared with interatomic distances, i.e. when $\langle \delta u^2 \rangle^{1/2} > \delta_L R_s$, where $\delta u$ is the atomic displacement, the Lindemann parameter $\delta_L \approx 0.20$–$0.25$, and $R_s$ is one half of the interatomic distance. The "Lindemann melting criterion" is supported by experimental data both for crystalline materials and for glass–liquid transitions in amorphous materials. The Born criterion is based on a rigidity catastrophe caused by the vanishing elastic shear modulus: when the crystal no longer has sufficient rigidity to mechanically withstand the load, it becomes liquid.

Supercooling

Under a standard set of conditions, the melting point of a substance is a characteristic property. The melting point is often equal to the freezing point. However, under carefully created conditions, supercooling or superheating past the melting or freezing point can occur. Water on a very clean glass surface will often supercool several degrees below the freezing point without freezing. Fine emulsions of pure water have been cooled to −38 °C without nucleation to form ice. Nucleation occurs due to fluctuations in the properties of the material. If the material is kept still, there is often nothing (such as a physical vibration) to trigger this change, and supercooling (or superheating) may occur. Thermodynamically, the supercooled liquid is in a metastable state with respect to the crystalline phase, and it is likely to crystallize suddenly.
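The first-order condition described in the section above can be restated as a short worked derivation (standard thermodynamics; the numerical values for water are given only as a familiar illustration):

    % At the melting point, solid and liquid coexist in equilibrium:
    \Delta G_{\mathrm{fus}} = \Delta H_{\mathrm{fus}} - T\,\Delta S_{\mathrm{fus}} = 0
    \quad\Longrightarrow\quad
    T_m = \frac{\Delta H_{\mathrm{fus}}}{\Delta S_{\mathrm{fus}}}
    % For ice: \Delta H_{\mathrm{fus}} \approx 6.01\ \mathrm{kJ/mol} and
    % \Delta S_{\mathrm{fus}} \approx 22.0\ \mathrm{J/(mol\,K)},
    % giving T_m \approx 273\ \mathrm{K}.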
Glasses

Glasses are amorphous solids, which are usually fabricated when the molten material cools very rapidly to below its glass transition temperature, without sufficient time for a regular crystal lattice to form. Solids are characterised by a high degree of connectivity between their molecules, while fluids have lower connectivity of their structural blocks. Melting of a solid material can also be considered as percolation via broken connections between particles, e.g. connecting bonds. In this approach, melting of an amorphous material occurs when the broken bonds form a percolation cluster, with Tg dependent on quasi-equilibrium thermodynamic parameters of the bonds, e.g. on the enthalpy (Hd) and entropy (Sd) of formation of bonds in a given system at given conditions:

$T_g = \frac{H_d}{S_d + R\ln[(1 - f_c)/f_c]}$

where $f_c$ is the percolation threshold and R is the universal gas constant. Although Hd and Sd are not true equilibrium thermodynamic parameters and can depend on the cooling rate of a melt, they can be found from available experimental data on the viscosity of amorphous materials.

Even below its melting point, quasi-liquid films can be observed on crystalline surfaces. The thickness of the film is temperature-dependent. This effect is common to all crystalline materials. This pre-melting shows its effects in, for example, frost heave, the growth of snowflakes, and, taking grain boundary interfaces into account, perhaps even in the movement of glaciers.

Related concepts

In ultrashort pulse physics, so-called nonthermal melting may take place. It occurs not because of an increase in the atomic kinetic energy, but because of changes in the interatomic potential due to the excitation of electrons. Since the electrons act like a glue sticking the atoms together, heating the electrons with a femtosecond laser alters the properties of this "glue", which may break the bonds between atoms and melt a material even without an increase of the atomic temperature. In genetics, melting DNA means separating double-stranded DNA into two single strands by heating or by the use of chemical agents, as in the polymerase chain reaction.
Physical sciences
Phase transitions
null
20650
https://en.wikipedia.org/wiki/Macroevolution
Macroevolution
Macroevolution comprises the evolutionary processes and patterns which occur at and above the species level. In contrast, microevolution is evolution occurring within the population(s) of a single species. In other words, microevolution is the scale of evolution that is limited to intraspecific (within-species) variation, while macroevolution extends to interspecific (between-species) variation. The evolution of new species (speciation) is an example of macroevolution. This is the common definition of "macroevolution" used by contemporary scientists, although the exact usage of the term has varied throughout history. Macroevolution addresses the evolution of species and higher taxonomic groups (genera, families, orders, etc.) and uses evidence from phylogenetics, the fossil record, and molecular biology to answer how different taxonomic groups exhibit different species diversity and/or morphological disparity.

Origin and changing meaning of the term

After Charles Darwin published his book On the Origin of Species in 1859, evolution was widely accepted to be a real phenomenon. However, many scientists still disagreed with Darwin that natural selection was the primary mechanism to explain evolution. Prior to the modern synthesis, during the period between the 1880s and the 1930s (dubbed the 'Eclipse of Darwinism'), many scientists argued in favor of alternative explanations. These included 'orthogenesis', whose proponents included the Russian entomologist Yuri A. Filipchenko. Filipchenko appears to have coined the term 'macroevolution' in his book Variabilität und Variation (1927). While introducing the concept, he claimed that the field of genetics is insufficient to explain "the origin of higher systematic units" above the species level. Regarding the origin of higher systematic units, Filipchenko held that 'like produces like': a taxon must originate from other taxa of equivalent rank. A new species must come from an old species, a genus from an older genus, a family from another family, and so on. Filipchenko believed this was the only way to explain the origin of the major characters that define species and especially higher taxonomic groups (genera, families, orders, etc.). For example, the origin of families must require the sudden appearance of new traits which differ in greater magnitude than the characters required for the origin of a genus or species. However, this view is no longer consistent with the contemporary understanding of evolution. Furthermore, the Linnaean ranks of 'genus' (and higher) are not real entities but artificial concepts which break down when they are combined with the process of evolution. Nevertheless, Filipchenko's distinction between microevolution and macroevolution had a major impact on the development of evolutionary science. The term was adopted by Filipchenko's protégé Theodosius Dobzhansky in his book Genetics and the Origin of Species (1937), a seminal work that contributed to the development of the Modern Synthesis. 'Macroevolution' was also adopted by those who used it to criticize the Modern Synthesis. A notable example of this was the book The Material Basis of Evolution (1940) by the geneticist Richard Goldschmidt, a close friend of Filipchenko. Goldschmidt suggested saltational evolutionary changes due either to mutations that affect the rates of developmental processes or to alterations in the chromosomal pattern.
The latter idea in particular was widely rejected by the modern synthesis, but the hopeful monster concept, based on evolutionary developmental biology (evo-devo) explanations, found a moderate revival in recent times; occasionally such dramatic changes can lead to novel features that survive. As an alternative to saltational evolution, Dobzhansky suggested that the difference between macroevolution and microevolution reflects essentially a difference in time-scales, and that macroevolutionary changes were simply the sum of microevolutionary changes over geologic time. This view became broadly accepted, and accordingly, the term macroevolution has been used widely as a neutral label for the study of evolutionary changes that take place over a very large time-scale. Further, species selection suggests that selection among species is a major evolutionary factor that is independent from and complementary to selection among organisms. Accordingly, the level of selection has become the conceptual basis of a third definition, which defines macroevolution as evolution through selection among interspecific variation. Microevolution vs Macroevolution The fact that both micro- and macroevolution (including common descent) are supported by overwhelming evidence remains uncontroversial within the scientific community. However, there has been considerable debate over the past 80 years regarding the causal and explanatory connection between microevolution and macroevolution. The ‘Extrapolation’ view holds that there is no fundamental difference between the two aside from scale; i.e. macroevolution is merely cumulative microevolution. Hence, the patterns observed at the macroevolutionary scale can be explained by microevolutionary processes over long periods of time. The ‘Decoupled’ view holds that microevolutionary processes are decoupled from macroevolutionary processes because there are separate macroevolutionary processes that cannot be sufficiently explained by microevolutionary processes alone. " ... macroevolutionary processes are underlain by microevolutionary phenomena and are compatible with microevolutionary theories, but macroevolutionary studies require the formulation of autonomous hypotheses and models (which must be tested using macroevolutionary evidence). In this (epistemologically) very important sense, macroevolution is decoupled from microevolution: macroevolution is an autonomous field of evolutionary study." – Francisco J. Ayala (1983) Many scientists see macroevolution as a field of study rather than a distinct process analogous to microevolution. Thus, macroevolution is concerned with the history of life, and macroevolutionary explanations encompass ecology, paleontology, mass extinctions, plate tectonics, and unique events such as the Cambrian explosion. Within microevolution, the evolutionary process of changing heritable characteristics (e.g. changes in allele frequencies) is described by population genetics, with mechanisms such as mutation, natural selection, and genetic drift. However, the scope of evolution can be expanded to higher scales, where different observations are made, and macroevolutionary mechanisms have been proposed to explain these. For example, speciation can be discussed in terms of the ‘mode’, i.e. how speciation occurs. Different modes of speciation include sympatric and allopatric speciation. Additionally, scientists research the 'tempo' of speciation, i.e. the rate at which species change genetically and/or morphologically. 
Classically, competing hypotheses for the tempo of speciation include phyletic gradualism and punctuated equilibrium. Lastly, the causes of speciation are also extensively researched. More questions can be asked regarding the evolution of species and higher taxonomic groups (genera, families, orders, etc.), and how these have evolved across geography and vast spans of geological time. Such questions are researched from various fields of science, which makes the study of 'macroevolution' interdisciplinary. For example: How different species are related to each other via common ancestry. This topic is researched in the field of phylogenetics. The rates of evolutionary change across time in the fossil record. Why do some groups experience a lot of change while others remain morphologically stable? Species in the latter case are often called 'living fossils'. However, this term is criticized for wrongly implying that such species have not evolved. The term 'stabilomorph' has been proposed instead. The impacts and causes of major events in palaeontological history, including mass extinctions and evolutionary diversifications. Prominent examples of mass extinctions are the Permian-Triassic and Cretaceous-Paleogene events. In contrast, famous evolutionary radiations include the Cambrian Explosion and Cretaceous Terrestrial Revolution. Why different species or higher taxonomic groups (even when they have similar ages) exhibit different survival/extinction rates, species diversity, and/or morphological disparity. The observation of long-term trends in evolution. Evolutionary trends can be passive (resembling diffusion) or driven (directional). A related question is whether these trends are directed in some way, e.g. towards complexity or simplicity. How the distinctive and complex traits which differentiate species and higher taxa from one another have evolved. Examples of this include gene duplication, heterochrony, novelty in evo-devo from facilitated variation, and constructive neutral evolution. Macroevolutionary processes Speciation According to the modern definition, the evolutionary transition from the ancestral to the daughter species is microevolutionary, because it results from selection (or, more generally, sorting) among varying organisms. However, speciation also has a macroevolutionary aspect, because it produces the interspecific variation species selection operates on. Another macroevolutionary aspect of speciation is the rate at which it successfully occurs, analogous to reproductive success in microevolution. Speciation is the process in which populations within one species change to an extent at which they become reproductively isolated, that is, they cannot interbreed anymore. However, this classical concept has been challenged and, more recently, a phylogenetic or evolutionary species concept has been adopted. The main criterion for new species under these concepts is that they be diagnosable and monophyletic, that is, that they form a clearly defined lineage. Charles Darwin first recognized that speciation can be extrapolated, so that species not only evolve into new species, but also into new genera, families and other groups of animals. In other words, macroevolution is reducible to microevolution through selection of traits over long periods of time. In addition, some scholars have argued that selection at the species level is important as well. The advent of genome sequencing enabled the discovery of gradual genetic changes both during speciation and across higher taxa. 
For instance, the evolution of humans from ancestral primates or other mammals can be traced to numerous but individual mutations. Evolution of new organs and tissues One of the main questions in evolutionary biology is how new structures evolve, such as new organs. Macroevolution is often thought to require the evolution of structures that are 'completely new'. However, fundamentally novel structures are not necessary for dramatic evolutionary change. As can be seen in vertebrate evolution, most "new" organs are actually not new—they are simply modifications of previously existing organs. For instance, the evolution of mammal diversity in the past 100 million years has not required any major innovation. All of this diversity can be explained by modification of existing organs, such as the evolution of elephant tusks from incisors. Other examples include wings (modified limbs), feathers (modified reptile scales), lungs (modified swim bladders, e.g. found in fish), or even the heart (a muscularized segment of a vein). The same concept applies to the evolution of "novel" tissues. Even fundamental tissues such as bone can evolve from combining existing proteins (collagen) with calcium phosphate (specifically, hydroxyapatite). This probably happened when certain cells that make collagen also accumulated calcium phosphate, yielding a proto-bone cell. Molecular macroevolution Microevolution is facilitated by mutations, the vast majority of which have no or very small effects on gene or protein function. For instance, the activity of an enzyme may be slightly changed or the stability of a protein slightly altered. However, occasionally mutations can dramatically change the structure and function of a protein. This may be called "molecular macroevolution". Protein function. There are countless cases in which protein function is dramatically altered by mutations. For instance, a mutation in acetaldehyde dehydrogenase (EC:1.2.1.10) can change it to a 4-hydroxy-2-oxopentanoate pyruvate lyase (EC:4.1.3.39), i.e., a mutation that changes an enzyme from one EC class to another (there are only 7 main classes of enzymes). Another example is the conversion of a yeast galactokinase (Gal1) to a transcription factor (Gal3), which can be achieved by an insertion of only two amino acids. While some mutations may not change the molecular function of a protein significantly, their biological function may be dramatically changed. For instance, most brain receptors recognize specific neurotransmitters, but that specificity can easily be changed by mutations. This has been shown with acetylcholine receptors, which can be changed to serotonin or glycine receptors, which actually have very different functions. Their similar gene structure also indicates that they must have arisen from gene duplications. Protein structure. Although protein structures are highly conserved, sometimes one or a few mutations can dramatically change a protein. For instance, an IgG-binding 4β+α fold can be transformed into an albumin-binding 3-α fold via a single amino-acid mutation. This example also shows that such a transition can happen with neither function nor native structure being completely lost. In other words, even when multiple mutations are required to convert one protein or structure into another, the structure and function are at least partially retained in the intermediary sequences. Similarly, domains can be converted into other domains (and thus other functions). 
For instance, the structures of SH3 folds can evolve into OB folds, which in turn can evolve into CLB folds. Examples Evolutionary faunas A macroevolutionary benchmark study is Sepkoski's work on marine animal diversity through the Phanerozoic. His iconic diagram of the numbers of marine families from the Cambrian to the Recent illustrates the successive expansion and dwindling of three "evolutionary faunas" that were characterized by differences in origination rates and carrying capacities. Long-term ecological changes and major geological events are postulated to have played crucial roles in shaping these evolutionary faunas. Stanley's rule Macroevolution is driven by differences between species in origination and extinction rates. Remarkably, these two factors are generally positively correlated: taxa that have typically high diversification rates also have high extinction rates. This observation was first described by Steven Stanley, who attributed it to a variety of ecological factors. Yet, a positive correlation of origination and extinction rates is also a prediction of the Red Queen hypothesis, which postulates that evolutionary progress (increase in fitness) of any given species causes a decrease in fitness of other species, ultimately driving to extinction those species that do not adapt rapidly enough. High rates of origination must therefore correlate with high rates of extinction. Stanley's rule, which applies to almost all taxa and geologic ages, is therefore an indication of a dominant role of biotic interactions in macroevolution. "Macromutations": Single mutations leading to dramatic change While the vast majority of mutations are inconsequential, some can have a dramatic effect on morphology or other features of an organism. One of the best-studied cases of a single mutation that leads to massive structural change is the Ultrabithorax mutation in fruit flies. The mutation duplicates the wings of a fly to make it look like a dragonfly, a different order of insect. Evolution of multicellularity The evolution of multicellular organisms is one of the major breakthroughs in evolution. The first step of converting a unicellular organism into a metazoan (a multicellular organism) is to allow cells to attach to each other. This can be achieved by one or a few mutations. In fact, many bacteria form multicellular assemblies, e.g. cyanobacteria or myxobacteria. Another species of bacteria, Jeongeupia sacculi, forms well-ordered sheets of cells, which ultimately develop into a bulbous structure. Similarly, unicellular yeast cells can become multicellular by a single mutation in the ACE2 gene, which causes the cells to form a branched multicellular form. Evolution of bat wings The wings of bats have the same structural elements (bones) as any other five-fingered mammal (see periodicity in limb development). However, the finger bones in bats are dramatically elongated, so the question is how these bones became so long. It has been shown that certain growth factors, such as bone morphogenetic protein 2 (Bmp2), are overexpressed so that they stimulate the elongation of certain bones. Comparative analysis of bat genomes identified the changes that lead to this phenotype, and the effect has been recapitulated in mice: when specific bat DNA carrying these mutations is inserted into the mouse genome, the bones of the mice grow longer. Limb loss in lizards and snakes Snakes evolved from lizards. 
Phylogenetic analysis shows that snakes are actually nested within the phylogenetic tree of lizards, demonstrating that they have a common ancestor. This split happened about 180 million years ago, and several intermediate fossils are known that document the origin. In fact, limbs have been lost in numerous clades of reptiles, and there are cases of recent limb loss. For instance, the skink genus Lerista has lost limbs in multiple cases, with all possible intermediary steps; that is, there are species which have fully developed limbs, shorter limbs with 5, 4, 3, 2, 1, or no toes at all. Human evolution While the evolution of humans from their primate ancestors did not require massive morphological changes, the human brain has changed sufficiently to allow human consciousness and intelligence. While this involved relatively minor morphological changes, it did result in dramatic changes to brain function. Thus, macroevolution does not have to be morphological; it can also be functional. Evolution of viviparity in lizards Most lizards are egg-laying and thus need an environment that is warm enough to incubate their eggs. However, some species have evolved viviparity, that is, they give birth to live young, as almost all mammals do. In several clades of lizards, egg-laying (oviparous) species have evolved into live-bearing ones, apparently with very little genetic change. For instance, the European common lizard, Zootoca vivipara, is viviparous throughout most of its range, but oviparous in the extreme southwest portion. That is, within a single species, a radical change in reproductive behavior has happened. Similar cases are known from South American lizards of the genus Liolaemus, which have egg-laying species at lower altitudes, but closely related viviparous species at higher altitudes, suggesting that the switch from oviparous to viviparous reproduction does not require many genetic changes. Behavior: Activity pattern in mice Most animals are either active at night or during the day. However, some species switched their activity pattern from day to night or vice versa. For instance, the African striped mouse (Rhabdomys pumilio) transitioned from the ancestrally nocturnal behavior of its close relatives to a diurnal one. Genome sequencing and transcriptomics revealed that this transition was achieved by modifying genes in the rod phototransduction pathway, among others. Research topics Subjects studied within macroevolution include: Adaptive radiations such as the Cambrian Explosion. Changes in biodiversity through time. Evo-devo (the connection between evolution and developmental biology). Genome evolution, like horizontal gene transfer, genome fusions in endosymbioses, and adaptive changes in genome size. Mass extinctions. Estimating diversification rates, including rates of speciation and extinction. The debate between punctuated equilibrium and gradualism. The role of development in shaping evolution, particularly such topics as heterochrony and phenotypic plasticity.
Biology and health sciences
Basics_4
Biology
20663
https://en.wikipedia.org/wiki/Modified%20Mercalli%20intensity%20scale
Modified Mercalli intensity scale
The Modified Mercalli intensity scale (MM, MMI, or MCS) measures the effects of an earthquake at a given location. This is in contrast with the seismic magnitude usually reported for an earthquake. Magnitude scales measure the inherent force or strength of an earthquake – an event occurring at greater or lesser depth. (The moment magnitude scale is widely used.) The MM scale measures intensity of shaking, at any particular location, on the surface. It was developed from Giuseppe Mercalli's Mercalli intensity scale of 1902. While shaking experienced at the surface is caused by the seismic energy released by an earthquake, earthquakes differ in how much of their energy is radiated as seismic waves. They also differ in the depth at which they occur; deeper earthquakes have less interaction with the surface, their energy is spread throughout a larger volume, and the energy reaching the surface is spread across a larger area. Shaking intensity is localized. It generally diminishes with distance from the earthquake's epicenter, but it can be amplified in sedimentary basins and in certain kinds of unconsolidated soils. Intensity scales categorize intensity empirically, based on the effects reported by untrained observers, and are adapted for the effects that might be observed in a particular region. By not requiring instrumental measurements, they are useful for estimating the magnitude and location of historical (preinstrumental) earthquakes: the greatest intensities generally correspond to the epicentral area, and their degree and extent (possibly augmented by knowledge of local geological conditions) can be compared with other local earthquakes to estimate the magnitude. History Italian volcanologist Giuseppe Mercalli formulated his first intensity scale in 1883. It had six degrees or categories, has been described as "merely an adaptation" of the then-standard Rossi–Forel scale of 10 degrees, and is now "more or less forgotten". Mercalli's second scale, published in 1902, was also an adaptation of the Rossi–Forel scale, retaining the 10 degrees and expanding the descriptions of each degree. This version "found favour with the users", and was adopted by the Italian Central Office of Meteorology and Geodynamics. In 1904, Adolfo Cancani proposed adding two additional degrees for very strong earthquakes, "catastrophe" and "enormous catastrophe", thus creating a 12-degree scale. Because his descriptions were deficient, August Heinrich Sieberg augmented them during 1912 and 1923, and indicated a peak ground acceleration for each degree. This became known as the "Mercalli–Cancani scale, formulated by Sieberg", or the "Mercalli–Cancani–Sieberg scale", or simply "MCS"; it was used extensively in Europe and remains in use in Italy by the National Institute of Geophysics and Volcanology (INGV). When Harry O. Wood and Frank Neumann translated this into English in 1931 (along with modification and condensation of the descriptions, and removal of the acceleration criteria), they named it the "modified Mercalli intensity scale of 1931" (MM31). Some seismologists refer to this version as the "Wood–Neumann scale". Wood and Neumann also had an abridged version, with fewer criteria for assessing the degree of intensity. The Wood–Neumann scale was revised in 1956 by Charles Francis Richter and published in his influential textbook Elementary Seismology. Not wanting to have this intensity scale confused with the Richter scale he had developed, he proposed calling it the "modified Mercalli scale of 1956" (MM56). 
In their 1993 compendium of historical seismicity in the United States, Carl Stover and Jerry Coffman ignored Richter's revision, and assigned intensities according to their slightly modified interpretation of Wood and Neumann's 1931 scale, effectively creating a new, but largely undocumented, version of the scale. The basis by which the United States Geological Survey (and other agencies) assigns intensities is nominally Wood and Neumann's MM31. However, this is generally interpreted with the modifications summarized by Stover and Coffman because in the decades since 1931, "some criteria are more reliable than others as indicators of the level of ground shaking". Also, construction codes and methods have evolved, making much of the built environment stronger; these make a given intensity of ground shaking seem weaker. Also, some of the original criteria of the most intense degrees (X and above), such as bent rails, ground fissures, landslides, etc., are "related less to the level of ground shaking than to the presence of ground conditions susceptible to spectacular failure". The categories "catastrophe" and "enormous catastrophe" added by Cancani (XI and XII) are used so infrequently that current USGS practice is to merge them into a single category "Extreme" abbreviated as "X+". Scale values The lesser degrees of the MMI scale generally describe the manner in which the earthquake is felt by people. The greater numbers of the scale are based on observed structural damage. This table gives MMIs that are typically observed at locations near the epicenter of the earthquake. Correlation with magnitude Magnitude and intensity, while related, are very different concepts. Magnitude is a function of the energy liberated by an earthquake, while intensity is the degree of shaking experienced at a point on the surface, and varies from some maximum intensity at or near the epicenter, out to zero at distance. It depends upon many factors, including the depth of the hypocenter, terrain, distance from the epicenter, whether the underlying strata there amplify surface shaking, and any directionality due to the earthquake mechanism. For example, a magnitude 7.0 quake in Salta, Argentina, in 2011, that was 576.8 km deep, had a maximum felt intensity of V, while a magnitude 2.2 event in Barrow-in-Furness, England, in 1865, about 1 km deep, had a maximum felt intensity of VIII. The small table is a rough guide to the degrees of the MMI scale. The colors and descriptive names shown here differ from those used on certain shake maps in other articles. Estimating site intensity and its use in seismic hazard assessment Dozens of intensity prediction equations have been published to estimate the macroseismic intensity at a location given the magnitude, source-to-site distance, and perhaps other parameters (e.g. local site conditions). These are similar to ground-motion prediction equations for the estimation of instrumental strong-motion parameters such as peak ground acceleration. A summary of intensity prediction equations is available. Such equations can be used to estimate the seismic hazard in terms of macroseismic intensity, which has the advantage of being related more closely to seismic risk than instrumental strong-motion parameters. Correlation with physical quantities The MMI scale is not defined in terms of more rigorous, objectively quantifiable measurements such as shake amplitude, shake frequency, peak velocity, or peak acceleration. 
Human-perceived shaking and building damage are best correlated with peak acceleration for lower-intensity events, and with peak velocity for higher-intensity events. Comparison to the moment magnitude scale The effects of any one earthquake can vary greatly from place to place, so many MMI values may be measured for the same earthquake. These values can be displayed best using a contoured map of equal intensity, known as an isoseismal map. However, each earthquake has only one magnitude.
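As a rough illustration of the intensity prediction equations discussed above, here is a minimal Python sketch of the generic attenuation form such equations typically take; the functional form is representative, but every coefficient below is a hypothetical placeholder rather than a published, regionally calibrated value.

import math

def predicted_intensity(magnitude, distance_km, c0=2.0, c1=1.5, c2=1.2, depth_km=10.0):
    # Generic form I = c0 + c1*M - c2*ln(R): intensity grows with magnitude
    # and attenuates with distance. The depth-like term keeps R nonzero at
    # the epicenter. All coefficients here are hypothetical placeholders.
    r = math.sqrt(distance_km ** 2 + depth_km ** 2)
    return c0 + c1 * magnitude - c2 * math.log(r)

# Hypothetical example: a magnitude 6.5 event observed 20 km from the epicenter.
print(round(predicted_intensity(6.5, 20.0), 1))

Published equations add further terms (e.g. for local site conditions) and are fitted to regional macroseismic data.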
Physical sciences
Seismology
Earth science
20683
https://en.wikipedia.org/wiki/Machine%20code
Machine code
In computer programming, machine code is computer code consisting of machine language instructions, which are used to control a computer's central processing unit (CPU). For conventional binary computers, machine code is the binary representation of a computer program which is actually read and interpreted by the computer. A program in machine code consists of a sequence of machine instructions (possibly interspersed with data). Each machine code instruction causes the CPU to perform a specific task. Examples of such tasks include: Load a word from memory to a CPU register Execute an arithmetic logic unit (ALU) operation on one or more registers or memory locations Jump or skip to an instruction that is not the next one In general, each architecture family (e.g., x86, ARM) has its own instruction set architecture (ISA), and hence its own specific machine code language. There are exceptions, such as the VAX architecture, which includes optional support of the PDP-11 instruction set; the IA-64 architecture, which includes optional support of the IA-32 instruction set; and the PowerPC 615 microprocessor, which can natively process both PowerPC and x86 instruction sets. Machine code is a strictly numerical language, and it is the lowest-level interface to the CPU intended for a programmer. Assembly language provides a direct map between the numerical machine code and a human-readable mnemonic. In assembly, numerical opcodes and operands are replaced with mnemonics and labels. For example, the x86 architecture has available the 0x90 opcode; it is represented as NOP in the assembly source code. While it is possible to write programs directly in machine code, managing individual bits and calculating numerical addresses is tedious and error-prone. Therefore, programs are rarely written directly in machine code. However, an existing machine code program may be edited if the assembly source code is not available. The majority of programs today are written in a high-level language. A high-level program may be translated into machine code by a compiler. Instruction set Every processor or processor family has its own instruction set. Instructions are patterns of bits, digits, or characters that correspond to machine commands. Thus, the instruction set is specific to a class of processors using (mostly) the same architecture. Successor or derivative processor designs often include instructions of a predecessor and may add new additional instructions. Occasionally, a successor design will discontinue or alter the meaning of some instruction code (typically because it is needed for new purposes), affecting code compatibility to some extent; even compatible processors may show slightly different behavior for some instructions, but this is rarely a problem. Systems may also differ in other details, such as memory arrangement, operating systems, or peripheral devices. Because a program normally relies on such factors, different systems will typically not run the same machine code, even when the same type of processor is used. A processor's instruction set may have fixed-length or variable-length instructions. How the patterns are organized varies with the particular architecture and type of instruction. 
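To make the strictly numerical nature of machine code concrete, here is a minimal Python sketch that maps a few genuine single-byte x86 opcodes (including the 0x90 NOP mentioned above) to their mnemonics; a real disassembler must additionally handle multi-byte instructions, prefixes, and operands.

# A few genuine single-byte x86 opcodes; anything else is shown as raw data.
OPCODE_TABLE = {0x90: "nop", 0xC3: "ret", 0xF4: "hlt", 0xCC: "int3"}

def disassemble(code: bytes) -> list[str]:
    return [OPCODE_TABLE.get(b, f"db 0x{b:02x}") for b in code]

print(disassemble(bytes([0x90, 0x90, 0xC3])))  # ['nop', 'nop', 'ret']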
Most instructions have one or more opcode fields that specify the basic instruction type (such as arithmetic, logical, jump, etc.), the operation (such as add or compare), and other fields that may give the type of the operand(s), the addressing mode(s), the addressing offset(s) or index, or the operand value itself (such constant operands contained in an instruction are called immediate). Not all machines or individual instructions have explicit operands. On a machine with a single accumulator, the accumulator is implicitly both the left operand and result of most arithmetic instructions. Some other architectures, such as the x86 architecture, have accumulator versions of common instructions, with the accumulator regarded as one of the general registers by longer instructions. A stack machine has most or all of its operands on an implicit stack. Special purpose instructions also often lack explicit operands; for example, CPUID in the x86 architecture writes values into four implicit destination registers. This distinction between explicit and implicit operands is important in code generators, especially in the register allocation and live range tracking parts. A good code optimizer can track implicit and explicit operands which may allow more frequent constant propagation, constant folding of registers (a register assigned the result of a constant expression freed up by replacing it by that constant) and other code enhancements. Assembly languages A much more human-friendly rendition of machine language, named assembly language, uses mnemonic codes to refer to machine code instructions, rather than using the instructions' numeric values directly, and uses symbolic names to refer to storage locations and sometimes registers. For example, on the Zilog Z80 processor, the machine code 00000101, which causes the CPU to decrement the B general-purpose register, would be represented in assembly language as DEC B. Examples IBM 709x The IBM 704, 709, 704x and 709x store one instruction in each instruction word; IBM numbers the bits from the left as S, 1, ..., 35. Most instructions have one of two formats:

Generic
  S,1-11  Opcode
  12-13   Flag, ignored in some instructions
  14-17   unused
  18-20   Tag
  21-35   Y

Index register control, other than TSX
  S,1-2   Opcode
  3-17    Decrement
  18-20   Tag
  21-35   Y

For all but the IBM 7094 and 7094 II, there are three index registers designated A, B and C; indexing with multiple 1 bits in the tag subtracts the logical or of the selected index registers, and loading with multiple 1 bits in the tag loads all of the selected index registers. The 7094 and 7094 II have seven index registers, but when they are powered on they are in multiple tag mode, in which they use only three of the index registers in a fashion compatible with earlier machines, and require a Leave Multiple Tag Mode (LMTM) instruction in order to access the other four index registers. The effective address is normally Y-C(T), where C(T) is either 0 for a tag of 0, the logical or of the selected index registers in multiple tag mode, or the selected index register if not in multiple tag mode. However, the effective address for index register control instructions is just Y. A flag with both bits 1 selects indirect addressing; the indirect address word has both a tag and a Y field. 
In addition to transfer (branch) instructions, these machines have skip instructions that conditionally skip one or two words, e.g., Compare Accumulator with Storage (CAS) does a three-way compare and conditionally skips to NSI, NSI+1 or NSI+2, depending on the result. MIPS The MIPS architecture provides a specific example of a machine code whose instructions are always 32 bits long. The general type of instruction is given by the op (operation) field, the highest 6 bits. J-type (jump) and I-type (immediate) instructions are fully specified by op. R-type (register) instructions include an additional field funct to determine the exact operation. The fields used in these types are:

   6      5     5     5     5      6    bits
[  op  |  rs |  rt |  rd |shamt| funct]   R-type
[  op  |  rs |  rt | address/immediate]   I-type
[  op  |        target address        ]   J-type

rs, rt, and rd indicate register operands; shamt gives a shift amount; and the address or immediate fields contain an operand directly. For example, adding the registers 1 and 2 and placing the result in register 6 is encoded:

[  op  |  rs |  rt |  rd |shamt| funct]
    0      1     2     6     0     32     decimal
 000000 00001 00010 00110 00000 100000    binary

Load a value into register 8, taken from the memory cell 68 cells after the location listed in register 3:

[  op  |  rs |  rt | address/immediate]
   35      3     8           68           decimal
 100011 00011 01000 0000000001000100      binary

Jumping to the address 1024:

[  op  |        target address        ]
    2               1024                  decimal
 000010 00000000000000010000000000        binary

Overlapping instructions On processor architectures with variable-length instruction sets (such as Intel's x86 processor family) it is, within the limits of the control-flow resynchronizing phenomenon known as the Kruskal count, sometimes possible through opcode-level programming to deliberately arrange the resulting code so that two code paths share a common fragment of opcode sequences. These are called overlapping instructions, overlapping opcodes, overlapping code, overlapped code, instruction scission, or jump into the middle of an instruction. In the 1970s and 1980s, overlapping instructions were sometimes used to preserve memory space. One example is the implementation of error tables in Microsoft's Altair BASIC, where interleaved instructions mutually shared their instruction bytes. The technique is rarely used today, but it may still be necessary in areas where extreme optimization for size is required at the byte level, such as in the implementation of boot loaders which have to fit into boot sectors. It is also sometimes used as a code obfuscation technique as a measure against disassembly and tampering. The principle is also used in shared code sequences of fat binaries which must run on multiple instruction-set-incompatible processor platforms. This property is also used to find unintended instructions called gadgets in existing code repositories and is used in return-oriented programming as an alternative to code injection for exploits such as return-to-libc attacks. Relationship to microcode In some computers, the machine code of the architecture is implemented by an even more fundamental underlying layer called microcode, providing a common machine language interface across a line or family of different models of computer with widely different underlying dataflows. This is done to facilitate porting of machine language programs between different models. An example of this use is the IBM System/360 family of computers and their successors. 
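The MIPS field layout above lends itself to a compact illustration. The following Python sketch packs the R-type fields into a 32-bit word and reproduces the add-registers example from the MIPS section; it is a field-packing illustration under the layout stated above, not a full assembler.

def encode_r_type(op, rs, rt, rd, shamt, funct):
    # Pack the six R-type fields (6 + 5 + 5 + 5 + 5 + 6 = 32 bits) into one word.
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add: op = 0, funct = 32; sources are registers 1 and 2, destination is register 6.
word = encode_r_type(0, 1, 2, 6, 0, 32)
assert format(word, "032b") == "00000000001000100011000000100000"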
Relationship to bytecode Machine code is generally different from bytecode (also known as p-code), which is either executed by an interpreter or itself compiled into machine code for faster (direct) execution. An exception is when a processor is designed to use a particular bytecode directly as its machine code, such as is the case with Java processors. Machine code and assembly code are sometimes called native code when referring to platform-dependent parts of language features or libraries. Storing in memory From the point of view of the CPU, machine code is stored in RAM, but is typically also kept in a set of caches for performance reasons. There may be different caches for instructions and data, depending on the architecture. The CPU knows what machine code to execute, based on its internal program counter. The program counter points to a memory address and is changed based on special instructions which may cause programmatic branches. The program counter is typically set to a hard-coded value when the CPU is first powered on, and will hence execute whatever machine code happens to be at this address. Similarly, the program counter can be set to execute whatever machine code is at some arbitrary address, even if this is not valid machine code. This will typically trigger an architecture-specific protection fault. The CPU is oftentimes told, by page permissions in a paging-based system, whether the current page actually holds machine code, by means of an execute bit — pages have multiple such permission bits (readable, writable, etc.) for various housekeeping functionality. E.g. on Unix-like systems memory pages can be toggled to be executable with the mprotect() system call, and on Windows, VirtualProtect() can be used to achieve a similar result. If an attempt is made to execute machine code on a non-executable page, an architecture-specific fault will typically occur. Treating data as machine code, or finding new ways to use existing machine code, by various techniques, is the basis of some security vulnerabilities. Similarly, in a segment-based system, segment descriptors can indicate whether a segment can contain executable code and in what rings that code can run. From the point of view of a process, the code space is the part of its address space where the code in execution is stored. In multitasking systems this comprises the program's code segment and usually shared libraries. In a multi-threading environment, different threads of one process share code space along with data space, which reduces the overhead of context switching considerably as compared to process switching. Readability by humans Various tools and methods exist to decode machine code back to its corresponding source code. Machine code can easily be decoded back to its corresponding assembly language source code because assembly language forms a one-to-one mapping to machine code. The assembly language decoding method is called disassembly. Machine code may be decoded back to its corresponding high-level language under two conditions: The first condition is to accept an obfuscated reading of the source code. An obfuscated version of source code is displayed if the machine code is sent to a decompiler of the source language. The second condition requires the machine code to have information about the source code encoded within. The information includes a symbol table that contains debug symbols. The symbol table may be stored within the executable, or it may exist in separate files. 
A debugger can then read the symbol table to help the programmer interactively debug the machine code in execution. The SHARE Operating System (1959) for the IBM 709, IBM 7090, and IBM 7094 computers allowed for a loadable code format named SQUOZE. SQUOZE was a compressed binary form of assembly language code and included a symbol table. Modern IBM mainframe operating systems, such as z/OS, have available a symbol table named Associated data (ADATA). The table is stored in a file that can be produced by the IBM High-Level Assembler (HLASM), IBM's COBOL compiler, and IBM's PL/I compiler. Microsoft Windows has available a symbol table that is stored in a program database (.pdb) file. Most Unix-like operating systems have available symbol table formats named stabs and DWARF. In macOS and other Darwin-based operating systems, the debug symbols are stored in DWARF format in a separate .dSYM file.
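Because assembly language forms a one-to-one mapping to machine code, disassembling a known instruction set is mechanical. Here is a minimal Python sketch of that round trip, using a few genuine single-byte Z80 opcodes (including the DEC B example mentioned earlier); the table is deliberately far from complete.

# Genuine single-byte Z80 opcodes; a tiny subset for illustration only.
ASM_TO_CODE = {"nop": 0x00, "inc b": 0x04, "dec b": 0x05}
CODE_TO_ASM = {v: k for k, v in ASM_TO_CODE.items()}

def assemble(lines):
    return bytes(ASM_TO_CODE[line] for line in lines)

def disassemble(code):
    return [CODE_TO_ASM[b] for b in code]

program = ["dec b", "nop", "inc b"]
assert disassemble(assemble(program)) == program  # the round trip is lossless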
Technology
Programming languages
null
20696
https://en.wikipedia.org/wiki/Class%20%28set%20theory%29
Class (set theory)
In set theory and its applications throughout mathematics, a class is a collection of sets (or sometimes other mathematical objects) that can be unambiguously defined by a property that all its members share. Classes act as a way to have set-like collections while differing from sets so as to avoid paradoxes, especially Russell's paradox (see below). The precise definition of "class" depends on foundational context. In work on Zermelo–Fraenkel set theory, the notion of class is informal, whereas other set theories, such as von Neumann–Bernays–Gödel set theory, axiomatize the notion of "proper class", e.g., as entities that are not members of another entity. A class that is not a set (informally in Zermelo–Fraenkel) is called a proper class, and a class that is a set is sometimes called a small class. For instance, the class of all ordinal numbers, and the class of all sets, are proper classes in many formal systems. In Quine's set-theoretical writing, the phrase "ultimate class" is often used instead of the phrase "proper class", emphasising that in the systems he considers, certain classes cannot be members, and are thus the final term in any membership chain to which they belong. Outside set theory, the word "class" is sometimes used synonymously with "set". This usage dates from a historical period where classes and sets were not distinguished as they are in modern set-theoretic terminology. Many discussions of "classes" in the 19th century and earlier are really referring to sets, or rather perhaps take place without considering that certain classes can fail to be sets. Examples The collection of all algebraic structures of a given type will usually be a proper class. Examples include the class of all groups, the class of all vector spaces, and many others. In category theory, a category whose collection of objects forms a proper class (or whose collection of morphisms forms a proper class) is called a large category. The surreal numbers are a proper class of objects that have the properties of a field. Within set theory, many collections of sets turn out to be proper classes. Examples include the class of all sets (the universal class), the class of all ordinal numbers, and the class of all cardinal numbers. One way to prove that a class is proper is to place it in bijection with the class of all ordinal numbers. This method is used, for example, in the proof that there is no free complete lattice on three or more generators. Paradoxes The paradoxes of naive set theory can be explained in terms of the inconsistent tacit assumption that "all classes are sets". With a rigorous foundation, these paradoxes instead suggest proofs that certain classes are proper (i.e., that they are not sets). For example, Russell's paradox suggests a proof that the class of all sets which do not contain themselves is proper, and the Burali-Forti paradox suggests that the class of all ordinal numbers is proper. The paradoxes do not arise with classes because there is no notion of classes containing classes. Otherwise, one could, for example, define a class of all classes that do not contain themselves, which would lead to a Russell paradox for classes. A conglomerate, on the other hand, can have proper classes as members. Classes in formal set theories ZF set theory does not formalize the notion of classes, so each formula with classes must be reduced syntactically to a formula without classes. For example, a formula of the form x ∈ {y | φ(y)} can be reduced to φ(x). 
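More generally, for a class A = {y | φ(y)}, the standard eliminations can be written out as follows (a routine sketch in LaTeX notation, not a quotation of any particular formalization):

\begin{align*}
x \in A &\;\equiv\; \varphi(x) \\
x = A &\;\equiv\; \forall w\,(w \in x \leftrightarrow \varphi(w)) \\
A \in x &\;\equiv\; \exists z\,\bigl(z \in x \land \forall w\,(w \in z \leftrightarrow \varphi(w))\bigr)
\end{align*}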
For a class A and a set variable symbol x, it is necessary to be able to expand each of the formulas x ∈ A, x = A, A ∈ x, and A = x into a formula without an occurrence of a class. Semantically, in a metalanguage, the classes can be described as equivalence classes of logical formulas: If M is a structure interpreting ZF, then the object-language "class-builder expression" {x | φ(x)} is interpreted in M by the collection of all the elements from the domain of M on which φ holds; thus, the class can be described as the set of all predicates equivalent to φ (which includes φ itself). In particular, one can identify the "class of all sets" with the set of all predicates equivalent to the formula x = x. Because classes do not have any formal status in the theory of ZF, the axioms of ZF do not immediately apply to classes. However, if an inaccessible cardinal is assumed, then the sets of smaller rank form a model of ZF (a Grothendieck universe), and its subsets can be thought of as "classes". In ZF, the concept of a function can also be generalised to classes. A class function is not a function in the usual sense, since it is not a set; it is rather a formula F with the property that for any set x there is no more than one set y such that the pair (x, y) satisfies F. For example, the class function mapping each set to its powerset may be expressed as the formula ∀z (z ∈ y ↔ z ⊆ x). The fact that the ordered pair (x, y) satisfies F may be expressed with the shorthand notation F(x) = y. Another approach is taken by the von Neumann–Bernays–Gödel axioms (NBG); classes are the basic objects in this theory, and a set is then defined to be a class that is an element of some other class. However, the class existence axioms of NBG are restricted so that they only quantify over sets, rather than over all classes. This causes NBG to be a conservative extension of ZFC. Morse–Kelley set theory admits proper classes as basic objects, like NBG, but also allows quantification over all proper classes in its class existence axioms. This causes MK to be strictly stronger than both NBG and ZFC. In other set theories, such as New Foundations or the theory of semisets, the concept of "proper class" still makes sense (not all classes are sets) but the criterion of sethood is not closed under subsets. For example, any set theory with a universal set has proper classes which are subclasses of sets.
Mathematics
Set theory
null
20728
https://en.wikipedia.org/wiki/Mathematical%20formulation%20of%20quantum%20mechanics
Mathematical formulation of quantum mechanics
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. These are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (L2 space mainly), and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely, as spectral values of linear operators in Hilbert space. These formulations of quantum mechanics continue to be used today. At the heart of the description are ideas of quantum state and quantum observables, which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables. Prior to the development of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of differential geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the development of quantum mechanics (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space. History of the formalism The "old quantum theory" and the need for new mathematics In the 1890s, Planck was able to derive the blackbody spectrum, which was later used to avoid the classical ultraviolet catastrophe, by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, h, is now called the Planck constant in his honor. In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons. All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of the Planck constant were actually allowed. 
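In modern notation, this phase-space area condition can be sketched in LaTeX form as the old quantization rule:

\oint p \, \mathrm{d}q = n h, \qquad n = 1, 2, 3, \ldots

where the integral is taken over one period of the closed classical orbit and h is the Planck constant.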
The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time. In 1923, de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system. The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity. The "new quantum theory" Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent. Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization. Already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics – the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index-terminology of the experimentalists of that time, not even aware that his "index-schemes" were matrices, as Born soon pointed out to him. In fact, in these early years, linear algebra was not generally popular with physicists in its present form. 
Although Schrödinger himself after a year proved the equivalence of his wave mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (he soon was the only one to have discovered a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in many types of generalizations of the field. The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators, rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations. Later developments The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases; these include the path integral formulation, the phase-space formulation of quantum mechanics and geometric quantization, quantum field theory in curved spacetime, axiomatic, algebraic and constructive quantum field theory, the C*-algebra formalism, and the generalized statistical model of quantum mechanics. A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself. Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. 
The issue of hidden variables has become in part an experimental issue with the help of quantum optics. Postulates of quantum mechanics A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a phase space formulated as a symplectic manifold, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states, observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. (It is possible to map this Hilbert-space picture to a phase-space formulation, invertibly; see below.) The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms. Description of the state of a system Each isolated physical system is associated with a (topologically) separable complex Hilbert space H with inner product ⟨φ|ψ⟩. Separability is a mathematically convenient hypothesis, with the physical interpretation that the state is uniquely determined by countably many observations. Quantum states can be identified with equivalence classes in H, where two vectors (of length 1) represent the same state if they differ only by a phase factor. As such, quantum states form a ray in projective Hilbert space, not a vector. Many textbooks fail to make this distinction, which could be partly a result of the fact that the Schrödinger equation itself involves Hilbert-space "vectors", with the result that the imprecise use of "state vector" rather than ray is very difficult to avoid. Accompanying Postulate I is the composite system postulate: the Hilbert space of a composite system is the Hilbert space tensor product of the state spaces associated with the component subsystems. In the presence of quantum entanglement, the quantum state of the composite system cannot be factored as a tensor product of states of its local constituents; instead, it is expressed as a sum, or superposition, of tensor products of states of component subsystems. A subsystem in an entangled composite system generally cannot be described by a state vector (or a ray), but instead is described by a density operator; such a quantum state is known as a mixed state. The density operator of a mixed state is a trace-class, nonnegative (positive semi-definite) self-adjoint operator ρ normalized to be of trace 1. In turn, any density operator of a mixed state can be represented as a subsystem of a larger composite system in a pure state (see purification theorem). In the absence of quantum entanglement, the quantum state of the composite system is called a separable state. The density matrix of a bipartite system in a separable state can be expressed as ρ = Σk pk ρk^A ⊗ ρk^B, where pk ≥ 0 and Σk pk = 1. If there is only a single non-zero pk, then the state can be expressed just as ρ = ρ^A ⊗ ρ^B and is called simply separable, or a product state. Measurement on a system Description of physical quantities Physical observables are represented by Hermitian matrices on H. Since these operators are Hermitian, their eigenvalues are always real, and represent the possible outcomes/results from measuring the corresponding observable. 
If the spectrum of the observable is discrete, then the possible results are quantized. Results of measurement By spectral theory, we can associate a probability measure to the values of A in any state ψ. We can also show that the possible values of the observable A in any state must belong to the spectrum of A. The expectation value (in the sense of probability theory) of the observable A for the system in the state represented by the unit vector ψ ∈ H is ⟨ψ|A|ψ⟩. If we represent the state in the basis formed by the eigenvectors of A, then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue. For a mixed state ρ, the expected value of A in the state is tr(Aρ), and the probability of obtaining an eigenvalue a_n in a discrete, nondegenerate spectrum of the corresponding observable A is given by tr(|a_n⟩⟨a_n|ρ) = ⟨a_n|ρ|a_n⟩. If the eigenvalue a_n has degenerate, orthonormal eigenvectors {|a_ni⟩}, then the projection operator onto the eigensubspace can be defined as the identity operator in the eigensubspace, P_n = Σ_i |a_ni⟩⟨a_ni|, and then the probability is tr(P_n ρ). Postulates II.a and II.b are collectively known as the Born rule of quantum mechanics. Effect of measurement on the state When a measurement is performed, only one result is obtained (according to some interpretations of quantum mechanics). This is modeled mathematically as the processing of additional information from the measurement, confining the probabilities of an immediate second measurement of the same observable. In the case of a discrete, non-degenerate spectrum, two sequential measurements of the same observable will always give the same value, assuming the second immediately follows the first. Therefore, the state vector must change as a result of measurement and collapse onto the eigensubspace associated with the eigenvalue measured. For a mixed state ρ, after obtaining an eigenvalue a_n in a discrete, nondegenerate spectrum of the corresponding observable A, the updated state is given by ρ′ = P_n ρ P_n / tr(P_n ρ), with P_n = |a_n⟩⟨a_n|. If the eigenvalue a_n has degenerate, orthonormal eigenvectors {|a_ni⟩}, then the projection operator onto the eigensubspace is P_n = Σ_i |a_ni⟩⟨a_ni|. Postulate II.c is sometimes called the "state update rule" or "collapse rule"; together with the Born rule (Postulates II.a and II.b), it forms a complete representation of measurements, and the three are sometimes collectively called the measurement postulate(s). Note that the projection-valued measures (PVM) described in the measurement postulate(s) can be generalized to positive operator-valued measures (POVM), the most general kind of measurement in quantum mechanics. A POVM can be understood as the effect on a component subsystem when a PVM is performed on a larger, composite system (see Naimark's dilation theorem). Time evolution of a system Though it is possible to derive the Schrödinger equation, which describes how a state vector evolves in time, most texts assert the equation as a postulate. Common derivations include using the de Broglie hypothesis or path integrals. Equivalently, the time evolution postulate can be stated as: For a closed system in a mixed state ρ, the time evolution is ρ(t) = U(t) ρ(0) U(t)†. The evolution of an open quantum system can be described by quantum operations (in an operator-sum formalism) and quantum instruments, and generally does not have to be unitary. Other implications of the postulates Physical symmetries act on the Hilbert space of quantum states unitarily or antiunitarily due to Wigner's theorem (supersymmetry is another matter entirely).
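The pure-state versions of the Born rule, the collapse rule and the evolution postulate just described can be written compactly (a standard sketch; A has nondegenerate eigenvalues a_n with eigenvectors |a_n⟩, and P_n = |a_n⟩⟨a_n|):

\[ p(a_n) = |\langle a_n|\psi\rangle|^2, \qquad |\psi\rangle \;\mapsto\; \frac{P_n|\psi\rangle}{\lVert P_n|\psi\rangle\rVert} \quad (\text{Born rule and state update after observing } a_n) \]
\[ i\hbar\,\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle, \qquad U(t) = e^{-iHt/\hbar} \quad (\text{Schrödinger equation, time-independent } H) \]

The mixed-state evolution ρ(t) = U(t) ρ(0) U(t)† quoted above follows directly by applying U(t) to each pure component of ρ.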
Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and other density operators mixed states. One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article. Recent research has shown that the composite system postulate (tensor product postulate) can be derived from the state postulate (Postulate I) and the measurement postulates (Postulates II); moreover, it has also been shown that the measurement postulates (Postulates II) can be derived from "unitary quantum mechanics", which includes only the state postulate (Postulate I), the composite system postulate (tensor product postulate) and the unitary evolution postulate (Postulate III). Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle; see below. Spin In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position r and time t as continuous variables, ψ = ψ(r, t). For spin wavefunctions the spin is an additional discrete variable, ψ = ψ(r, t, σ), where σ takes the values −s, −s + 1, ..., s − 1, s. That is, the state of a single particle with spin s is represented by a (2s + 1)-component spinor of complex-valued wave functions. Two classes of particles with very different behaviour are bosons, which have integer spin (s = 0, 1, 2, ...), and fermions, possessing half-integer spin (s = 1/2, 3/2, 5/2, ...). Symmetrization postulate In quantum mechanics, two particles can be distinguished from one another using two methods. By performing a measurement of intrinsic properties of each particle, particles of different types can be distinguished. Otherwise, if the particles are identical, their trajectories can be tracked, which distinguishes the particles based on the location of each particle. While the second method is permitted in classical mechanics (i.e. all classical particles are treated as distinguishable), the same cannot be said for quantum mechanical particles, since tracking trajectories is infeasible due to the fundamental uncertainty principles that govern small scales. Hence the requirement of indistinguishability of quantum particles is expressed by the symmetrization postulate. The postulate is applicable to a system of bosons or fermions, for example, in predicting the spectra of the helium atom. The postulate, explained in the following sections, can be stated as follows: the states of a system of identical particles are either totally symmetric (bosons) or totally antisymmetric (fermions) under exchange of any pair of particles. Exceptions can occur when the particles are constrained to two spatial dimensions, where the existence of particles known as anyons is possible; these are said to have a continuum of statistical properties spanning the range between fermions and bosons. The connection between the behaviour of identical particles and their spin is given by the spin–statistics theorem. It can be shown that two particles localized in different regions of space can still be represented using a symmetrized/antisymmetrized wavefunction and that independent treatment of these wavefunctions gives the same result.
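As a concrete instance of the spinor description above, for the simplest nontrivial case s = 1/2 (e.g. an electron) the wavefunction has 2s + 1 = 2 components (a standard sketch, not taken verbatim from the source):

\[ \psi(\mathbf{r}, t) = \begin{pmatrix} \psi_{+1/2}(\mathbf{r}, t) \\ \psi_{-1/2}(\mathbf{r}, t) \end{pmatrix}, \qquad \sum_{\sigma = \pm 1/2} \int |\psi_\sigma(\mathbf{r}, t)|^2 \, d^3r = 1 \]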
Hence the symmetrization postulate is applicable in the general case of a system of identical particles. Exchange degeneracy In a system of identical particles, let P be the exchange operator, which acts on the wavefunction as Pψ(1, 2) = ψ(2, 1). If a physical system of identical particles is given, the wavefunction of the system can be determined from observation, but labels cannot be assigned to the individual particles. Thus, the above exchanged wavefunction represents the same physical state as the original state, which implies that the wavefunction is not unique. This is known as exchange degeneracy. More generally, consider a linear combination of such states, ψ. For the best representation of the physical system, we expect this to be an eigenvector of P, since the exchange operator is not expected to give completely different vectors in projective Hilbert space. Since P² = 1, the possible eigenvalues of P are +1 and −1. The states of an identical-particle system are represented as symmetric for the +1 eigenvalue or antisymmetric for the −1 eigenvalue. The explicit symmetric/antisymmetric form of ψ is constructed using a symmetrizer or antisymmetrizer operator. Particles that form symmetric states are called bosons, and those that form antisymmetric states are called fermions. The relation of spin to this classification is given by the spin–statistics theorem, which shows that integer-spin particles are bosons and half-integer-spin particles are fermions. Pauli exclusion principle The property of spin relates to another basic property concerning systems of identical particles: the Pauli exclusion principle, which is a consequence of the following permutation behaviour of an N-particle wave function; again in the position representation, one must postulate that for the transposition of any two of the N particles one should always have ψ(..., q_i, ..., q_j, ...) = (±1) ψ(..., q_j, ..., q_i, ...), i.e., on transposition of the arguments of any two particles the wavefunction should reproduce itself, apart from a prefactor which is +1 for bosons but (−1) for fermions. Electrons are fermions with s = 1/2; quanta of light are bosons with s = 1. Due to the form of the antisymmetrized wavefunction, if the wavefunction of each particle is completely determined by a set of quantum numbers, then two fermions cannot share the same set of quantum numbers, since the resulting function cannot be antisymmetrized (i.e. the formula above gives zero). The same cannot be said of bosons, since their wavefunction is symmetrized, with a normalization factor involving n_i, the number of particles sharing the same single-particle wavefunction, and it does not vanish when states coincide. Exceptions to the symmetrization postulate In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories "supersymmetric" theories also exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in dimension d = 2 can one construct entities where the prefactor (±1) is replaced by an arbitrary complex number with magnitude 1; these are called anyons. In relativistic quantum mechanics, the spin–statistics theorem shows, under a certain set of assumptions, that integer-spin particles are classified as bosons and half-integer-spin particles as fermions. Anyons, which form neither symmetric nor antisymmetric states, are said to have fractional spin. Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. In particular, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of these two properties.
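A minimal worked example of the antisymmetrization just described, for two fermions occupying hypothetical single-particle states φ_a and φ_b (the names are illustrative, not from the text):

\[ \psi(x_1, x_2) = \frac{1}{\sqrt{2}}\bigl(\varphi_a(x_1)\,\varphi_b(x_2) - \varphi_b(x_1)\,\varphi_a(x_2)\bigr) \]

Setting a = b makes ψ vanish identically, which is precisely the Pauli exclusion principle; for bosons the minus sign becomes a plus, and a = b is allowed.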
Mathematical structure of quantum mechanics Pictures of dynamics Representations The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, and thus with a more intuitive link to the classical limit thereof. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics. The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent. Time as an operator The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated with a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter s, and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in s would be generated by a "Hamiltonian" H − E, where E is the energy operator and H is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of H − E (this requires the use of a rigged Hilbert space and a renormalization of the norm). This is related to the quantization of constrained systems and quantization of gauge theories. It is also possible to formulate a quantum theory of "events" where time becomes an observable. Problem of measurement The picture given in the preceding paragraphs is sufficient for the description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement. The von Neumann description of quantum measurement of an observable A, when the system is prepared in a pure state ψ, is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain): Let A have spectral resolution A = ∫ λ dE_A(λ), where E_A is the resolution of the identity (also called projection-valued measure) associated with A. Then the probability of the measurement outcome lying in an interval B of ℝ is ⟨ψ|E_A(B)ψ⟩. In other words, the probability is obtained by integrating the characteristic function of B against the countably additive measure B ↦ ⟨ψ|E_A(B)ψ⟩. If the measured value is contained in B, then immediately after the measurement, the system will be in the (generally non-normalized) state E_A(B)ψ. If the measured value does not lie in B, replace B by its complement for the above state.
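Normalizing the post-measurement state and using the idempotence of the projection makes the repeatability of this scheme explicit (a standard observation, phrased here as a sketch rather than quoted from the source):

\[ \psi' = \frac{E_A(B)\,\psi}{\lVert E_A(B)\,\psi\rVert}, \qquad E_A(B)^2 = E_A(B) \;\Longrightarrow\; \langle\psi'|E_A(B)\,\psi'\rangle = 1 \]

so an immediately repeated measurement of A yields an outcome in B with probability one, which is the projection postulate discussed next.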
For example, suppose the state space is the n-dimensional complex Hilbert space Cⁿ and A is a Hermitian matrix with eigenvalues λ_i and corresponding eigenvectors ψ_i. The projection-valued measure associated with A, E_A, is then E_A(B) = |ψ_i⟩⟨ψ_i| if the Borel set B contains only the single eigenvalue λ_i. If the system is prepared in a state ψ, then the probability of a measurement returning the value λ_i can be calculated by integrating the spectral measure over the set {λ_i}. This trivially gives |⟨ψ_i, ψ⟩|². The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate. A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections |ψ_i⟩⟨ψ_i| by a finite set of positive operators F_i whose sum is still the identity operator as before (the resolution of the identity). Just as a set of possible outcomes is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is λ_i. Instead of collapsing to the (unnormalized) state |ψ_i⟩⟨ψ_i|ψ after the measurement, the system now will be in the state F_i^{1/2}ψ (in the standard square-root convention for the measurement operators). Since the operators F_i need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds. The same formulation applies to general mixed states. In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary. However, since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps that do not increase the trace. In any case it seems that the above-mentioned problems can only be resolved if the time evolution included not only the quantum system, but also, and essentially, the classical measurement apparatus (see above). List of mathematical tools Part of the folklore of the subject concerns the mathematical physics textbook Methods of Mathematical Physics put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new. The main tools include:
linear algebra: complex numbers, eigenvectors, eigenvalues
functional analysis: Hilbert spaces, linear operators, spectral theory
differential equations: partial differential equations, separation of variables, ordinary differential equations, Sturm–Liouville theory, eigenfunctions
harmonic analysis: Fourier transforms
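To make the finite-dimensional example above concrete, here is a short numerical sketch (the matrix and state are invented for illustration, not taken from the source) that computes Born-rule probabilities and the von Neumann post-measurement state with NumPy:

import numpy as np

# A hypothetical Hermitian observable on C^2 and a normalized state.
A = np.array([[1.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 2.0]])           # A equals its conjugate transpose
psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)  # <psi|psi> = 1

# Spectral decomposition: real eigenvalues, orthonormal eigenvector columns.
eigvals, eigvecs = np.linalg.eigh(A)

# Born rule: p_i = |<v_i|psi>|^2 ; the probabilities sum to 1.
amps = eigvecs.conj().T @ psi
probs = np.abs(amps) ** 2
print("outcomes:", eigvals)
print("probabilities:", probs, "sum =", probs.sum())

# Expectation value two ways: sum_i p_i * lambda_i == <psi|A|psi>.
print("<A> =", probs @ eigvals, "=", (psi.conj() @ A @ psi).real)

# Projection postulate: after observing the first eigenvalue, the state
# collapses onto the corresponding eigenvector (up to normalization/phase).
v0 = eigvecs[:, 0]
post = v0 * (v0.conj() @ psi)       # generally non-normalized projected state
post = post / np.linalg.norm(post)  # normalize
print("post-measurement state:", post)

Running it shows the probabilities summing to one and both expectation-value computations agreeing, which is exactly the content of Postulates II.a and II.b in this finite-dimensional setting.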
Physical sciences
Quantum mechanics
Physics
20734
https://en.wikipedia.org/wiki/Microlith
Microlith
A microlith is a small stone tool usually made of flint or chert and typically a centimetre or so in length and half a centimetre wide. They were made by humans from around 35,000 years ago, across Europe, Africa, Asia and Australia. The microliths were used in spear points and arrowheads. Microliths are produced from either a small blade (microblade) or a larger blade-like piece of flint by abrupt or truncated retouching, which leaves a very typical piece of waste, called a microburin. The microliths themselves are sufficiently worked so as to be distinguishable from workshop waste or accidents. Two families of microliths are usually defined: laminar and geometric. An assemblage of microliths can be used to date an archeological site. Laminar microliths are slightly larger, and are associated with the end of the Upper Paleolithic and the beginning of the Epipaleolithic era; geometric microliths are characteristic of the Mesolithic and the Neolithic. Geometric microliths may be triangular, trapezoid or lunate. Microlith production generally declined following the introduction of agriculture (8000 BCE) but continued later in cultures with a deeply rooted hunting tradition. Regardless of type, microliths were used to form the points of hunting weapons, such as spears and (in later periods) arrows, and other artifacts and are found throughout Africa, Asia and Europe. They were utilised with wood, bone, resin and fiber to form a composite tool or weapon, and traces of wood to which microliths were attached have been found in Sweden, Denmark and England. An average of between six and eighteen microliths may often have been used in one spear or harpoon, but only one or two in an arrow. The shift from earlier larger tools had an advantage. Often the haft of a tool was harder to produce than the point or edge: replacing dull or broken microliths with new easily portable ones was easier than making new hafts or handles. Types Laminar and non-geometric microliths Laminar microliths date from at least the Gravettian culture or possibly the start of the Upper Paleolithic era, and they are found all through the Mesolithic and Neolithic eras. "Noailles" burins and micro-gravettes indicate that the production of microliths had already started in the Gravettian culture. This style of flint working flourished during the Magdalenian period and persisted in numerous Epipaleolithic traditions all around the Mediterranean basin. These microliths are slightly larger than the geometric microliths that followed and were made from the flakes of flint obtained ad hoc from a small nucleus or from a depleted nucleus of flint. They were produced either by percussion or by the application of a variable pressure (although pressure is the best option, this method of producing microliths is complicated and was not the most commonly used technique). Truncated blade There are three basic types of laminar microlith. The truncated blade type can be divided into a number of sub-types depending on the position of the truncation (for example, oblique, square or double) and according to its form, for example, concave or convex. "Raclette scrapers" are notable for their particular form, being blades or flakes whose edges have been sharply retouched until they are semicircular or even shapeless. Raclettes are indefinite cultural indicators, as they appear from the Upper Paleolithic through to the Neolithic. Backed edge blades Backed edge blades have one of the edges, generally a side one, rounded or chamfered by abrupt retouching. 
There are fewer types of these blades, and they may be divided into those where the entire edge is rounded and those where only a part is rounded, or even straight. They are fundamental in the blade-forming processes, and from them innumerable other types were developed. Dufour bladelets are up to three centimeters in length, finely shaped with a curved profile, whose retouches are semi-abrupt and which characterize a particular phase of the Aurignacian period. Solutrean backed edge blades display pronounced and abrupt retouching, so that they are long and narrow, and, although rare, characterize certain phases of the Solutrean period. Ouchtata bladelets are similar to the others, except that the retouched back is not uniform but irregular; this type of microlith characterizes certain periods of the Saharan Epipaleolithic, such as the Ibero-Maurusian. The Montbani bladelet, with partial and irregular lateral retouching, is characteristic of the Italian Tardenoisian. Micro points These are very sharp bladelets formed by abrupt retouching. There are a huge number of regional varieties of these microliths, nearly all of which are very hard to distinguish (especially those from the western area) without knowing the archaeological context in which they appear. The following is a small selection. Omitted are the foliaceous tips (also called leafed tips), which are characterized by a covering retouch and which constitute a group apart. The Châtelperrón point is not a true microlith, although it is close to the required dimensions. Its antiquity and its short, curved blade edge make it the antecedent of many laminar microliths. The Micro-gravette, or Gravette micro point, is a microlith version of the Gravette point: a narrow bladelet with an abrupt retouch, which gives it a characteristically sharp edge when compared to other types. The Azilian point links the Magdalenian microlith points with those from the western Epipaleolithic. They can be identified by a rough and invasive retouching. The Ahrensburgian point is also a peripheral Paleolithic or western Epipaleolithic piece, but with a more specific morphology, as it is formed on a blade (not on a bladelet), is obliquely truncated and has a small tongue that possibly served as a haft on a spear point. The next group contains a number of points from the Middle East characterized as cultural markers. The Emireh point from the Upper Paleolithic is almost the same as one found in Châtelperrón, which is likely to be contemporary, although they are slightly shorter and also appear to be fashioned from a blade and not a bladelet. The El-Wad point is from the end of the Upper Paleolithic from the same area, made from a very long, thin bladelet. The El-Khiam point has been identified by the Spanish archeologist González Echegaray in Protoneolithic sites in Jordan. They are little known but easy to identify by two basal notches, doubtless used for hafting. The Adelaide point is found in Australia. Its construction, based on truncations on a blade, gives it a nearly trapezoidal form. The Adelaide point emphasizes the range of variation in both time and culture of the laminar microliths; it also shows their technological differences from, but occasional morphological similarities to, geometric microliths. Laminar microliths can also sometimes be described as trapezoidal, triangular or lunate. However, they are distinct from geometric microliths because of the strokes used in the manufacture of the latter, which mainly involved the microburin technique.
Geometric microliths Geometric microliths are a clearly defined type of stone tool, at least in their basic forms. They can be divided into trapezoid, triangular and lunate (half-moon) forms, although there are many subdivisions of each of these types. The microburin deserves mention here because, although it is not a geometric microlith (or even a tool), it is now seen as a characteristic waste product from the manufacture of these geometric microliths. Geometric microliths, though rare, are present as trapezoids in Northwest Africa in the Iberomaurusian. They later appear in Europe in the Magdalenian, initially as elongated triangles and later as trapezoids (although the microburin technique is seen from the Perigordian); they are mostly seen during the Epipaleolithic and the Neolithic. They remained in existence even into the Copper Age and Bronze Age, competing with "leafed" and then metallic arrowheads. Microburin technique All the currently known geometric microliths share the same fundamental characteristics – only their shapes vary. They were all made from blades or from microblades (nearly always of flint), using the microburin technique (which implies that it is not possible to conserve the remains of the heel or the conchoidal flakes from the blank). The pieces were then finished by a percussive retouching of the edges (generally leaving one side with the natural edge of the blank), giving the piece its definitive polygonal form. For example, in order to make a triangle, two adjacent notches were retouched, leaving the third edge or base free (using the terminology of Fortea). They generally have one long axis and concave or convex edges, and it is possible for them to have a gibbosity (hump) or indentations. Triangular microliths may be isosceles, scalene or equilateral. In the case of trapezoid geometric microliths, on the other hand, the notches are not retouched, leaving a portion of the natural edge between them. Trapezoids can be further subdivided into symmetrical, asymmetrical and those with concave edges. Lunate microliths have the least diversity of all and may be either semicircular or segmental. Archeological findings and the analysis of wear marks, or use-wear analysis, have shown that, predictably, the tips of spears, harpoons and other light projectiles of varying size received the most wear. Microliths were also used on arrows from the Neolithic, although a decline in this use coincided with the appearance of bifacial or "leafed" arrowheads that became widespread in the Chalcolithic period, or Copper Age (that is, stone arrowheads were increasingly made by a different technique during this later period). Weapons and tools Not all the different types of laminar microliths had functions that are clearly understood. It is likely that they contributed to the points of spears or light projectiles, and their small size suggests that they were fixed in some way to a shaft or handle. Backed edge bladelets are particularly abundant at a site in France that preserves habitation from the late Magdalenian – the Pincevent. In the remains of some of the hearths at this location, bladelets are found in groups of three, perhaps indicating that they were mounted in threes on their handles. A javelin tip made of horn has been found at this site with grooves made for flint bladelets that could have been secured using a resinous substance. Signs of much wear and tear have been found on some of these finds.
Specialists have carried out lithic or microwear analysis on artefacts, but it has sometimes proved difficult to distinguish those fractures made during the process of fashioning the flint implement from those made during its use. Microliths found at Hengistbury Head in Dorset, England, show features that can be confused with chisel marks, but which might also have been produced when the tip hit a hard object and splintered. Microliths from other locations have presented the same problems of interpretation. An exceptional piece of evidence for the use of microliths has been found in the excavations of the cave at Lascaux in the French Dordogne. Twenty backed edge bladelets were found with the remains of a resinous substance and the imprint of a circular handle (a horn). It appears that the bladelets might have been fixed in groups like the teeth of a harpoon or similar weapon. In all these locations, the microliths found have been backed edge blades, tips and crude flakes. Despite the great number of geometric microliths that have been found in Western Europe, few examples show any clear evidence of their use, and all such examples are from the Mesolithic or Neolithic periods. Nevertheless, there is unanimity amongst researchers that these items were used to increase the penetrating potential of light projectiles such as harpoons, assegais, javelins and arrows. Discoveries Australia The most common form of microliths found in Australia are backed artefacts. The earliest backed artefacts have been dated to the terminal Pleistocene; however, they became increasingly common in Aboriginal Australian societies in the mid-Holocene, before declining in use and disappearing from the archaeological record approximately 1000 years before the British invasion of the continent in 1788. The cause of this proliferation event is debated amongst archaeologists. Geographically, they are found across almost all of continental Australia, except for the far north, but are particularly common in south-east Australia. Historically, backed artefacts were divided into asymmetrical Bondi points and symmetrical geometric microliths; however, there appears to be no geographic or temporal pattern in the distribution of these shapes. Backed artefact manufacturing workshops identified at Ngungara show significant variation in shape, which has been linked to the need to replace components of composite tools. Several studies of the production of backed artefacts have identified heat treatment as a key component, as well as the use of large flake blanks. Functional studies of backed artefacts from south-eastern Australia show that they were multipurpose and multifunctional tools with a similar range of uses as unretouched flakes found at the same sites. There is one unambiguous example of them being used as part of a composite weapon, either a spear or a club, as 17 backed artefacts were found embedded in the skeleton of an adult male dated to approximately 4000 years BP in the Sydney suburb of Narrabeen. France In France, one unusual site stands out: the Mesolithic cemetery of Téviec, an island in Brittany. Numerous flint microliths were discovered here. They are believed to date to between 6740 and 5680 years BP – quite a long occupation. The end of the settlement came at the beginning of the Neolithic period. One of the skeletons that has been found has a geometric microlith lodged in one of its vertebrae.
All indications suggest that the person died because of this projectile; whether by intention or by accident is unknown. It is widely agreed that geometric microliths were mainly used in hunting and fishing, but they may also have been used as weapons. Scandinavia Well-preserved examples of arrows with microliths in Scandinavia have been found at Loshult, at Osby in Sweden, and Tværmose, at Vinderup in Denmark. These finds, which have been preserved practically intact due to the special conditions of the peat bogs, have included wooden arrows with microliths attached to the tip by resinous substances and cords. According to radiocarbon measurements, the Loshult arrows are dated to around 8000 BC, which represents a middle part of the Maglemose culture. This is close to the Early Boreal/Late Boreal transition. England There are many examples of possible tools from Mesolithic deposits in England. Possibly the best known is a microlith from Star Carr in Yorkshire that retains residues of resin, probably used to fix it to the tip of a projectile. Recent excavations have found other examples. Archeologists at the Risby Warren V site in Lincolnshire have uncovered a row of eight triangular microliths that are equidistantly aligned along a dark stain indicating organic remains (possibly the wood from an arrow shaft). Another clear indication is from the Readycon Dene site in West Yorkshire, where 35 microliths appear to be associated with a single projectile. In Urra Moor, North Yorkshire, 25 microliths give the appearance of being related to one another, due to the extreme regularity and symmetry of their arrangement in the ground. The study of English and European artifacts in general has revealed that projectiles were made with a widely variable number of microliths: in Tværmose there was only one, in Loshult there were two (one for the tip and the other as a fin), and in White Hassocks, in West Yorkshire, more than 40 have been found together; the average is between 6 and 18 pieces for each projectile. India Early research regarded the microlithic industry in India as a Holocene phenomenon; however, newer research provides solid data placing the South Asian microlith industry as far back as 45 ka across the whole of the subcontinent. This research also synthesizes data from genetic, paleoenvironmental and archaeological studies, and proposes that the emergence of microliths in the Indian subcontinent could reflect population increase and adaptation to environmental deterioration. Sri Lanka In 1968 human burial sites were uncovered inside the Fa Hien Cave in Sri Lanka. A further excavation in 1988 yielded microlith stone tools, remnants of prehistoric fireplaces and organic material, such as floral and human remains. Radiocarbon dating indicates that the cave had been occupied from about 33,000 years ago (the Late Pleistocene and Mesolithic) to 4,750 years ago (the Neolithic, in the Middle Holocene). Human remains from the several sediment deposits were analyzed at Cornell University and studied by Kenneth A. R. Kennedy and graduate student Joanne L. Zahorsky. Sri Lanka has yielded the earliest known microliths, which did not appear in Europe until the Early Holocene. A 2019 study found that the Fa-Hien Lena cave microlith assemblage represents the earliest microlith assemblage in South Asia, dating back to c. 48,000–45,000 years ago.
Dating Laminar microliths are common artifacts from the Upper Paleolithic and the Epipaleolithic, to such a degree that numerous studies have used them as markers to date different phases of prehistoric cultures. During the Epipaleolithic and the Mesolithic, the presence of laminar or geometric microliths serves to date the deposits of different cultural traditions. For instance, in the Atlas Mountains of northwest Africa, the end of the Upper Paleolithic period coincides with the end of the Aterian tradition of producing laminar microliths, and deposits can be dated by the presence or absence of these artifacts. In the Near East, the laminar microliths of the Kebarian culture were superseded by the geometric microliths of the Natufian tradition a little more than 11,000 years ago. This pattern is repeated throughout the Mediterranean basin and across Europe in general. A similar situation is found in England, where the preponderance of elongated microliths, as opposed to other frequently occurring forms, has permitted the Mesolithic to be separated into two phases: the Earlier Mesolithic of about 8300–6700 BCE, or the ancient and laminar Mesolithic, and the Later Mesolithic, or the recent and geometric Mesolithic. Deposits can thus be dated based upon the assemblage of artifacts found.
Technology
Hand tools
null
20739
https://en.wikipedia.org/wiki/Monorail
Monorail
A monorail is a railway in which the track consists of a single rail or beam. Colloquially, the term "monorail" is often used to describe any form of elevated rail or people mover. More accurately, the term refers to the style of track. Monorail systems are most frequently implemented in large cities, airports, and theme parks. Etymology The term possibly originated in 1897 from German engineer Eugen Langen, who called an elevated railway system with suspended wagons the Eugen Langen One-railed Suspension Tramway (Einschieniges Hängebahnsystem Eugen Langen). Differentiation from other transport systems Monorails have found applications in airport transfers and medium-capacity metros. To differentiate monorails from other transport modes, the Monorail Society defines a monorail as a "single rail serving as a track for passenger or freight vehicles. In most cases, rail is elevated, but monorails can also run at grade, below grade, or in subway tunnels. Vehicles either are suspended from or straddle a narrow guide way. Monorail vehicles are wider than the guideway that supports them." Similarities Monorails are often elevated, sometimes leading to confusion with other elevated systems such as the Docklands Light Railway, Vancouver SkyTrain, the AirTrain JFK and cable-propelled systems like the Cable Liner people mover, which run on two rails. Monorail vehicles often appear similar to light rail vehicles, and can be staffed or unstaffed. They can be individual rigid vehicles, articulated single units, or multiple units coupled into trains. Like other advanced rapid transit systems, monorails can be driven by linear induction motors; like conventional railways, vehicle bodies can be connected to the beam via bogies, allowing curves to be negotiated. Monorails are sometimes used in urban areas alongside conventional parallel-railed metro systems. Mumbai Monorail serves alongside Mumbai Metro, while monorail lines are integrated with conventional rail rapid transit lines in Bangkok's MRT network. Differences Unlike some trams and light rail systems, modern monorails are always separated from other traffic and pedestrians due to the geometry of the rail. They are both guided and supported via interaction with the same single beam, in contrast to other guided systems like rubber-tyred metros, such as the Sapporo Municipal Subway; or guided buses or trams, such as Translohr. Monorails also do not use pantographs. As with other grade-separated transit systems, monorails avoid red lights, intersection turns, and traffic jams. Surface-level trains, buses, automobiles, and pedestrians can collide with one another, while vehicles on dedicated, grade-separated rights-of-way such as monorails can collide only with other vehicles on the same system, with far fewer opportunities for collision. As with other elevated transit systems, monorail passengers receive sunlight and views. Monorails can be quieter than diesel buses and trains. They obtain electricity from the track structure, whereas other modes of transit may use either third rail or overhead power lines and poles. Compared to the elevated train systems of New York, Chicago, and elsewhere, a monorail beamway casts a narrow shadow. Conversely, monorails can be more expensive than light-rail systems that do not include tunnels. In addition, monorails must either remain above ground or use larger tunnels than conventional rail systems, and they require complex track-switching equipment.
Maglev Under the Monorail Society's beam-width criterion, some, but not all, maglev systems are considered monorails, such as the Transrapid and Linimo. Maglevs differ from other monorails in that they do not physically contact the beam while moving. History Early years The first monorail prototype was made in Russia in 1820 by Ivan Elmanov. Attempts at creating monorail alternatives to conventional railways have been made since the early part of the 19th century. The Centennial Monorail was featured at the Centennial Exposition in Philadelphia in 1876. Based on its design, the Bradford and Foster Brook Railway was built in 1877 and ran for one year, from January 1878 until January 1879. Around 1879 a "one-rail" system was proposed independently by Haddon and by Stringfellow, which used an inverted "V" rail (and thus shaped like "Λ" in cross-section). It was intended for military use, but was also seen to have civilian use as a "cheap railway." Similarly, one of the first systems put into practical use was that of French engineer Charles Lartigue, who built a line between Ballybunion and Listowel in Ireland, opened in 1888; it lasted 36 years before closing in 1924 (due to damage from Ireland's Civil War). It used a load-bearing single rail and two lower, external rails for balance, the three carried on triangular supports. It was cheap to construct but tricky to operate. Possibly the first monorail locomotive was a 0-3-0 steam locomotive on this line. A high-speed monorail using the Lartigue system was proposed in 1901 between Liverpool and Manchester. The Boynton Bicycle Railroad was a steam-powered monorail in Brooklyn on Long Island, New York. It ran on a single load-bearing rail at ground level, but with a wooden overhead stabilising rail engaged by a pair of horizontally opposed wheels. The railway operated for only two years beginning in 1890. The Hotchkiss Bicycle Railroad was a monorail on which a matching pedal bicycle could be ridden. The first example was built between Smithville and Mount Holly, New Jersey, in 1892. It closed in 1897. Other examples were built in Norfolk from 1895 to 1909, Great Yarmouth, and Blackpool, UK from 1896. 1900s–1950s Early designs used a double-flanged single metal rail as an alternative to the double rail of conventional railways, both guiding and supporting the monorail car. A surviving suspended version is the oldest such system still in service: the Wuppertal monorail in Germany. Also in the early 1900s, gyro monorails, with cars gyroscopically balanced on top of a single rail, were tested but never developed beyond the prototype stage. The Ewing System, used in the Patiala State Monorail Trainways in Punjab, India, relies on a hybrid model with a load-bearing single rail and an external wheel for balance. In 1910, the Brennan gyroscopic monorail was considered for use at a coal mine in Alaska. In June 1920, the French Patent Office published FR 503782, by Henri Coanda, on a "Transporteur Aérien" (air carrier). One of the first monorails planned in the United States was in New York City in the early 1930s; it was scrapped in favor of an elevated train system. The first half of the 20th century saw many further proposed designs that either never left the drawing board or remained short-lived prototypes. One of the most interesting of these projects was the ball-bearing train designed by Nikolai Grigorievich Yarmolchuk.
This train moved on spherical wheels with electric motors embedded in them, which were located in semi-circular chutes under a wooden platform (in the full-scale project the trestle would have been concrete). A model train, built to 1/5 scale to test the vehicle concept, was capable of reaching speeds of up to 70 km/h. The full-scale project was expected to reach speeds of up to 300 km/h. 1950s–1980s In the latter half of the 20th century, monorails had settled on using larger beam- or girder-based track, with vehicles supported by one set of wheels and guided by another. In the 1950s, a 40% scale prototype of a system designed for speed of on straight stretches and on curves was built in Germany. There were designs with vehicles supported, suspended or cantilevered from the beams. In the 1950s the ALWEG straddle design emerged, followed by an updated suspended type, the SAFEGE system. Versions of ALWEG's technology are used by the two largest monorail manufacturers, Hitachi Monorail and Bombardier. In 1956, the first monorail to operate in the US began test operations in Houston, Texas. Disneyland in Anaheim, California, opened the United States' first daily operating monorail system in 1959. Later during this period, additional monorails were installed at Walt Disney World in Florida, in Seattle, and in Japan. Monorails were promoted as futuristic technology with exhibition installations and amusement park purchases, as seen in the legacy systems in use today. However, monorails gained little foothold compared to conventional transport systems. In March 1972, Alejandro Goicoechea-Omar had patent DE1755198 published, on a "Vertebrate Train", built as an experimental track in Las Palmas de Gran Canaria, Spain. Niche private-enterprise uses for monorails emerged with the growth of air travel and shopping malls, with shuttle-type systems being built. 1980s–present From the 1980s, most monorail mass transit systems have been in Japan, with a few exceptions. Tokyo Monorail, one of the world's busiest, averages 127,000 passengers per day and has served over 1.5 billion passengers since 1964. China started developing monorails in the late 2000s; it is already home to the world's largest and busiest monorail system and has a number of mass transit monorails under construction in several cities. A Bombardier Innovia Monorail-based system is under construction in Wuhu, and several "Cloudrail" systems developed by BYD are under construction in a number of cities such as Guang'an, Liuzhou, Bengbu and Guilin. Monorails have seen continuing use in niche shuttle markets and amusement parks. Modern mass transit monorail systems use developments of the ALWEG beam and tyre approach, with only two suspended types in large use. Monorail configurations have also been adopted by maglev trains. Since the 2000s, with the rise of traffic congestion and urbanization, there has been a resurgence of interest in the technology for public transport, with a number of places, such as Malta and Istanbul, today investigating monorails as a possible mass transit solution. In 2004, Chongqing Rail Transit in China adopted a unique ALWEG-based design with rolling stock that is much wider than most monorails, with capacity comparable to heavy rail. This is because Chongqing is criss-crossed by numerous hills, mountains and rivers, so tunneling is not feasible except in some cases (for example, lines 1 and 6) due to the extreme depth involved. Today it is the largest and busiest monorail system in the world.
In July 2009, two Walt Disney World monorails collided, killing one of the drivers and injuring seven passengers. The National Transportation Safety Board found the cause of the accident to be human error by both the driver and the controller, contributed to by a lack of standard operating procedures. São Paulo, Brazil, is building two high-capacity monorail lines as part of its public transportation network. Line 15 was partially opened in 2014, will be long when completed in 2022, and has a capacity of 40,000 pphpd using Bombardier Innovia Monorail trains. Line 17 will be long and is using the BYD SkyRail design. Other significant monorail systems are under construction, such as two lines for the Cairo Monorail, two lines for the MRT (Bangkok) and the SkyRail Bahia in Brazil. Types and technical aspects Modern monorails depend on a large solid beam as the vehicles' running surface. There are a number of competing designs divided into two broad classes, straddle-beam and suspended monorails. The most common type is the straddle-beam, in which the train straddles a steel or reinforced concrete beam wide. A rubber-tired carriage contacts the beam on the top and both sides for traction and to stabilize the vehicle. The style was popularized by the German company ALWEG. There is also a historical type of suspension monorail developed by German inventors Nicolaus Otto and Eugen Langen in the 1880s. It was built in the twin cities of Barmen and Elberfeld in Wuppertal, Germany, opened in 1901, and is still in operation. The Chiba Urban Monorail is the world's largest suspended network. Power Almost all modern monorails are powered by electric motors fed by dual third rails, contact wires or electrified channels attached to or enclosed in their guidance beams, but diesel-powered monorail systems also exist. Historically, some systems, such as the Lartigue Monorail, used steam locomotives. Magnetic levitation Magnetic levitation (maglev) train systems, such as the German Transrapid, were built as straddle-type monorails. The Shanghai Maglev Train runs in commercial operation at , and there are also slower maglev monorails intended for urban transport in Japan (Linimo), Korea (Incheon Airport Maglev) and China (Beijing Subway Line S1 and the Changsha Maglev Express). However, it is argued that the greater width of the maglev guideway means that such systems cannot legitimately be called monorails. Switching Some early monorails (notably the suspended monorail at Wuppertal, Germany) have a design that makes it difficult to switch from one line to another. Some other monorails avoid switching as much as possible by operating in a continuous loop or between two fixed stations, as in the Seattle Center Monorail. Current monorails are capable of more efficient switching than in the past. With suspended monorails, switching may be accomplished by moving flanges inside the beamway to shift trains to one line or another. Straddle-beam monorails require that the beam itself move for switching, which was historically an almost prohibitively ponderous procedure. Now the most common way of achieving this is to place a moving apparatus on top of a sturdy platform capable of bearing the weight of vehicles, beams and its own mechanism. Multiple-segmented beams move into place on rollers to smoothly align one beam with another to send the train in its desired direction, with the design originally developed by ALWEG capable of completing a switch in 12 seconds.
Some of these beam turnouts are quite elaborate, capable of switching between several beams or simulating a railroad double-crossover. Vehicle specifications are generally not open to the public, as is standard for rolling stock built for public services. An alternative to using a wye or other form of switch is to use a turntable, where a car sits upon a section of track that can be reoriented to several different tracks. For example, this can be used to switch a car from a storage location to the main line. The now-closed Sydney Monorail had a traverser at the depot, which allowed a train on the main line to be exchanged with another from the depot. There were about six lines in the depot, including one for maintenance. Grades Rubber-tired monorails are typically designed to cope with a 6% grade. Rubber-tired light rail or metro lines can cope with similar or greater grades – for example, the Lausanne Metro has grades of up to 12% and the Montreal Metro up to 6.5%, while VAL systems can handle 7% grades. Monorail systems Manufacturers of monorail rolling stock with operating systems include Hitachi Monorail, BYD, Bombardier Transportation (now Alstom), Scomi, PBTS (a joint venture of CRRC Nanjing Puzhen & Bombardier), Intamin and EMTC. Other developers include CRRC Qingdao Sifang, China Railway Science and Industry Group, Zhongtang Air Rail Technology, Woojin and SkyWay Group. Records
Busiest line: Line 3, Chongqing Rail Transit, 682,800 passengers per day (2014 daily average)
Largest system: Chongqing Rail Transit (Lines 2 & 3)
Longest straddle-beam line: Line 3, Chongqing Rail Transit (longer if the Jurenba branch is included)
Largest suspended system: Chiba Urban Monorail
Longest maglev line: Shanghai Maglev Train
Oldest line still in service: Schwebebahn Wuppertal, 1901
In popular culture François Truffaut's 1966 film adaptation of Ray Bradbury's 1953 novel Fahrenheit 451 contains suspended monorail exterior scenes filmed at the French SAFEGE test track in Châteauneuf-sur-Loire near Orléans, France (since dismantled). The Thunderbirds February 1966 episode "Brink of Disaster" is about the financing and building of a high-speed driverless cross-country monorail project. Two of the Thunderbirds crew find themselves trapped on board a monorail train, with no possibility of escape, when it is discovered that it is speeding towards a stricken bridge. The James Bond film franchise features monorails in three movies, all belonging to the villain. In You Only Live Twice (1967) there is a working ground-level monorail inside the SPECTRE volcano base. During Live and Let Die (1973), a prop monorail is shown in the villain's lair on the fictional Caribbean island of San Monique. In the 1977 The Spy Who Loved Me there is a working monorail on the villain's supertanker (submarine dock). In 1987, Lego released a monorail as part of the Futuron Space line. Despite being the most expensive Lego set of its time (due to being massive and including electrical elements), it was very popular, with Lego releasing a Town-themed monorail in 1990 and another Space monorail in 1994 as part of the Unitron line, as well as additional track. The monorail system was also prominent in the unreleased Seatron Space line and prototype Wild West sets. Its popularity has endured over thirty years later, with Lego paying homage in promotional sets and fans manufacturing compatible components. The fourth season of the American animated television show The Simpsons features the episode "Marge vs.
the Monorail", in which the town of Springfield impulsively purchases a faulty monorail from a confidence trickster at a wildly inflated price. The Monorail Society, an organization with 14,000 members worldwide, has blamed the episode for sullying the reputation of monorails, to which Simpsons creator Matt Groening responded "That's a by-product of our viciousness...Monorails are great, so it makes me sad, but at the same time if something's going to happen in The Simpsons, it's going to go wrong, right?" The 2005 feature film Batman Begins features a monorail, constructed by Bruce Wayne's father through Gotham City, that is part of the climax of the film. The monorail is also included in the spin-off video game. Blaine the Mono is a train featured in Stephen King's The Dark Tower series of books and first appears in The Dark Tower III: The Waste Lands. Monorails have also appeared in a number of other video games including Transport Tycoon (since 1999), Japanese Rail Sim 3D: Monorail Trip to Okinawa by Sonic Powered, SimCity 4: Rush Hour, Cities in Motion 2, Cities: Skylines in the Mass transit expansion pack of 2017, Planet Zoo and a rideable elevated monorail system in the 2020 video game Cyberpunk 2077. Perceptions of monorail as public transport From 1950 to 1980, the monorail concept may have suffered, as with all public transport systems, from competition with the automobile. At the time, the post–World War II optimism in America was riding high and people were buying automobiles in large numbers due to suburbanization and the Interstate Highway System. Monorails in particular may have suffered from the reluctance of public transit authorities to invest in the perceived high cost of un-proven technology when faced with cheaper mature alternatives. There were also many competing monorail technologies, splitting their case further. One notable example of a public monorail is the AMF Monorail that was used as transportation around the 1964–1965 World's Fair. This high-cost perception was challenged most notably in 1963 when the ALWEG consortium proposed to finance the construction of a major system in Los Angeles County, California, in return for the right of operation. This was turned down by the Los Angeles County Board of Supervisors under pressure from Standard Oil of California and General Motors (which were strong advocates for automobile dependency), and the later proposed subway system faced criticism by famed author Ray Bradbury as it had yet to reach the scale of the proposed monorail. Several monorails initially conceived as transport systems survive on revenues generated from tourism, benefiting from the unique views offered from the largely elevated installations. Farm, mining and logistics applications Monorails have been used for number of applications other than passenger transportation. Small suspended monorail are also widely used in factories either as part of moveable assembly lines. History Inspired by the Centennial Monorail demonstrated in 1876, in 1877 the Bradford and Foster Brook Railway began construction of a line connecting Bradford and Foster Township, McKean County in Pennsylvania. The line operated from 1878 until 1879 delivering machinery and oil supplies. The first twin-boiler locomotive wore out quickly. It was replaced by a single boiler locomotive which was too heavy and crashed through the track on its third trip. The third locomotive again had twin boilers. On a trial run one of the boilers ran dry and exploded, killing six people. 
The railway was closed soon after. Monorails in Central Java were used to transport timber from forests in the mountains down to the rivers. In 1908 and 1909, the forester H. J. L. Beck built a manually operated monorail of limited but sufficient capacity for the transport of small timber and firewood in the Northern Surabaya forest district. In later years, this idea was further developed by L. A. van de Ven, who was a forester in the Grobogan forest district around 1908–1910. Monorails were built by plantation operators and wood processing companies throughout the mountains of Central Java. In 1919/1920, however, the hand-operated monorails gradually disappeared and were replaced by narrow-gauge railways with steam locomotives as forest utilization changed. In the 1920s the Port of Hamburg used a petrol-powered suspended monorail to transport luggage and freight from ocean-going vessels to a passenger depot. In the northern Mojave Desert, the Epsom Salts Monorail was built in 1924. It ran for 28 miles from a connection on the Trona Railway eastward to harvest epsomite deposits in the Owlshead Mountains. This Lartigue-type monorail achieved gradients of up to ten percent. It only operated until June 1926, when the mineral deposits became uneconomic, and was dismantled for scrap in the late 1930s. In the Soviet Union, the Lyskovsky monorail in the Nizhny Novgorod region was designed by the timber-industry engineer Ivan Gorodtsov. A Lartigue-type line of about long was opened in November 1934 to connect the village of Selskaya Maza with the villages of Bakaldy and Yaloksha to carry timber. Following this example, a separate cargo-and-passenger monorail was built from the town of Bor to the village of Zavrazhnoe, where forest and peat were exploited. The Lyskovsky monorail stopped operating in 1949. The British firm Road Machines (Drayton) Ltd developed a modular-track ground-level monorail system with a rail high, in segments long, running between support plates. The first system was sold in 1949 and it was used in industrial, construction and agricultural applications around the world. The company ceased trading in 1967. The system was adapted for use in the 1967 James Bond film You Only Live Twice. An example of the system exists at the Amberley Museum & Heritage Centre in Britain.
Recent applications
Very small and lightweight systems are used widely on farms to transport crops such as bananas. First developed in Japan, industrial versions of slope cars are used in agriculture in steep-sloped areas such as citrus orchards in Japan and vineyards in Italy. One European manufacturer says they have installed 650 systems worldwide. In the mining industry, suspended monorails have been used because of their ability to descend and climb steep tunnels using rack-and-pinion drive. This significantly reduces the cost and length of tunnels (by up to 60% in some cases), which otherwise must be built at gentle gradients to suit road vehicles or conventional railways. A suspended monorail capable of carrying fully loaded 20' and 40' containers has been under construction since 2020 at the Port of Qingdao, the first phase of which was put into operation in 2021.
Meteor
A meteor, known colloquially as a shooting star, is the glowing streak of a small body (usually a meteoroid) passing through Earth's atmosphere, heated to incandescence by collisions with air molecules in the upper atmosphere, creating a trail of light through its rapid motion and sometimes also by shedding glowing material in its wake. Although a meteor may seem to be a few thousand feet from the Earth, meteors typically occur in the mesosphere at altitudes from . The root word meteor comes from the Greek meteōros, meaning "high in the air". Millions of meteors occur in Earth's atmosphere daily. Most meteoroids that cause meteors are about the size of a grain of sand, i.e. they are usually millimeter-sized or smaller. Meteoroid sizes can be calculated from their mass and density which, in turn, can be estimated from the observed meteor trajectory in the upper atmosphere. Meteors may occur in showers, which arise when Earth passes through a stream of debris left by a comet, or as "random" or "sporadic" meteors, not associated with a specific stream of space debris. A number of specific meteors have been observed, largely by members of the public and largely by accident, but with enough detail that the orbits of the meteoroids producing the meteors have been calculated. The atmospheric velocities of meteors result from the movement of Earth around the Sun at about , the orbital speeds of meteoroids, and the gravity well of Earth. Meteors become visible between about above Earth. They usually disintegrate at altitudes of . Meteors have roughly a fifty percent chance of a daylight (or near-daylight) collision with Earth. Most meteors are, however, observed at night, when darkness allows fainter objects to be recognized. For bodies with a size scale larger than the atmospheric mean free path ( to several meters), meteor visibility is due to the atmospheric ram pressure (not friction) that heats the meteoroid so that it glows and creates a shining trail of gases and melted meteoroid particles. The gases include vaporised meteoroid material and atmospheric gases that heat up when the meteoroid passes through the atmosphere. Most meteors glow for about a second.
History
Meteors were not known to be an astronomical phenomenon until early in the nineteenth century. Before that, they were seen in the West as an atmospheric phenomenon, like lightning, and were not connected with strange stories of rocks falling from the sky. In 1807, Yale University chemistry professor Benjamin Silliman investigated a meteorite that fell in Weston, Connecticut. Silliman believed the meteor had a cosmic origin, but meteors did not attract much attention from astronomers until the spectacular meteor storm of November 1833. People all across the eastern United States saw thousands of meteors, radiating from a single point in the sky. Careful observers noticed that the radiant, as the point is called, moved with the stars, staying in the constellation Leo. The astronomer Denison Olmsted extensively studied this storm, concluding that it had a cosmic origin. After reviewing historical records, Heinrich Wilhelm Matthias Olbers predicted the storm's return in 1867, drawing other astronomers' attention to the phenomenon. Hubert A. Newton's more thorough historical work led to a refined prediction of 1866, which proved correct. With Giovanni Schiaparelli's success in connecting the Leonids (as they are called) with comet Tempel-Tuttle, the cosmic origin of meteors was firmly established.
Still, they remain an atmospheric phenomenon and retain their name "meteor" from the Greek word for "atmospheric".
Fireball
A fireball is a brighter-than-usual meteor which can also be seen during daylight. The International Astronomical Union (IAU) defines a fireball as "a meteor brighter than any of the planets" (apparent magnitude −4 or greater). The International Meteor Organization (an amateur organization that studies meteors) has a more rigid definition. It defines a fireball as a meteor that would have a magnitude of at least −3 if seen at the zenith. This definition corrects for the greater distance between an observer and a meteor near the horizon. For example, a meteor of magnitude −1 at 5 degrees above the horizon would be classified as a fireball because, if the observer had been directly below the meteor, it would have appeared as magnitude −6 (a numerical sketch of this correction appears below). Fireballs reaching apparent magnitude −14 or brighter are called bolides. The IAU has no official definition of "bolide", and generally considers the term synonymous with "fireball". Astronomers often use "bolide" to identify an exceptionally bright fireball, particularly one that explodes in a meteor air burst. They are sometimes called detonating fireballs. It may also be used to mean a fireball which creates audible sounds. In the late twentieth century, bolide has also come to mean any object that hits Earth and explodes, with no regard to its composition (asteroid or comet). The word bolide comes from the Greek βολίς (bolis), which can mean a missile or to flash. If the magnitude of a bolide reaches −17 or brighter, it is known as a superbolide. A relatively small percentage of fireballs hit Earth's atmosphere and then pass out again: these are termed Earth-grazing fireballs. Such an event happened in broad daylight over North America in 1972. Another rare phenomenon is a meteor procession, where the meteor breaks up into several fireballs traveling nearly parallel to the surface of Earth. A steadily growing number of fireballs are recorded by the American Meteor Society every year. There are probably more than 500,000 fireballs a year, but most go unnoticed because most occur over the ocean and half occur during daytime. The European Fireball Network and NASA's All-sky Fireball Network detect and track many fireballs.
Effect on atmosphere
The entry of meteoroids into Earth's atmosphere produces three main effects: ionization of atmospheric molecules, dust that the meteoroid sheds, and the sound of passage. During the entry of a meteoroid or asteroid into the upper atmosphere, an ionization trail is created, where the air molecules are ionized by the passage of the meteor. Such ionization trails can last up to 45 minutes at a time. Small, sand-grain-sized meteoroids enter the atmosphere constantly, essentially every few seconds in any given region of the atmosphere, and thus ionization trails can be found in the upper atmosphere more or less continuously. When radio waves are bounced off these trails, it is called meteor burst communications. Meteor radars can measure atmospheric density and winds by measuring the decay rate and Doppler shift of a meteor trail. Most meteoroids burn up when they enter the atmosphere. The left-over debris is called meteoric dust or just meteor dust. Meteor dust particles can persist in the atmosphere for up to several months. These particles might affect climate, both by scattering electromagnetic radiation and by catalyzing chemical reactions in the upper atmosphere.
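The zenith correction in the fireball definition above can be illustrated numerically. The following Python sketch is a rough illustration only, not the International Meteor Organization's official procedure: it assumes the meteor burns at a fixed altitude, so that its distance from the observer grows roughly as 1/sin(elevation), converts the distance ratio into magnitudes via the inverse-square law, and ignores atmospheric extinction; the function name is our own.

    import math

    def zenith_corrected_magnitude(observed_mag, elevation_deg):
        # A meteor at a fixed altitude is about 1/sin(elevation) times farther
        # away than the same meteor seen at the zenith; 5*log10 of that
        # distance ratio converts the inverse-square change in brightness
        # into magnitudes.
        distance_ratio = 1.0 / math.sin(math.radians(elevation_deg))
        return observed_mag - 5.0 * math.log10(distance_ratio)

    # The example above: magnitude -1 observed 5 degrees above the horizon
    # corresponds to roughly magnitude -6 at the zenith, i.e. a fireball.
    print(round(zenith_corrected_magnitude(-1.0, 5.0), 1))  # about -6.3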
Meteoroids or their fragments achieve dark flight after deceleration to terminal velocity. Dark flight starts when they decelerate to about . Larger fragments fall further down the strewn field.
Colours
The visible light produced by a meteor may take on various hues, depending on the chemical composition of the meteoroid and the speed of its movement through the atmosphere. As layers of the meteoroid abrade and ionize, the colour of the light emitted may change according to the layering of minerals. Colours of meteors depend on the relative influence of the metallic content of the meteoroid versus the superheated air plasma engendered by its passage:
Orange-yellow (sodium)
Yellow (iron)
Blue-green (magnesium)
Violet (calcium)
Red (atmospheric nitrogen and oxygen)
Acoustic manifestations
The sound generated by a meteor in the upper atmosphere, such as a sonic boom, typically arrives many seconds after the visual light from a meteor disappears. Occasionally, as with the Leonid meteor shower of 2001, "crackling", "swishing", or "hissing" sounds have been reported, occurring at the same instant as a meteor flare. These are sometimes called electrophonic sounds. Similar sounds have also been reported during intense displays of Earth's auroras. Theories on the generation of these sounds may partially explain them. For example, scientists at NASA suggested that the turbulent ionized wake of a meteor interacts with Earth's magnetic field, generating pulses of radio waves. As the trail dissipates, megawatts of electromagnetic power could be released, with a peak in the power spectrum at audio frequencies. Physical vibrations induced by the electromagnetic impulses would then be heard if they are powerful enough to make grasses, plants, eyeglass frames, the hearer's own body (see microwave auditory effect), and other conductive materials vibrate. This proposed mechanism, although proven plausible by laboratory work, remains unsupported by corresponding measurements in the field. Sound recordings made under controlled conditions in Mongolia in 1998 support the contention that the sounds are real. (See also Bolide.)
Meteor shower
A meteor shower is the result of an interaction between a planet, such as Earth, and streams of debris from a comet or other source. The passage of Earth through cosmic debris from comets and other sources is a recurring event in many cases. Comets can produce debris by water vapor drag, as demonstrated by Fred Whipple in 1951, and by breakup. Each time a comet swings by the Sun in its orbit, some of its ice vaporizes and a number of meteoroids are shed. The meteoroids spread out along the entire orbit of the comet to form a meteoroid stream, also known as a "dust trail" (as opposed to a comet's "dust tail", which is caused by the very small particles that are quickly blown away by solar radiation pressure). The frequency of fireball sightings increases by about 10–30% during the weeks around the vernal equinox. Even meteorite falls are more common during the northern hemisphere's spring season. Although this phenomenon has been known for quite some time, the reason behind the anomaly is not fully understood by scientists. Some researchers attribute this to an intrinsic variation in the meteoroid population along Earth's orbit, with a peak in big fireball-producing debris around spring and early summer. Others have pointed out that during this period the ecliptic is (in the northern hemisphere) high in the sky in the late afternoon and early evening.
This means that fireball radiants with an asteroidal source are high in the sky (facilitating relatively high rates) at the moment the meteoroids "catch up" with Earth, coming from behind and going in the same direction as Earth. This results in relatively low relative speeds and, from this, low entry speeds, which facilitates the survival of meteorites. It also generates high fireball rates in the early evening, increasing the chances of eyewitness reports. This explains part, but perhaps not all, of the seasonal variation. Research is in progress to map the orbits of the meteors to gain a better understanding of the phenomenon.
Notable meteors
1992 – Peekskill, New York
The Peekskill meteorite was recorded on October 9, 1992, by at least 16 independent videographers. Eyewitness accounts indicate the fireball entry of the Peekskill meteorite started over West Virginia at 23:48 UT (±1 min). The fireball, which traveled in a northeasterly direction, had a pronounced greenish colour, and attained an estimated peak visual magnitude of −13. During a luminous flight time that exceeded 40 seconds, the fireball covered a ground path of some . One meteorite recovered at Peekskill, New York, from which the event and object gained their names, had a mass of and was subsequently identified as an H6 monomict breccia meteorite. The video record suggests that the Peekskill meteorite had several companions over a wide area. The companions are unlikely to be recovered in the hilly, wooded terrain in the vicinity of Peekskill.
2009 – Bone, Indonesia
A large fireball was observed in the skies near Bone, Sulawesi, Indonesia on October 8, 2009. This was thought to be caused by an asteroid approximately in diameter. The fireball released an estimated energy of 50 kilotons of TNT, or about twice that of the Nagasaki atomic bomb. No injuries were reported.
2009 – Southwestern US
A large bolide was reported on 18 November 2009 over southeastern California, northern Arizona, Utah, Wyoming, Idaho and Colorado. At 00:07 local time, a security camera at the high-altitude W. L. Eccles Observatory ( above sea level) recorded a movie of the passage of the object to the north. It had a spherical "ghost" image slightly trailing the main object (likely a lens reflection of the intense fireball) and a bright fireball explosion associated with the breakup of a substantial fraction of the object. An object trail continued northward after the fireball. The shock from the final breakup triggered seven seismological stations in northern Utah. The seismic data yielded a terminal location of the object at 40.286 N, −113.191 W, altitude . This is above the Dugway Proving Ground, a closed Army testing base.
2013 – Chelyabinsk Oblast, Russia
The Chelyabinsk meteor was an extremely bright, exploding fireball, or superbolide, measuring about across, with an estimated mass of 11,000 tonnes, as the relatively small asteroid entered Earth's atmosphere. It was the largest natural object known to have entered Earth's atmosphere since the Tunguska event in 1908. Over 1,500 people were injured, mostly by glass from shattered windows caused by the air burst approximately above the environs of Chelyabinsk, Russia, on 15 February 2013. An increasingly bright streak was observed during morning daylight, with a large contrail lingering behind.
No less than one minute, and up to at least three minutes, after the object peaked in intensity (depending on distance from the trail), a large concussive blast was heard that shattered windows and set off car alarms; it was followed by a number of smaller explosions.
2019 – Midwestern United States
On November 11, 2019, a meteor was spotted streaking across the skies of the Midwestern United States. In the St. Louis area, security cameras, dashcams, webcams, and video doorbells captured the object as it burned up in Earth's atmosphere. The superbolide meteor was part of the South Taurids meteor shower. It traveled east to west, ending its flight somewhere near Wellsville, Missouri.
Monitoring
In a number of countries, networks of sky-observing installations have been set up to monitor meteors, including:
FRIPON
North American Meteor Network
Desert Fireball Network
European Fireball Network
Minivan
Minivan (sometimes called simply a van) is a car classification for vehicles designed to transport passengers in the rear seating row(s), with reconfigurable seats in two or three rows. The equivalent classification in Europe is MPV (multi-purpose vehicle), people carrier, or M-segment. Compared with a full-size van, most minivans are based on a passenger car platform and have a lower body. Early models such as the Ford Aerostar and Chevrolet Astro utilized a compact pickup truck platform. Minivans often have a 'one-box' or 'two-box' body configuration, a higher roof, a flat floor, sliding doors for rear passengers, and high H-point seating. The largest size of minivans is also referred to as 'Large MPV' and became popular following the introduction of the 1984 Dodge Caravan and Renault Espace. Typically, these have platforms derived from D-segment passenger cars or compact pickups. Since the 1990s, the smaller compact MPV and mini MPV sizes of minivans have also become popular. Though predecessors to the minivan date back to the 1930s, the contemporary minivan body style was developed concurrently by several companies in the early 1980s, most notably by Chrysler (producer of the Chrysler minivans) and Renault (the Renault Espace), both first sold for model year 1984. Minivans cut into and eventually overshadowed the traditional market of the station wagon and grew in global popularity and diversity throughout the 1990s. Since the 2000s, their reception has varied in different parts of the world: in North America, for example, they have been largely eclipsed by crossovers and SUVs, while in Asia they are commonly marketed as luxury vehicles.
Etymology
The term minivan originated in North America and the United Kingdom in 1959. In the UK, the Minivan was a small van manufactured by Austin based on the newly introduced Mini car. In the US, the term was used to differentiate the smaller passenger vehicles from full-size vans (such as the Ford E-Series, Dodge Ram Van, and Chevrolet Van), which were then called 'vans'. The first known use of the term was in 1959, but not until the 1980s was it commonly used.
Characteristics
Chassis
In contrast to larger vans, most modern minivans/MPVs use a front-engine, front-wheel drive layout, while some model lines offer all-wheel drive as an option (e.g., Toyota Sienna, Toyota Previa, Chrysler Pacifica). Alongside adopting the form factor introduced by Chrysler minivans, the configuration allows for less engine intrusion and a lower floor in the passenger compartment. In line with larger full-size vans, unibody construction has been commonly used (the spaceframe design of the Renault Espace and the General Motors APV minivans being exceptions). Minivans/MPVs are produced on distinct chassis architecture or share platforms with other vehicles such as sedans and crossover SUVs. Minivans do not have as much ground clearance, towing capacity, or off-road capability as SUVs. Minivans provide more space for passengers and cargo than sedans and SUVs.
Body style
Minivans/MPVs use either a two-box or a one-box body design with A, B, C, and D pillars. The cabin may be fitted with two, three, or four rows of seats, with the most common configurations being 2+3+2 or 2+3+3. Compared to other types of passenger vehicles, the body shape of minivans is designed to maximize interior space for both passengers and cargo.
This is achieved by lengthening the wheelbase and creating a flatter floor, a taller roof, and a more upright side profile, though not as prominent as those of commercial-oriented vans, which are boxier in profile. Practicality and comfort for passengers are also enhanced with a larger rear cargo opening and larger windows. Some minivans/MPVs may use sliding doors, while others offer conventional forward-hinged doors. Initially a feature of the 1982 Nissan Prairie, a driver-side sliding door was introduced by the 1996 Chrysler minivans; by 2002, all minivans were sold with doors on both sides of the body. Most minivans are configured with a rear liftgate; a few minivans have used panel-style rear doors, for example, cargo versions of the Chevrolet Astro, Ford Aerostar, and the Mercedes-Benz V-Class.
Interior
Most minivans have a reconfigurable interior to carry passengers and their effects. The first examples were designed with removable rear seats that unlatched from the floor for removal and storage (in line with larger vans); however, the design was poorly received, as many seats were heavy and hard to remove. In 1995, the Honda Odyssey was introduced with a third-row seat that folded flat into the floor, which was then adopted by many competitors, including Chrysler, which introduced fold-flat second-row and third-row seats in 2005. High-end minivans may include distinguishing features such as captain's seats or ottoman seats, as opposed to bench seats, for the second row.
Predecessors
Before the adoption of the term minivan, there was a long history of one-box passenger vehicles roughly approximating the body style, with the 1936 Stout Scarab often cited as the first minivan. The passenger seats in the Scarab were moveable and could be configured for the passengers to sit around a table in the rear of the cabin. Passengers entered and exited the Scarab via a centrally-mounted door. The DKW Schnellaster (manufactured from 1949 until 1962) featured front-wheel drive, a transverse engine, a flat floor, and multi-configurable seating, all of which would later become characteristics of minivans. In 1950, the Volkswagen Type 2 adapted a bus-shaped body to the chassis of a small passenger car (the Volkswagen Beetle). When Volkswagen introduced a sliding side door to the Type 2 in 1968, it then had the prominent features that would later come to define a minivan: compact length, three rows of forward-facing seats, a station wagon-style top-hinged tailgate/liftgate, a sliding side door, and a passenger car base. The 1956–1969 Fiat Multipla also had many features in common with modern minivans. The Multipla was based on the chassis of the Fiat 600 and had a rear engine and cab-forward layout. The early 1960s saw Ford and Chevrolet introduce "compact" vans for the North American market, the Econoline Club Wagon and Greenbrier respectively. The Ford version was marketed in the Falcon series, the Chevrolet in the Corvair 95 series. The Econoline grew larger in the 1970s, while the Greenbrier was joined by (and later replaced by) the Chevy Van.
North America
Due to their larger footprint and engines, minivans developed for the North American market are distinct from most minivans/MPVs marketed in other regions, such as Europe and Asia. , the average exterior length for minivans in North America has ranged around , while many models use V6 engines with more than , mainly to fulfill the towing capacity requirements that North American customers demand. In 2021, sales of the segment totalled 310,630 units in the U.S.
(2.1% of the overall car market), and 33,544 in Canada (2.0% of the overall car market). , the passenger-oriented minivan segment consists of the Toyota Sienna, Chrysler Pacifica, Chrysler Voyager, Honda Odyssey, and Kia Carnival.
History
1970s and 1980s
In the late 1970s, Chrysler began a development program to design "a small affordable van that looked and handled more like a car." The results of this program were the first American minivans based on the S platform, the 1984 Plymouth Voyager and Dodge Caravan. The S minivans introduced the minivan design features of front-wheel drive, a flat floor, and a sliding door for rear passengers. The term minivan came into use largely in comparison with full-size vans; at six feet tall or lower, 1980s minivans were intended to fit inside a typical garage door opening. In 1984, The New York Times described minivans as "the hot cars coming out of Detroit," noting that "analysts say the mini-van has created an entirely new market, one that may well overshadow the... station wagon." In response to the popularity of the Voyager/Caravan, General Motors released the badge-engineered twins, the 1985 Chevrolet Astro and GMC Safari, and Ford released the 1986 Ford Aerostar. These vehicles used a traditional rear-wheel drive layout, unlike the Voyager/Caravan. To match the launch of minivans by American manufacturers, Japanese manufacturers introduced the Toyota Van, Nissan Vanette, and Mitsubishi Delica to North America in 1984, 1986, and 1987, respectively. These vehicles were marketed with the generic "Van" and "Wagon" names (for cargo and passenger vans, respectively). In 1989, the Mazda MPV was released as the first Japanese-brand minivan developed from the ground up specifically for the North American market. Its larger chassis made room for an optional V6 engine and four-wheel drive. In contrast to the sliding doors of American minivans, a hinged passenger-side door was used. A driver-side entry was added for 1996, as Mazda gradually remarketed the model line as an early crossover SUV. By the end of the 1980s, demand for minivans as family vehicles had largely superseded that for full-size station wagons in the United States.
1990s
During the 1990s, the minivan segment underwent several significant changes. Many models switched to the front-wheel drive layout used by the Voyager/Caravan minivans. For example, Ford replaced the Aerostar with the front-wheel drive Mercury Villager for 1993 and the Ford Windstar for 1995. The models also increased in size, following the extended-wheelbase ("Grand") versions of the Voyager and Caravan launched in 1987. An increase in luxury features and interior equipment was seen in the Eddie Bauer version of the 1988 Ford Aerostar, the 1990 Chrysler Town & Country, and the 1990 Oldsmobile Silhouette. The third-generation Plymouth Voyager, Dodge Caravan, and Chrysler Town & Country – released for the 1996 model year – had an additional sliding door on the driver's side. Following the 1990 discontinuation of the Nissan Vanette in the United States, Nissan also ended the sale of the second-generation Nissan Axxess. Nissan reentered the segment by forming a joint venture with Ford to develop and assemble a minivan that became the Nissan Quest and its Mercury Villager counterpart. Toyota also introduced the Toyota Previa in 1990 to replace the Van/Wagon in North America. It was designed solely as a passenger vehicle, sized to compete with American-market minivans.
For 1998, the Toyota Sienna became the first Japanese-brand minivan assembled in North America, replacing the Toyota Previa in that market. For 1999, Honda introduced a separate version of the Odyssey for North America, with North America receiving a larger vehicle with sliding doors.
2000s and 2010s
The best-selling year for minivans was 2000, when 1.4 million units were sold. However, in the following years, sales of minivans began to decrease. In 2013, the segment's sales reached approximately 500,000, one-third of its 2000 peak. The market share of minivans fell to around 2% in 2019, after a steady decline from 2004, when the segment recorded above 6% of share. It has been suggested that the falling popularity of minivans is due to the increasing popularity of SUVs and crossovers, and their increasingly undesirable image as a vehicle for older drivers or the soccer mom demographic. From 2000 onward, several minivan manufacturers adopted boxier square-based exterior designs and began offering more advanced equipment, including power doors and liftgates; seating that folded flat into the cabin floor; DVD/VCR entertainment systems; in-dash navigation and rear-view cameras (both only offered on higher-end trims); and parking sensors. However, the Quest and Sedona only echoed these design changes in their third and second generations, respectively. At the same time, Chrysler introduced fold-flat seating in 2005 (under the trademark "Stow 'n Go"). Mazda's MPV did not feature power doors and was discontinued in 2017. Due to the market decline, North American sales of the Volkswagen Eurovan ceased in 2003. Ford exited the segment in 2006 when the Ford Freestar was canceled, Chrysler discontinued its short-wheelbase minivans in 2007, and General Motors left the market segment in 2009 with the cancellation of the Chevrolet Uplander. However, Volkswagen marketed the Volkswagen Routan (a rebadged Chrysler RT-series minivan) between 2009 and 2013. In 2010, Ford started importing the commercial-oriented Ford Transit Connect Wagon from Turkey. A similar vehicle, the Mercedes-Benz Metris, entered the North American market in 2016. The Kia Sedona, which was introduced for the 2002 model year, is notable for being the first minivan from a South Korean manufacturer in the region. For 2007, Kia also introduced the three-row Kia Rondo compact MPV, which was prominently marketed as a crossover due to its small size and its use of hinged rear doors. Another compact MPV released to the market was the Mazda5 in 2004, a three-row vehicle with rear sliding doors. Mazda claimed the model "does not fit into any traditional (North American) segmentation." The Ford C-Max was released for 2013 as a hybrid electric and battery electric compact MPV. However, it did not offer third-row seating in North America.
Europe
In Europe, the classification is commonly known as "MPV", "people carrier", or officially M-segment, and includes van-based vehicles and smaller vehicles with two-row seating.
History
1980s
The 1984 Renault Espace was the first European-developed minivan designed primarily for passenger use (the earlier DKW and Volkswagen used their commercial van platforms in a minibus variant). Beginning development in the 1970s under the European subsidiaries of Chrysler, the Espace was intended as a successor to the Matra Rancho, leading to its use of front-hinged doors. While slow-selling at the time of its release, the Espace would become the most successful European-brand minivan.
Renault initially intended to market the Espace in North America through American Motors Corporation (AMC), but the 1987 sale of AMC to Chrysler canceled those plans. In the late 1980s, Chrysler and Ford commenced sales of American-designed minivans in Europe (categorized as full-size in the region), selling the Chrysler Voyager and Ford Aerostar. General Motors imported the Oldsmobile Silhouette (branded as the Pontiac Trans Sport), later marketing the American-produced Opel/Vauxhall Sintra.
1990s
In the 1990s, several joint ventures produced long-running minivan designs. In 1994, a badge-engineered series of Eurovans was introduced, produced by Sevel Nord and marketed by Citroën, Fiat, Lancia, and Peugeot. The Eurovans were built with two sliding doors; the gearshift was located on the dashboard to increase interior space, and a pedal-type handbrake was adopted. In 1995, Ford of Europe and Volkswagen entered a joint venture, producing the Ford Galaxy, SEAT Alhambra, and Volkswagen Sharan minivans, featuring front-hinged rear side doors. In 1996, Mercedes introduced the Mercedes-Benz V-Class as a standard panel van for cargo (called Vito) or with passenger accommodations substituted for part or all of the load area (called V-Class or Viano). In 1998, the Fiat Multipla was released. A two-row, six-seater MPV with a 3+3 seat configuration, borrowing its name from an older minivan, it is notable for its highly controversial design. Market reaction to these new full-size MPV models was mixed. Consumers perceived MPVs as large and truck-like, despite their footprints being similar to those of large sedans. Arguably, cultural attitudes toward vehicle size and high fuel prices were a factor. During 1996 and 1997, the Western European MPV market expanded from around 210,000 units to 350,000 units annually. However, the growth did not continue as expected, resulting in serious plant overcapacity. Renault set a new "compact MPV" standard with the Renault Scénic in 1996, which became popular. Based on the C-segment Mégane platform, it offered the same multi-use and flexibility aspects as the larger MPVs but with a much smaller footprint.
2000s
After the success of the Renault Scénic, other makers developed similar European-focused products, such as the Opel Zafira, which offered three-row seating, and the Citroën Xsara Picasso.
Asia
Japan
In Japan, the classification is defined by its three-row seating capacity. Before the birth of minivans with modern form factors, tall wagon-type vehicles with large seating capacity in Japan were known as light vans. They commonly adopted a mid-engine, cab-over, rear-wheel drive layout with a one-box form factor. Examples included the Toyota TownAce, Toyota HiAce, Nissan Vanette, Mitsubishi Delica and Mazda Bongo. These vehicles were based on commercial vehicles, which created a gap with sedans in ride quality and luxury. The Nissan Prairie, released in 1982, is considered the first Japanese compact minivan. Derived closely from a compact sedan, the Prairie was marketed as a "boxy sedan", configured with sliding doors, folding rear seats, and a lifting rear hatch. The Mitsubishi Chariot adopted nearly the same form factor, instead using wagon-style front-hinged doors. In 1990, Toyota introduced the Toyota Estima in Japan, which carried over the mid-engine configuration of the TownAce. Along with its highly rounded exterior, the Estima was distinguished by its nearly panoramic window glass.
The Estima was redesigned in 2000, adopting a front-wheel drive layout, and has been offered with a hybrid powertrain since 2001. In 2002, Toyota introduced the Toyota Alphard, which was developed as a luxury-oriented model. In 2020, Lexus introduced their first luxury minivan, the Lexus LM, produced with varying degrees of relation to the Toyota Alphard/Vellfire. The LM designation stands for "Luxury Mover". Nissan introduced the Nissan Serena in 1991 and the Nissan Elgrand in 1997. In 1995, Honda entered the minivan segment by introducing the Honda Odyssey. The Odyssey was designed with front-hinged doors and was derived from the Honda Accord. It came with advantages such as sedan-like driving dynamics and a lower floor to allow for easy access. In a design feature that would become widely adopted by other manufacturers, the Odyssey introduced a rear seat that folded flat into the floor (replacing a removable rear seat). The Odyssey evolved as a low-roof, estate-like minivan until 2013, when it adopted a high-roof body with rear sliding doors. Honda has also produced the Honda Stepwgn mid-size MPV since 1996, which is designed with a higher cabin and narrower width, and the Honda Stream since 2000 to slot below the Odyssey. In 2020, minivans made up 20.8% of total automobile sales in Japan, behind SUVs and compact hatchbacks, making Japan one of the largest minivan markets in the world.
South Korea
In South Korea, both the terms "minivan" and "MPV" are used. The Kia Carnival (also sold as the Kia Sedona) was introduced in 1998 with dual sliding doors. Sharing its configuration with the Honda Odyssey, the Hyundai Trajet was sold from 1999 to 2008. Introduced in 2004, the SsangYong Rodius was the highest-capacity minivan, seating up to 11 passengers. It was discontinued in 2019. Current minivans marketed in South Korea are the Kia Carnival and Hyundai Staria, along with imported options such as the Toyota Sienna (originally for North America) and later generations of the Honda Odyssey.
China
In 1999, Shanghai GM commenced production of the Buick GL8 minivan, derived from a minivan platform designed by GM in the United States. After two generations of production, the GL8 is today the only minivan still produced by General Motors or its joint ventures. It remains dominant in the high-end segment of the Chinese minivan market. Sales of minivans in China increased rapidly in 2015 and 2016, when the Chinese government lifted the one-child policy in favor of the two-child policy, which pushed customer preference toward three-row vehicles in anticipation of larger families. In 2016, 2,497,543 minivans were sold in China, a major increase from 2012, which recorded 936,232 sales. However, sales volume has shrunk ever since, with only 1,082,028 minivans sold in the domestic market in 2021 (4.1% of the total car market), around 720,000 of which were sold by domestic manufacturers. In August 2022, Zeekr introduced the Zeekr 009 electric minivan; deliveries of the 009 began in the first quarter of 2023 in China.
Indonesia
The MPV segment is the most popular passenger car segment in Indonesia, with a market share of 40 percent in 2021.
India
The category is commonly known as multi utility vehicle (MUV) or MPV. In fiscal year 2020, the sales volume of the segment totaled 283,583 vehicles, or 10.3% of the industry total.
Luxury MPV
Manufacturers such as Mercedes-Benz, Toyota, Lexus, Buick, Hongqi, Zeekr and Volvo have marketed upscale MPVs as luxury vehicles, aimed mainly at several Asian markets.
Luxury MPVs generally have three rows with six or seven seats; however, range-topping flagship models may also offer a two-row, four-seat option, which typically has more features than its cheaper counterparts. By the early 2020s, manufacturers had found additional strategies to improve these vehicles, such as new materials, new systems, and improved exteriors. Examples of luxury MPV models include the Mercedes-Benz V-Class, Lexus LM, Buick GL8, Hongqi HQ9, Toyota Alphard, Volvo EM90 and the Zeekr 009.
Size categories
Mini MPV
Mini MPV – an abbreviation for Mini Multi-Purpose Vehicle – is a vehicle size class for the smallest size of minivans (MPVs). The Mini MPV size class sits below the compact MPV size class, and the vehicles are often built on the platforms of B-segment hatchback models. Several minivans based on B-segment platforms have been marketed as 'leisure activity vehicles' in Europe. These include the Fiat Fiorino and Ford Transit Courier.
Compact MPV
Compact MPV – an abbreviation for Compact Multi-Purpose Vehicle – is a vehicle size class for the middle size of MPVs/minivans. The Compact MPV size class sits between the mini MPV and minivan size classes. Compact MPVs remain predominantly European, although they are also built and sold in many Latin American and Asian markets. As of 2016, the only compact MPV sold widely in the United States was the Ford C-Max.
Related categories
Leisure activity vehicle
A leisure activity vehicle (abbreviated LAV), also known as a van-based MPV and ludospace in French, is the passenger-oriented version of small commercial vans primarily marketed in Europe. One of the first LAVs was the 1977 Matra Rancho (among the first crossover SUVs and a precursor to the Renault Espace), with European manufacturers expanding the segment in the late 1990s, following the introduction of the Citroën Berlingo and Renault Kangoo. Leisure activity vehicles are typically derived from supermini or subcompact car platforms, differing from mini MPVs in body design. To maximize interior space, LAVs feature a taller roof, a more upright windshield, and a longer hood/bonnet, with either a liftgate or barn doors to access the boot. Marketed as an alternative to sedan-derived small family cars, LAVs have seating with a lower H-point than MPVs or minivans, offering two (or three) rows of seating. Though sharing underpinnings with superminis, subcompacts, and mini MPVs, the use of an extended wheelbase can make leisure activity vehicles longer than the cars from which they are derived. For example, the Fiat Doblò is one of the longest LAVs with a total length of , versus the of the Opel Meriva (a mini MPV) and the of the Peugeot 206 SW (a supermini).
Asian utility vehicle
The term Asian utility vehicle (abbreviated AUV) originates from the Philippines and describes basic and affordable vehicles, with either a large seating capacity or cargo space, designed to be sold in developing countries. These vehicles are usually available in a minivan-like wagon body style with a seating capacity of 7 to 16 passengers. They are usually based on a compact pickup truck with a body-on-frame chassis and rear-wheel drive to maximize load capacity and durability while maintaining low manufacturing costs. Until the 2000s, AUVs were popular in Southeast Asia, particularly in Indonesia, the Philippines, Taiwan, and some African markets. The first AUV was the Toyota Tamaraw/Kijang, introduced in the Philippines and Indonesia in 1975 as a pickup truck with an optional rear cabin.
In the 1990s, other vehicles such as the Isuzu Panther/Hi-Lander/Crosswind and Mitsubishi Freeca/Adventure/Kuda emerged in the AUV segment. The modern equivalent of the AUV is the Toyota Innova, an MPV that is the direct successor to the Kijang and which, in its first two generations, was built with body-on-frame construction. The vehicle's third generation switched to unibody construction.
Three-row SUV
With the decline of the minivan/MPV category in many regions, such as North America and Europe, in the mid-2010s, SUVs and crossovers with three rows of seating became popular alternatives. Compared to minivans, three-row SUVs lack sliding doors and generally offer less interior space, due to the higher priority placed on exterior styling and greater ground clearance.
Multiplication
Multiplication (often denoted by the cross symbol , by the mid-line dot operator , by juxtaposition, or, on computers, by an asterisk ) is one of the four elementary mathematical operations of arithmetic, the others being addition, subtraction, and division. The result of a multiplication operation is called a product. The multiplication of whole numbers may be thought of as repeated addition; that is, the multiplication of two numbers is equivalent to adding as many copies of one of them, the multiplicand, as the quantity of the other one, the multiplier; both numbers can be referred to as factors. For example, the expression , phrased as "3 times 4" or "3 multiplied by 4", can be evaluated by adding 3 copies of 4 together: Here, 3 (the multiplier) and 4 (the multiplicand) are the factors, and 12 is the product. One of the main properties of multiplication is the commutative property, which states in this case that adding 3 copies of 4 gives the same result as adding 4 copies of 3: Thus, the designation of multiplier and multiplicand does not affect the result of the multiplication. Systematic generalizations of this basic definition define the multiplication of integers (including negative numbers), rational numbers (fractions), and real numbers. Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have some given lengths. The area of a rectangle does not depend on which side is measured first, a consequence of the commutative property. The product of two measurements (or physical quantities) is a new type of measurement, usually with a derived unit. For example, multiplying the lengths (in meters or feet) of the two sides of a rectangle gives its area (in square meters or square feet). Such a product is the subject of dimensional analysis. The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Indeed, multiplication by 3, followed by division by 3, yields the original number. The division of a number other than 0 by itself equals 1. Several mathematical concepts expand upon the fundamental idea of multiplication. The product of a sequence, vector multiplication, complex numbers, and matrices are all examples where this can be seen. These more advanced constructs tend to affect the basic properties in their own ways, such as becoming noncommutative for matrices and some forms of vector multiplication, or changing the sign for complex numbers.
Notation
In arithmetic, multiplication is often written using the multiplication sign (either or ) between the terms (that is, in infix notation). For example, ("two times three equals six"). There are other mathematical notations for multiplication: To reduce confusion between the multiplication sign × and the common variable , multiplication is also denoted by dot signs, usually a middle-position dot (rarely a period): . The middle dot notation or dot operator, encoded in Unicode as , is now standard in the United States and other countries. When the dot operator character is not accessible, the interpunct (·) is used. In other countries that use a comma as a decimal mark, either the period or a middle dot is used for multiplication. Historically, in the United Kingdom and Ireland, the middle dot was sometimes used for the decimal point to prevent it from disappearing in the ruled line, and the period/full stop was used for multiplication.
However, since the Ministry of Technology ruled to use the period as the decimal point in 1968, and the International System of Units (SI) standard has since been widely adopted, this usage is now found only in the more traditional journals, such as The Lancet. In algebra, multiplication involving variables is often written as a juxtaposition (e.g., for times or for five times ), also called implied multiplication. The notation can also be used for quantities that are surrounded by parentheses (e.g., , or for five times two). This implicit usage of multiplication can cause ambiguity when the concatenated variables happen to match the name of another variable, when a variable name in front of a parenthesis can be confused with a function name, or in the correct determination of the order of operations. In vector multiplication, there is a distinction between the cross and the dot symbols. The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as its result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. In computer programming, the asterisk (as in 5*2) is still the most common notation. This is because most computers historically were limited to small character sets (such as ASCII and EBCDIC) that lacked a multiplication sign (such as ⋅ or ×), while the asterisk appeared on every keyboard. This usage originated in the FORTRAN programming language. The numbers to be multiplied are generally called the "factors" (as in factorization). The number to be multiplied is the "multiplicand", and the number by which it is multiplied is the "multiplier". Usually, the multiplier is placed first, and the multiplicand is placed second; however, sometimes the first factor is considered the multiplicand and the second the multiplier. Also, as the result of multiplication does not depend on the order of the factors, the distinction between "multiplicand" and "multiplier" is useful only at a very elementary level and in some multiplication algorithms, such as long multiplication. Therefore, in some sources, the term "multiplicand" is regarded as a synonym for "factor". In algebra, a number that is the multiplier of a variable or expression (e.g., the 3 in ) is called a coefficient. The result of a multiplication is called a product. When one factor is an integer, the product is a multiple of the other or of the product of the others. Thus, is a multiple of , as is . A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5 and is both a multiple of 3 and a multiple of 5.
Definitions
The product of two numbers or the multiplication between two numbers can be defined for common special cases: natural numbers, integers, rational numbers, real numbers, complex numbers, and quaternions.
Product of two natural numbers
The product of two natural numbers can be defined via repeated addition: m · n is the sum of m copies of n.
Product of two integers
An integer can be either zero, a nonzero natural number, or minus a nonzero natural number. The product of zero and another integer is always zero. The product of two nonzero integers is determined by the product of their positive amounts, combined with the sign derived from the following rule: + × + = +, + × − = −, − × + = −, − × − = +. (This rule is a consequence of the distributivity of multiplication over addition, and is not an additional rule.)
In words:
A positive number multiplied by a positive number is positive (product of natural numbers),
A positive number multiplied by a negative number is negative,
A negative number multiplied by a positive number is negative,
A negative number multiplied by a negative number is positive.
Product of two fractions
Two fractions can be multiplied by multiplying their numerators and denominators: a/b · c/d = (a · c)/(b · d), which is defined when b and d are nonzero.
Product of two real numbers
There are several equivalent ways to formally define the real numbers; see Construction of the real numbers. The definition of multiplication is a part of all these definitions. A fundamental aspect of these definitions is that every real number can be approximated to any accuracy by rational numbers. A standard way for expressing this is that every real number is the least upper bound of a set of rational numbers. In particular, every positive real number is the least upper bound of the truncations of its infinite decimal representation; for example, is the least upper bound of . A fundamental property of real numbers is that rational approximations are compatible with arithmetic operations, and, in particular, with multiplication. This means that, if and are positive real numbers such that and , then . In particular, the product of two positive real numbers is the least upper bound of the term-by-term products of the sequences of their decimal representations. As changing the signs transforms least upper bounds into greatest lower bounds, the simplest way to deal with a multiplication involving one or two negative numbers is to use the rule of signs described above in Product of two integers. The construction of the real numbers through Cauchy sequences is often preferred in order to avoid consideration of the four possible sign configurations.
Product of two complex numbers
Two complex numbers can be multiplied by the distributive law and the fact that i² = −1, as follows: (a + b·i)(c + d·i) = (a·c − b·d) + (a·d + b·c)·i. The geometric meaning of complex multiplication can be understood by rewriting complex numbers in polar coordinates, z = r(cos φ + i sin φ). Furthermore, applying the angle-addition formulas for sine and cosine, one obtains z1 · z2 = r1 · r2 · (cos(φ1 + φ2) + i sin(φ1 + φ2)). The geometric meaning is that the magnitudes are multiplied and the arguments are added.
Product of two quaternions
The product of two quaternions can be found in the article on quaternions. Note, in this case, that p · q and q · p are in general different.
Computation
Many common methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9). However, one method, the peasant multiplication algorithm, does not. The example below illustrates "long multiplication" (the "standard algorithm", "grade-school multiplication"):

      23958233
    ×     5830
    ——————————————
      00000000    ( = 23,958,233 ×     0)
     71874699     ( = 23,958,233 ×    30)
    191665864     ( = 23,958,233 ×   800)
  + 119791165     ( = 23,958,233 × 5,000)
    ——————————————
    139676498390  ( = 139,676,498,390)

In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:

    23958233 · 5830
    ———————————————
      119791165
       191665864
         71874699
          00000000
    ———————————————
      139676498390

Multiplying numbers to more than a couple of decimal places by hand is tedious and error-prone. Common logarithms were invented to simplify such calculations, since adding logarithms is equivalent to multiplying. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy.
Beginning in the early 20th century, mechanical calculators, such as the Marchant, automated multiplication of up to 10-digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand.
Historical algorithms
Methods of multiplication were documented in the writings of the ancient Egyptian and Chinese civilizations. The Ishango bone, dated to about 18,000 to 20,000 BC, may hint at a knowledge of multiplication in the Upper Paleolithic era in Central Africa, but this is speculative.
Egyptians
The Egyptian method of multiplication of integers and fractions, which is documented in the Rhind Mathematical Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 42, 84, and 168. The full product could then be found by adding the appropriate terms found in the doubling sequence: 13 × 21 = (1 + 4 + 8) × 21 = (1 × 21) + (4 × 21) + (8 × 21) = 21 + 84 + 168 = 273.
Babylonians
The Babylonians used a sexagesimal positional number system, analogous to the modern-day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n, 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table.
Chinese
In the mathematical text Zhoubi Suanjing, dated prior to 300 BC, and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed rod calculus involving place-value addition, subtraction, multiplication, and division. The Chinese were already using a decimal multiplication table by the end of the Warring States period.
Modern methods
The modern method of multiplication based on the Hindu–Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication, and division. Henry Burchard Fine, then a professor of mathematics at Princeton University, wrote the following: The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously. These place-value decimal arithmetic algorithms were introduced to Arab countries by Al Khwarizmi in the early 9th century and popularized in the Western world by Fibonacci in the 13th century.
Grid method
Grid method multiplication, or the box method, is used in primary schools in England and Wales and in some areas of the United States to help teach an understanding of how multiple-digit multiplication works. An example of multiplying 34 by 13 would be to lay the numbers out in a grid as follows:

     ×  |  30 |  4
     10 | 300 | 40
      3 |  90 | 12

and then add the entries.
Computer algorithms
The classical method of multiplying two n-digit numbers requires n² digit multiplications.
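To make that cost concrete, here is a minimal Python sketch of digit-by-digit long multiplication on decimal strings. It is an illustration of the quadratic behaviour just described, not an optimized routine, and the function name is our own.

    def long_multiply(a, b):
        # Grade-school long multiplication: every digit of a meets every
        # digit of b, so two n-digit numbers cost n*n single-digit products.
        result = [0] * (len(a) + len(b))
        for i, da in enumerate(reversed(a)):
            for j, db in enumerate(reversed(b)):
                result[i + j] += int(da) * int(db)
                result[i + j + 1] += result[i + j] // 10  # propagate the carry
                result[i + j] %= 10
        digits = ''.join(map(str, reversed(result))).lstrip('0')
        return digits or '0'

    print(long_multiply('23958233', '5830'))  # 139676498390, as computed above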
Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. Methods based on the discrete Fourier transform reduce the computational complexity to O(n log n log log n). In 2016, the log log n factor was replaced by a function that increases much more slowly, though still not constant. In March 2019, David Harvey and Joris van der Hoeven submitted a paper presenting an integer multiplication algorithm with a complexity of O(n log n). The algorithm, also based on the fast Fourier transform, is conjectured to be asymptotically optimal. The algorithm is not practically useful, as it only becomes faster for multiplying extremely large numbers (having more than 2^(1729^12) bits). Products of measurements One can only meaningfully add or subtract quantities of the same type, but quantities of different types can be multiplied or divided without problems. For example, four bags with three marbles each can be thought of as: [4 bags] × [3 marbles per bag] = 12 marbles. When two measurements are multiplied together, the product is of a type depending on the types of measurements. The general theory is given by dimensional analysis. This analysis is routinely applied in physics, but it also has applications in finance and other applied fields. A common example in physics is the fact that multiplying speed by time gives distance. For example: 50 kilometers per hour × 3 hours = 150 kilometers. In this case, the hour units cancel out, leaving the product with only kilometer units. Other examples of multiplication involving units include:

2.5 meters × 4.5 meters = 11.25 square meters
11 meters/second × 9 seconds = 99 meters
4.5 residents per house × 20 houses = 90 residents

Product of a sequence Capital pi notation The product of a sequence of factors can be written with the product symbol ∏, which derives from the capital letter Π (pi) in the Greek alphabet (much as the summation symbol ∑ is derived from the Greek letter Σ (sigma)). The meaning of this notation is given by ∏_{i=1}^{4} i = 1 · 2 · 3 · 4, which results in ∏_{i=1}^{4} i = 24. In such a notation, the variable i represents a varying integer, called the multiplication index, that runs from the lower value indicated in the subscript to the upper value given by the superscript. The product is obtained by multiplying together all factors obtained by substituting the multiplication index for an integer between the lower and the upper values (the bounds included) in the expression that follows the product operator. More generally, the notation ∏_{i=m}^{n} x_i = x_m · x_{m+1} · ... · x_n is defined, where m and n are integers or expressions that evaluate to integers. In the case where m = n, the value of the product is the same as that of the single factor x_m; if m > n, the product is an empty product whose value is 1—regardless of the expression for the factors. Properties of capital pi notation By definition, ∏_{i=1}^{n} x_i = x_1 · x_2 · ... · x_n. If all factors are identical, a product of n factors is equivalent to exponentiation: ∏_{i=1}^{n} x = x · x · ... · x = x^n. Associativity and commutativity of multiplication imply ∏_{i=1}^{n} (x_i · y_i) = (∏_{i=1}^{n} x_i)(∏_{i=1}^{n} y_i) and (∏_{i=1}^{n} x_i)^a = ∏_{i=1}^{n} x_i^a if a is a non-negative integer, or if all x_i are positive real numbers, and ∏_{i=1}^{n} x^{a_i} = x^{∑_{i=1}^{n} a_i} if all a_i are non-negative integers, or if x is a positive real number. Infinite products One may also consider products of infinitely many terms; these are called infinite products. Notationally, this consists in replacing n above by the infinity symbol ∞. The product of such an infinite sequence is defined as the limit of the product of the first n terms, as n grows without bound. That is, ∏_{i=m}^{∞} x_i = lim_{n→∞} ∏_{i=m}^{n} x_i. One can similarly replace m with negative infinity, and define ∏_{i=−∞}^{∞} x_i = (lim_{m→−∞} ∏_{i=m}^{0} x_i) · (lim_{n→∞} ∏_{i=1}^{n} x_i), provided both limits exist.
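As a small illustration of capital pi notation and the empty-product convention, here is a hedged Python sketch; `product` is a hypothetical helper written for this example, while `math.prod` is the standard-library function that plays the same role:

```python
import math

def product(factors):
    """Product of a finite sequence; the empty product is 1 by convention."""
    result = 1
    for x in factors:
        result *= x
    return result

assert product([]) == 1                 # empty product (m > n case)
assert product(range(1, 5)) == 24       # ∏_{i=1}^{4} i = 1·2·3·4
assert product([2] * 10) == 2 ** 10     # identical factors give a power
assert math.prod(range(1, 5)) == 24     # the standard-library equivalent
```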
Exponentiation When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2 × 2 × 2) is "two raised to the third power", and is denoted by 2³, a two with a superscript three. In this example, the number two is the base, and three is the exponent. In general, the exponent (or superscript) indicates how many times the base appears in the expression, so that the expression aⁿ indicates that n copies of the base a are to be multiplied together. This notation can be used whenever multiplication is known to be power associative. Properties For real and complex numbers, which include, for example, natural numbers, integers, and fractions, multiplication has certain properties: Commutative property The order in which two numbers are multiplied does not matter: x · y = y · x. Associative property Expressions solely involving multiplication or addition are invariant with respect to the order of operations: (x · y) · z = x · (y · z). Distributive property Holds with respect to multiplication over addition. This identity is of prime importance in simplifying algebraic expressions: x · (y + z) = x · y + x · z. Identity element The multiplicative identity is 1; anything multiplied by 1 is itself. This feature of 1 is known as the identity property: x · 1 = x. Property of 0 Any number multiplied by 0 is 0. This is known as the zero property of multiplication: x · 0 = 0. Negation −1 times any number is equal to the additive inverse of that number: (−1) · x = −x, where −1 times −1 is 1: (−1) · (−1) = 1. Inverse element Every number x, except 0, has a multiplicative inverse, 1/x, such that x · (1/x) = 1. Order preservation Multiplication by a positive number preserves the order: for a > 0, if b > c, then a · b > a · c. Multiplication by a negative number reverses the order: for a < 0, if b > c, then a · b < a · c. The complex numbers do not have an ordering that is compatible with both addition and multiplication. Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions. Hurwitz's theorem shows that for the hypercomplex numbers of dimension 8 or greater, including the octonions, sedenions, and trigintaduonions, multiplication is generally not associative. Axioms In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication: x × 0 = 0 and x × S(y) = (x × y) + x. Here S(y) represents the successor of y; i.e., the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, including induction. For instance, S(0), denoted by 1, is a multiplicative identity because x × 1 = x × S(0) = (x × 0) + x = 0 + x = x. The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x, y) as equivalent to x − y when x and y are treated as integers. Thus both (0, 1) and (1, 2) are equivalent to −1. The multiplication axiom for integers defined this way is (x_p, x_m) × (y_p, y_m) = (x_p × y_p + x_m × y_m, x_p × y_m + x_m × y_p), where the subscripts p and m denote the positive and negative parts of each pair. The rule that −1 × −1 = 1 can then be deduced from (0, 1) × (0, 1) = (0 × 0 + 1 × 1, 0 × 1 + 1 × 0) = (1, 0). Multiplication is extended in a similar way to rational numbers and then to real numbers. Multiplication with set theory The product of non-negative integers can be defined with set theory using cardinal numbers or the Peano axioms. See below how to extend this to multiplying arbitrary integers, and then arbitrary rational numbers. The product of real numbers is defined in terms of products of rational numbers; see construction of the real numbers.
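The two Peano multiplication axioms quoted above translate directly into a recursive program. In the minimal sketch below, ordinary Python integers stand in for Peano numerals, so the successor S(y) is represented by y + 1 and the recursion steps down through y − 1; the function name is illustrative:

```python
def peano_mult(x: int, y: int) -> int:
    """Multiplication from the two Peano axioms:
    x * 0 = 0 and x * S(y) = (x * y) + x."""
    if y == 0:                        # axiom 1: x * 0 = 0
        return 0
    return peano_mult(x, y - 1) + x   # axiom 2: x * S(y) = (x * y) + x

assert peano_mult(7, 1) == 7    # S(0) = 1 acts as a multiplicative identity
assert peano_mult(3, 4) == 12
```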
Multiplication in group theory There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses. A simple example is the set of non-zero rational numbers. Here the identity is 1, as opposed to groups under addition, where the identity is typically 0. Note that with the rationals, zero must be excluded because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example the group is abelian, but that is not always the case. To see this, consider the set of invertible square matrices of a given dimension over a given field. Here, it is straightforward to verify closure, associativity, and inclusion of identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, which shows that this group is non-abelian. Another fact worth noticing is that the integers under multiplication do not form a group—even if zero is excluded. This is easily seen by the nonexistence of an inverse for all elements other than 1 and −1. Multiplication in group theory is typically notated either by a dot or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated as a · b or ab. When referring to a group via the indication of the set and operation, the dot is used. For example, our first example could be indicated by (Q ∖ {0}, ·). Multiplication of different kinds of numbers Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions). Integers N × M is the sum of N copies of M when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by N × (−M) = (−N) × M = −(N × M) and (−N) × (−M) = N × M. The same sign rules apply to rational and real numbers. Rational numbers Generalization to fractions is by multiplying the numerators and denominators, respectively: (A/B) × (C/D) = (A × C)/(B × D). This gives the area of a rectangle A/B high and C/D wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers. Real numbers Real numbers and their products can be defined in terms of sequences of rational numbers. Complex numbers Considering complex numbers z₁ and z₂ as ordered pairs of real numbers (a₁, b₁) and (a₂, b₂), the product is z₁ × z₂ = (a₁ × a₂ − b₁ × b₂, a₁ × b₂ + b₁ × a₂). This is the same as for reals when the imaginary parts b₁ and b₂ are zero. Equivalently, denoting √(−1) as i, z₁ × z₂ = (a₁ + b₁i)(a₂ + b₂i) = (a₁a₂ − b₁b₂) + (a₁b₂ + b₁a₂)i. Alternatively, in trigonometric form, if z₁ = r₁(cos φ₁ + i sin φ₁) and z₂ = r₂(cos φ₂ + i sin φ₂), then z₁z₂ = r₁r₂(cos(φ₁ + φ₂) + i sin(φ₁ + φ₂)). Further generalizations See Multiplication in group theory, above, and multiplicative group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring that is not any of the above number systems is a polynomial ring (polynomials can be added and multiplied, but polynomials are not numbers in any usual sense). Division Often division, x/y, is the same as multiplication by an inverse, x · (1/y). Multiplication for some types of "numbers" may have corresponding division, without inverses; in an integral domain x may have no inverse "1/x" but x/y may be defined.
In a division ring there are inverses, but x/y may be ambiguous in non-commutative rings, since x · (1/y) need not be the same as (1/y) · x.
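The ambiguity just described is easy to exhibit with invertible matrices, which form a non-commutative ring. A small sketch, assuming NumPy for the matrix inverse; the particular matrices are arbitrary illustrative values:

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([[0.0, 1.0], [1.0, 1.0]])

right = x @ np.linalg.inv(y)   # x · y⁻¹
left = np.linalg.inv(y) @ x    # y⁻¹ · x

# The two candidate meanings of "x / y" differ in a non-commutative ring:
assert not np.allclose(right, left)
```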
Mathematics
Basics
null
20857
https://en.wikipedia.org/wiki/Masonry
Masonry
Masonry is the craft of building a structure with units of brick, stone, or similar material, which are often laid in and bound together by mortar. The term masonry can also refer to the building units (stone, brick, etc.) themselves. The common materials of masonry construction are bricks and building stone, rocks such as marble, granite, and limestone, cast stone, concrete blocks, glass blocks, and adobe. Masonry is generally a highly durable form of construction. However, the materials used, the quality of the mortar and workmanship, and the pattern in which the units are assembled can substantially affect the durability of the overall masonry construction. A person who constructs masonry is called a mason or bricklayer. These are both classified as construction trades. History Masonry is one of the oldest building crafts in the world. The Egyptian pyramids, Roman aqueducts, and medieval cathedrals are all examples of masonry construction. Early structures used the weight of the masonry itself to stabilize the structure against lateral movements. The types and techniques of masonry used evolved with architectural needs and cultural norms. Since the mid-20th century, masonry has often featured steel-reinforced elements to help carry the tension force present in modern thin, light, tall building systems. Applications Masonry has both structural and non-structural applications. Structural applications include walls, columns, beams, foundations, load-bearing arches, and others. On the other hand, masonry is also used in non-structural applications such as fireplaces, chimneys, and veneer systems. Brick and concrete block are the most common types of masonry in use in industrialized nations and may be either load-bearing or non-load-bearing. Concrete blocks, especially those with hollow cores, offer various possibilities in masonry construction. They generally provide great compressive strength and are best suited to structures with light transverse loading when the cores remain unfilled. Filling some or all of the cores with concrete or concrete with steel reinforcement (typically rebar) offers much greater tensile and lateral strength to structures. Advantages The use of materials such as bricks and stones can increase the thermal mass of a building. Masonry is a non-combustible product and can protect the building from fire. Masonry walls are more resistant to projectiles, such as debris from hurricanes or tornadoes. Disadvantages Extreme weather, under certain circumstances, can cause degradation of masonry due to expansion and contraction forces associated with freeze-thaw cycles. Masonry tends to be heavy and must be built on stable ground made of either undisturbed or mechanically compacted soil, otherwise cracking may occur. Unlike concrete, masonry construction does not lend itself well to mechanization, and requires more skilled labor. Structural limitations One problem with masonry walls is that they rely mainly on their weight to keep them in place; each block or brick is only loosely connected to the next via a thin layer of mortar. This is why they do not perform well in earthquakes, when entire buildings are shaken horizontally. Many collapses during earthquakes occur in buildings that have load-bearing masonry walls. In addition, heavier masonry buildings suffer more damage.
Dry set masonry The strength of a masonry wall is not entirely dependent on the bond between the building material and the mortar; the friction between the interlocking blocks of masonry is often strong enough to provide a great deal of strength on its own. The blocks sometimes have grooves or other surface features added to enhance this interlocking, and some dry set masonry structures forgo mortar altogether. Stonework Stone blocks used in masonry can be dressed or rough, though in both examples corners, door and window jambs, and similar areas are usually dressed. Stonemasonry utilizing dressed stones is known as ashlar masonry, whereas masonry using irregularly shaped stones is known as rubble masonry. Both rubble and ashlar masonry can be laid in coursed rows of even height through the careful selection or cutting of stones, but a great deal of stone masonry is uncoursed. Slipform stonemasonry produces a hybrid wall of reinforced concrete with a rubble stone face. Natural stone veneers over CMU, cast-in-place, or tilt-up concrete walls are widely used to give the appearance of stone masonry. Sometimes river rock of smooth oval-shaped stones is used as a veneer. This type of material is not favored for solid masonry as it requires a great amount of mortar and can lack intrinsic structural strength. Manufactured-stone, or cultured stone, veneers are popular alternatives to natural stones. Manufactured-stone veneers are typically made from concrete. Natural stones from quarries around the world are sampled and recreated using molds, aggregate, and colorfast pigments. To the casual observer there may be no visual difference between veneers of natural and manufactured stone. Brick Solid brickwork is made of two or more wythes of bricks with the units running horizontally (called stretcher bricks) bound together with bricks running transverse to the wall (called "header" bricks). Each row of bricks is known as a course. The pattern of headers and stretchers employed gives rise to different 'bonds' such as the common bond (with every sixth course composed of headers), the English bond, and the Flemish bond (with alternating stretcher and header bricks present on every course). Bonds can differ in strength and in insulating ability. Vertically staggered bonds tend to be somewhat stronger and less prone to major cracking than a non-staggered bond. Uniformity and rusticity The wide selection of brick styles and types generally available in industrialized nations allow much variety in the appearance of the final product. In buildings built during the 1950s-1970s, a high degree of uniformity of brick and accuracy in masonry was typical. In the period since then this style was thought to be too sterile, so attempts were made to emulate older, rougher work. Some brick surfaces are made to look particularly rustic by including burnt bricks, which have a darker color or an irregular shape. Others may use antique salvage bricks, or new bricks may be artificially aged by applying various surface treatments, such as tumbling. The attempts at rusticity of the late 20th century have been carried forward by masons specializing in a free, artistic style, where the courses are intentionally not straight, instead weaving to form more organic impressions. Serpentine masonry A crinkle-crankle wall is a brick wall that follows a serpentine path, rather than a straight line. 
This type of wall is more resistant to toppling than a straight wall; so much so that it may be made of a single wythe of unreinforced brick and so despite its longer length may be more economical than a straight wall. Concrete block Blocks of cinder concrete (cinder blocks or breezeblocks), ordinary concrete (concrete blocks), or hollow tile are generically known as Concrete Masonry Units (CMUs). They usually are much larger than ordinary bricks and so are much faster to lay for a wall of a given size. Furthermore, cinder and concrete blocks typically have much lower water absorption rates than brick. They often are used as the structural core for veneered brick masonry or are used alone for the walls of factories, garages, and other industrial-style buildings where such appearance is acceptable or desirable. Such blocks often receive a stucco surface for decoration. Surface-bonding cement, which contains synthetic fibers for reinforcement, is sometimes used in this application and can impart extra strength to a block wall. Surface-bonding cement is often pre-colored and can be stained or painted thus resulting in a finished stucco-like surface. The primary structural advantage of concrete blocks in comparison to smaller clay-based bricks is that a CMU wall can be reinforced by filling the block voids with concrete with or without steel rebar. Generally, certain voids are designated for filling and reinforcement, particularly at corners, wall-ends, and openings while other voids are left empty. This increases wall strength and stability more economically than filling and reinforcing all voids. Typically, structures made of CMUs will have the top course of blocks in the walls filled with concrete and tied together with steel reinforcement to form a bond beam. Bond beams are often a requirement of modern building codes and controls. Another type of steel reinforcement referred to as ladder-reinforcement, can also be embedded in horizontal mortar joints of concrete block walls. The introduction of steel reinforcement generally results in a CMU wall having much greater lateral and tensile strength than unreinforced walls. "Architectural masonry is the evolvement of standard concrete masonry blocks into aesthetically pleasing concrete masonry units (CMUs)". CMUs can be manufactured to provide a variety of surface appearances. They can be colored during manufacturing or stained or painted after installation. They can be split as part of the manufacturing process, giving the blocks a rough face replicating the appearance of natural stone, such as brownstone. CMUs may also be scored, ribbed, sandblasted, polished, striated (raked or brushed), include decorative aggregates, be allowed to slump in a controlled fashion during curing, or include several of these techniques in their manufacture to provide a decorative appearance. "Glazed concrete masonry units are manufactured by bonding a permanent colored facing (typically composed of polyester resins, silica sand and various other chemicals) to a concrete masonry unit, providing a smooth impervious surface." Glass block or glass brick are blocks made from glass and provide a translucent to clear vision through the block. Veneer masonry A masonry veneer wall consists of masonry units, usually clay-based bricks, installed on one or both sides of a structurally independent wall usually constructed of wood or masonry. In this context, the brick masonry is primarily decorative, not structural. 
The brick veneer is generally connected to the structural wall by brick ties (metal strips that are attached to the structural wall, as well as the mortar joints of the brick veneer). There is typically an air gap between the brick veneer and the structural wall. As clay-based brick is usually not completely waterproof, the structural wall will often have a water-resistant surface (usually tar paper) and weep holes can be left at the base of the brick veneer to drain moisture that accumulates inside the air gap. Concrete blocks, real and cultured stones, and veneer adobe are sometimes used in a very similar veneer fashion. Most insulated buildings that use concrete block, brick, adobe, stone, veneers or some combination thereof feature interior insulation in the form of fiberglass batts between wooden wall studs or in the form of rigid insulation boards covered with plaster or drywall. In most climates this insulation is much more effective on the exterior of the wall, allowing the building interior to take advantage of the aforementioned thermal mass of the masonry. This technique does, however, require some sort of weather-resistant exterior surface over the insulation and, consequently, is generally more expensive. Gabions Gabions are baskets, usually now of zinc-protected steel (galvanized steel), that are filled with fractured stone of medium size. These will act as a single unit and are stacked with setbacks to form a revetment or retaining wall. They have the advantage of being well drained, flexible, and resistant to flood, water flow from above, frost damage, and soil flow. Their expected useful life is only as long as the wire they are composed of, and if used in severe climates (such as shore-side in a salt-water environment) they must be made of appropriate corrosion-resistant wire. Most modern gabions are rectangular. Earlier gabions were often cylindrical wicker baskets, open at both ends, used usually for temporary, often military, construction. Similar work can be done with finer aggregates using cellular confinement. Passive fire protection (PFP) Masonry walls have an endothermic effect from their hydrates: chemically bound water, unbound moisture from the concrete block, and from the poured concrete if the hollow cores inside the blocks are filled. Masonry can withstand high temperatures, and it can withstand direct exposure to fire for up to 4 hours. In addition to that, concrete masonry keeps fires contained to their room of origin 93% of the time. For those reasons, concrete and masonry units hold the highest flame spread index classification, Class A. Fire cuts can be used to increase safety and reduce fire damage to masonry buildings. Mechanical modeling of masonry structures From the point of view of material modeling, masonry is a special material of extreme mechanical properties (with a very high ratio between strength in compression and in tension), so that the applied loads do not diffuse as they do in elastic bodies, but tend to percolate along lines of high stiffness.
Technology
Materials
null
20861
https://en.wikipedia.org/wiki/Cavity%20magnetron
Cavity magnetron
The cavity magnetron is a high-power vacuum tube used in early radar systems and subsequently in microwave ovens and in linear particle accelerators. A cavity magnetron generates microwaves using the interaction of a stream of electrons with a magnetic field, while moving past a series of cavity resonators, which are small, open cavities in a metal block. Electrons pass by the cavities and cause microwaves to oscillate within, similar to the functioning of a whistle producing a tone when excited by an air stream blown past its opening. The resonant frequency of the arrangement is determined by the cavities' physical dimensions. Unlike other vacuum tubes, such as a klystron or a traveling-wave tube (TWT), the magnetron cannot function as an amplifier for increasing the intensity of an applied microwave signal; the magnetron serves solely as an electronic oscillator generating a microwave signal from direct current electricity supplied to the vacuum tube. The use of magnetic fields as a means to control the flow of an electric current was spurred by the invention of the Audion by Lee de Forest in 1906. Albert Hull of General Electric Research Laboratory, USA, began development of magnetrons to avoid de Forest's patents, but these were never completely successful. Other experimenters picked up on Hull's work and a key advance, the use of two cathodes, was introduced by Habann in Germany in 1924. Further research was limited until Okabe's 1929 Japanese paper noting the production of centimeter-wavelength signals, which led to worldwide interest. The development of magnetrons with multiple cathodes was proposed by A. L. Samuel of Bell Telephone Laboratories in 1934, leading to designs by Postumus in 1934 and Hans Hollmann in 1935. Production was taken up by Philips, General Electric Company (GEC), Telefunken and others, limited to perhaps 10 W output. By this time the klystron was producing more power and the magnetron was not widely used, although a 300 W device was built by Aleksereff and Malearoff in the USSR in 1936 (published in 1940). The cavity magnetron was a radical improvement introduced by John Randall and Harry Boot at the University of Birmingham, England in 1940. Their first working example produced hundreds of watts at 10 cm wavelength, an unprecedented achievement. Within weeks, engineers at GEC had improved this to well over a kilowatt (kW), and within months 25 kW, over 100 kW by 1941 and pushing towards a megawatt by 1943. The high power pulses were generated from a device the size of a small book and transmitted from an antenna only centimeters long, reducing the size of practical radar systems by orders of magnitude. New radars appeared for night-fighters, anti-submarine aircraft and even the smallest escort ships, and from that point on the Allies of World War II held a lead in radar that their counterparts in Germany and Japan were never able to close. By the end of the war, practically every Allied radar was based on the magnetron. The magnetron continued to be used in radar in the post-war period but fell from favour in the 1960s as high-power klystrons and traveling-wave tubes emerged. A key characteristic of the magnetron is that its output signal changes from pulse to pulse, both in frequency and phase. This renders it less suitable for pulse-to-pulse comparisons for performing moving target indication and removing "clutter" from the radar display. 
The magnetron remains in use in some radar systems, but has become much more common as a low-cost source for microwave ovens. In this form, over one billion magnetrons are in use today. Construction and operation Conventional tube design In a conventional electron tube (vacuum tube), electrons are emitted from a negatively charged, heated component called the cathode and are attracted to a positively charged component called the anode. The components are normally arranged concentrically, placed within a tubular-shaped container from which all air has been evacuated, so that the electrons can move freely (hence the name "vacuum" tubes, called "valves" in British English). If a third electrode (called a control grid) is inserted between the cathode and the anode, the flow of electrons between the cathode and anode can be regulated by varying the voltage on this third electrode. This allows the resulting electron tube (called a "triode" because it now has three electrodes) to function as an amplifier because small variations in the electric charge applied to the control grid will result in identical variations in the much larger current of electrons flowing between the cathode and anode. Hull or single-anode magnetron The idea of using a grid for control was invented by Philipp Lenard, who received the Nobel Prize for Physics in 1905. In the USA it was later patented by Lee de Forest, resulting in considerable research into alternate tube designs that would avoid his patents. One concept used a magnetic field instead of an electrical charge to control current flow, leading to the development of the magnetron tube. In this design, the tube was made with two electrodes, typically with the cathode in the form of a metal rod in the center, and the anode as a cylinder around it. The tube was placed between the poles of a horseshoe magnet arranged such that the magnetic field was aligned parallel to the axis of the electrodes. With no magnetic field present, the tube operates as a diode, with electrons flowing directly from the cathode to the anode. In the presence of the magnetic field, the electrons will experience a force at right angles to their direction of motion (the Lorentz force). In this case, the electrons follow a curved path between the cathode and anode. The curvature of the path can be controlled by varying either the magnetic field using an electromagnet, or by changing the electrical potential between the electrodes. At very high magnetic field settings the electrons are forced back onto the cathode, preventing current flow. At the opposite extreme, with no field, the electrons are free to flow straight from the cathode to the anode. There is a point between the two extremes, the critical value or Hull cut-off magnetic field (and cut-off voltage), where the electrons just reach the anode. At fields around this point, the device operates similarly to a triode. However, magnetic control, due to hysteresis and other effects, results in a slower and less faithful response to control current than electrostatic control using a control grid in a conventional triode (not to mention greater weight and complexity), so magnetrons saw limited use in conventional electronic designs. It was noticed that when the magnetron was operating at the critical value, it would emit energy in the radio frequency spectrum.
This occurs because a few of the electrons, instead of reaching the anode, continue to circle in the space between the cathode and the anode. Due to an effect now known as cyclotron radiation, these electrons radiate radio frequency energy. The effect is not very efficient. Eventually the electrons hit one of the electrodes, so the number in the circulating state at any given time is a small percentage of the overall current. It was also noticed that the frequency of the radiation depends on the size of the tube, and even early examples were built that produced signals in the microwave regime. Early conventional tube systems were limited to the high frequency bands, and although very high frequency systems became widely available in the late 1930s, the ultra high frequency and microwave bands were well beyond the ability of conventional circuits. The magnetron was one of the few devices able to generate signals in the microwave band and it was the only one that was able to produce high power at centimeter wavelengths. Split-anode magnetron The original magnetron was very difficult to keep operating at the critical value, and even then the number of electrons in the circling state at any time was fairly low. This meant that it produced very low-power signals. Nevertheless, as one of the few devices known to create microwaves, interest in the device and potential improvements was widespread. The first major improvement was the split-anode magnetron, also known as a negative-resistance magnetron. As the name implies, this design used an anode that was split in two—one at each end of the tube—creating two half-cylinders. When both were charged to the same voltage the system worked like the original model. But by slightly altering the voltage of the two plates, the electrons' trajectory could be modified so that they would naturally travel towards the lower voltage side. The plates were connected to an oscillator that reversed the relative voltage of the two plates at a given frequency. At any given instant, the electron will naturally be pushed towards the lower-voltage side of the tube. The electron will then oscillate back and forth as the voltage changes. At the same time, a strong magnetic field is applied, stronger than the critical value in the original design. This would normally cause the electron to circle back to the cathode, but due to the oscillating electrical field, the electron instead follows a looping path that continues toward the anodes. Since all of the electrons in the flow experienced this looping motion, the amount of RF energy being radiated was greatly improved. And as the motion occurred at any field level beyond the critical value, it was no longer necessary to carefully tune the fields and voltages, and the overall stability of the device was greatly improved. Unfortunately, the higher field also meant that electrons often circled back to the cathode, depositing their energy on it and causing it to heat up. As this normally causes more electrons to be released, it could sometimes lead to a runaway effect, damaging the device. Cavity magnetron The great advance in magnetron design was the resonant cavity magnetron or electron-resonance magnetron, which works on entirely different principles. In this design the oscillation is created by the physical shape of the anode, rather than external circuits or fields. Mechanically, the cavity magnetron consists of a large, solid cylinder of metal with a hole drilled through the centre of the circular face. 
A wire acting as the cathode is run down the center of this hole, and the metal block itself forms the anode. Around this hole, known as the "interaction space", are a number of similar holes ("resonators") drilled parallel to the interaction space, connected to the interaction space by a short channel. The resulting block looks something like the cylinder on a revolver, with a somewhat larger central hole. Early models were cut using Colt pistol jigs. Remembering that in an AC circuit the electrons travel along the surface, not the core, of the conductor, the parallel sides of the slot act as a capacitor while the round holes form an inductor: an LC circuit made of solid copper, with the resonant frequency defined entirely by its dimensions. The magnetic field is set to a value well below the critical value, so the electrons follow curved paths towards the anode. When they strike the anode, they cause it to become negatively charged in that region. As this process is random, some areas will become more or less charged than the areas around them. The anode is constructed of a highly conductive material, almost always copper, so these differences in voltage cause currents to appear to even them out. Since the current has to flow around the outside of the cavity, this process takes time. During that time additional electrons will avoid the hot spots and be deposited further along the anode, as the additional current flowing around it arrives too. This causes an oscillating current to form as the current tries to equalize one spot, then another. The oscillating currents flowing around the cavities, and their effect on the electron flow within the tube, cause large amounts of microwave radiofrequency energy to be generated in the cavities. The cavities are open on one end, so the entire mechanism forms a single, larger, microwave oscillator. A "tap", normally a wire formed into a loop, extracts microwave energy from one of the cavities. In some systems the tap wire is replaced by an open hole, which allows the microwaves to flow into a waveguide. As the oscillation takes some time to set up, and is inherently random at the start, subsequent startups will have different output parameters. Phase is almost never preserved, which makes the magnetron difficult to use in phased array systems. Frequency also drifts from pulse to pulse, a more difficult problem for a wider array of radar systems. Neither of these presents a problem for continuous-wave radars or for microwave ovens.
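Since each hole-and-slot resonator behaves as the solid-copper LC circuit described above, its natural frequency can be estimated from the standard relation f = 1/(2π√(L·C)). In the sketch below the inductance and capacitance are invented round numbers chosen only to land in the microwave range — not measurements of any real magnetron:

```python
import math

def lc_resonant_frequency(inductance: float, capacitance: float) -> float:
    """Resonant frequency in Hz of an ideal LC circuit: f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance * capacitance))

# Illustrative values: a fraction of a nanohenry against a fraction of a picofarad.
print(f"{lc_resonant_frequency(0.5e-9, 0.8e-12) / 1e9:.1f} GHz")  # ~8.0 GHz
```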
A portion of the radio frequency energy is extracted by a short coupling loop that is connected to a waveguide (a metal tube, usually of rectangular cross section). The waveguide directs the extracted RF energy to the load, which may be a cooking chamber in a microwave oven or a high-gain antenna in the case of radar. The size of the cavities determines the resonant frequency, and thereby the frequency of the emitted microwaves. However, the frequency is not precisely controllable. The operating frequency varies with changes in load impedance, with changes in the supply current, and with the temperature of the tube. This is not a problem in uses such as heating, or in some forms of radar where the receiver can be synchronized with an imprecise magnetron frequency. Where precise frequencies are needed, other devices, such as the klystron, are used. The magnetron is a self-oscillating device requiring no external elements other than a power supply. A well-defined threshold anode voltage must be applied before oscillation will build up; this voltage is a function of the dimensions of the resonant cavity and the applied magnetic field. In pulsed applications there is a delay of several cycles before the oscillator achieves full peak power, and the build-up of anode voltage must be coordinated with the build-up of oscillator output. Where there are an even number of cavities, two concentric rings can connect alternate cavity walls to prevent inefficient modes of oscillation. This is called pi-strapping because the two straps lock the phase difference between adjacent cavities at π radians (180°). The modern magnetron is a fairly efficient device. In a microwave oven, for instance, a 1.1-kilowatt input will generally create about 700 watts of microwave power, an efficiency of around 65%. (The high voltage and the properties of the cathode determine the power of a magnetron.) Large S-band magnetrons can produce up to 2.5 megawatts peak power with an average power of 3.75 kW. Some large magnetrons are water cooled. The magnetron remains in widespread use in roles which require high power, but where precise control over frequency and phase is unimportant.
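The interplay of anode voltage and magnetic field described above can be given a rough scale. The sketch below assumes the textbook Hull cut-off relation for a cylindrical diode with a thin central cathode, B = √(8·m·V/e)/r_a — a simplification of a real multi-cavity tube's geometry — and uses purely illustrative numbers:

```python
import math

E_CHARGE = 1.602e-19   # electron charge, C
E_MASS = 9.109e-31     # electron mass, kg

def hull_cutoff_field(anode_voltage: float, anode_radius: float) -> float:
    """Field (teslas) above which electrons just fail to reach the anode,
    assuming a cylindrical geometry with a thin central cathode."""
    return math.sqrt(8 * E_MASS * anode_voltage / E_CHARGE) / anode_radius

# Illustrative numbers only: 1 kV across a 5 mm anode radius.
print(f"{hull_cutoff_field(1000.0, 0.005) * 1000:.1f} mT")  # ~42.7 mT
```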
In some applications, for example, a marine radar mounted on a recreational vessel, a radar with a magnetron output of 2 to 4 kilowatts is often found mounted very near an area occupied by crew or passengers. In practical use these factors have been overcome, or merely accepted, and there are today thousands of magnetron aviation and marine radar units in service. Recent advances in aviation weather-avoidance radar and in marine radar have successfully replaced the magnetron with microwave semiconductor oscillators, which have a narrower output frequency range. These allow a narrower receiver bandwidth to be used, and the higher signal-to-noise ratio in turn allows a lower transmitter power, reducing exposure to EMR. Heating In microwave ovens, the waveguide leads to a radio-frequency-transparent port into the cooking chamber. As the fixed dimensions of the chamber and its physical closeness to the magnetron would normally create standing wave patterns in the chamber, the pattern is randomized by a motorized fan-like "mode stirrer" in the waveguide (more often in commercial ovens), or by a turntable that rotates the food (most common in consumer ovens). An early example of this application was when British scientists in 1954 used a microwave oven to resurrect cryogenically frozen hamsters. Lighting In microwave-excited lighting systems, such as a sulfur lamp, a magnetron provides the microwave field that is passed through a waveguide to the lighting cavity containing the light-emitting substance (e.g., sulfur, metal halides, etc.). Although efficient, these lamps are much more complex than other methods of lighting and therefore not commonly used. More modern variants use HEMTs or GaN-on-SiC power semiconductor devices instead of magnetrons to generate the microwaves, which are substantially less complex and can be adjusted to maximize light output using a PID controller.
Žáček, a professor at Prague's Charles University, published first; however, he published in a journal with a small circulation and thus attracted little attention. Habann, a student at the University of Jena, investigated the magnetron for his doctoral dissertation of 1924. Throughout the 1920s, Hull and other researchers around the world worked to develop the magnetron. Most of these early magnetrons were glass vacuum tubes with multiple anodes. However, the two-pole magnetron, also known as a split-anode magnetron, had relatively low efficiency. While radar was being developed during World War II, there arose an urgent need for a high-power microwave generator that worked at shorter wavelengths, around 10 cm (3 GHz), rather than the 50 to 150 cm (200 MHz) that was available from tube-based generators of the time. It was known that a multi-cavity resonant magnetron had been developed and patented in 1935 by Hans Hollmann in Berlin. However, the German military considered the frequency drift of Hollmann's device to be undesirable, and based their radar systems on the klystron instead. But klystrons could not at that time achieve the high power output that magnetrons eventually reached. This was one reason that German night fighter radars, which never strayed beyond the low-UHF band to start with for front-line aircraft, were not a match for their British counterparts. Likewise, in the UK, Albert Beaumont Wood proposed in 1937 a system with "six or eight small holes" drilled in a metal block, differing from the later production designs only in the aspects of vacuum sealing. However, his idea was rejected by the Navy, who said their valve department was far too busy to consider it. In 1940, at the University of Birmingham in the UK, John Randall and Harry Boot produced a working prototype of a cavity magnetron that produced about 400 W. Within a week this had improved to 1 kW, and within the next few months, with the addition of water cooling and many detail changes, this had improved to 10 and then 25 kW. To deal with its drifting frequency, they sampled the output signal and synchronized their receiver to whatever frequency was actually being generated. In 1941, the problem of frequency instability was solved by James Sayers coupling ("strapping") alternate cavities within the magnetron, which reduced the instability by a factor of 5–6. (For an overview of early magnetron designs, including that of Boot and Randall, see .) GEC at Wembley made 12 prototype cavity magnetrons in August 1940, and No 12 was sent to America with Bowen on the Tizard Mission, where it was shown on 19 September 1940 in Alfred Loomis' apartment. The American NDRC Microwave Committee was stunned at the power level produced. However, Bell Labs' director was upset when the sample was X-rayed and found to have eight holes rather than the six holes shown on the GEC plans. When GEC's vacuum tube expert Dr Eric Megaw was contacted (via the transatlantic cable), he recalled that when he had been asked for 12 prototypes he had said to make 10 with 6 holes, one with 7, and one with 8; there had been no time to amend the drawings, and No 12, with 8 holes, happened to be the one chosen for the Tizard Mission. So Bell Labs chose to copy the sample; and while early British magnetrons had six cavities, the American ones had eight.
According to Andy Manning from the RAF Air Defence Radar Museum, Randall and Boot's discovery was "a massive, massive breakthrough" and "deemed by many, even now [2007], to be the most important invention that came out of the Second World War", while professor of military history at the University of Victoria in British Columbia, David Zimmerman, states: Because France had just fallen to the Nazis and Britain had no money to develop the magnetron on a massive scale, Winston Churchill agreed that Sir Henry Tizard should offer the magnetron to the Americans in exchange for their financial and industrial help. An early 10 kW version, built in England by the General Electric Company Research Laboratories in Wembley, London, was taken on the Tizard Mission in September 1940. As the discussion turned to radar, the US Navy representatives began to detail the problems with their short-wavelength systems, complaining that their klystrons could only produce 10 W. With a flourish, "Taffy" Bowen pulled out a magnetron and explained it produced 1000 times that. Bell Telephone Laboratories took the example and quickly began making copies, and before the end of 1940, the Radiation Laboratory had been set up on the campus of the Massachusetts Institute of Technology to develop various types of radar using the magnetron. By early 1941, portable centimetric airborne radars were being tested in American and British aircraft. In late 1941, the Telecommunications Research Establishment in the United Kingdom used the magnetron to develop a revolutionary airborne, ground-mapping radar codenamed H2S. The H2S radar was in part developed by Alan Blumlein and Bernard Lovell. The cavity magnetron was widely used during World War II in microwave radar equipment and is often credited with giving Allied radar a considerable performance advantage over German and Japanese radars, thus directly influencing the outcome of the war. It was later described by American historian James Phinney Baxter III as "[t]he most valuable cargo ever brought to our shores". Centimetric radar, made possible by the cavity magnetron, allowed for the detection of much smaller objects and the use of much smaller antennas. The combination of small-cavity magnetrons, small antennas, and high resolution allowed small, high quality radars to be installed in aircraft. They could be used by maritime patrol aircraft to detect objects as small as a submarine periscope, which allowed aircraft to attack and destroy submerged submarines which had previously been undetectable from the air. Centimetric contour mapping radars like H2S improved the accuracy of Allied bombers used in the strategic bombing campaign, despite the existence of the German FuG 350 Naxos device to specifically detect it. Centimetric gun-laying radars were likewise far more accurate than the older technology. They made the big-gunned Allied battleships more deadly and, along with the newly developed proximity fuze, made anti-aircraft guns much more dangerous to attacking aircraft. The two coupled together and used by anti-aircraft batteries, placed along the flight path of German V-1 flying bombs on their way to London, are credited with destroying many of the flying bombs before they reached their target. Since then, many millions of cavity magnetrons have been manufactured; while some have been for radar the vast majority have been for microwave ovens. 
The use in radar itself has dwindled to some extent, as more accurate signals have generally been needed and developers have moved to klystron and traveling-wave tube systems for these needs. Health hazards At least one hazard in particular is well known and documented. As the lens of the eye has no cooling blood flow, it is particularly prone to overheating when exposed to microwave radiation. This heating can in turn lead to a higher incidence of cataracts in later life. There is also a considerable electrical hazard around magnetrons, as they require a high voltage power supply. Most magnetrons contain a small amount of beryllium oxide for the insulators, and thorium mixed with tungsten in their filament. Exceptions to this are higher power magnetrons that operate above approximately 10,000 volts where positive ion bombardment becomes damaging to thorium metal, hence pure tungsten (potassium doped) is used. While thorium is a radioactive metal, the risk of cancer is low as it never gets airborne in normal usage. Only if the filament is taken out of the magnetron, finely crushed, and inhaled can it pose a health hazard.
Technology
Components
null
20866
https://en.wikipedia.org/wiki/Metamorphosis
Metamorphosis
Metamorphosis is a biological process by which an animal physically develops after birth or hatching, involving a conspicuous and relatively abrupt change in the animal's body structure through cell growth and differentiation. Some insects, jellyfish, fish, amphibians, mollusks, crustaceans, cnidarians, echinoderms, and tunicates undergo metamorphosis, which is often accompanied by a change of nutrition source or behavior. Animals can be divided into species that undergo complete metamorphosis ("holometaboly"), incomplete metamorphosis ("hemimetaboly"), or no metamorphosis ("ametaboly"). Generally organisms with a larval stage undergo metamorphosis, and during metamorphosis the organism loses larval characteristics. Etymology The word metamorphosis derives from Ancient Greek μεταμόρφωσις (metamórphōsis), "transformation, transforming", from μετα- (meta-), "after", and μορφή (morphḗ), "form". Hormonal control In insects, growth and metamorphosis are controlled by hormones synthesized by endocrine glands near the front of the body (anterior). Neurosecretory cells in an insect's brain secrete a hormone, the prothoracicotropic hormone (PTTH), that activates the prothoracic glands, which secrete a second hormone, usually ecdysone (an ecdysteroid), that induces ecdysis (shedding of the exoskeleton). PTTH also stimulates the corpora allata, a retrocerebral organ, to produce juvenile hormone, which prevents the development of adult characteristics during ecdysis. In holometabolous insects, molts between larval instars have a high level of juvenile hormone, the molt to the pupal stage has a low level of juvenile hormone, and the final, or imaginal, molt has no juvenile hormone present at all. Experiments on firebugs have shown how juvenile hormone can affect the number of nymph instar stages in hemimetabolous insects. In chordates, metamorphosis is iodothyronine-induced and an ancestral feature of all chordates. Insects All three categories of metamorphosis can be found in the diversity of insects, including no metamorphosis ("ametaboly"), incomplete or partial metamorphosis ("hemimetaboly"), and complete metamorphosis ("holometaboly"). While ametabolous insects show very little difference between larval and adult forms (also known as "direct development"), both hemimetabolous and holometabolous insects have significant morphological and behavioral differences between larval and adult forms, the most significant being the inclusion, in holometabolous organisms, of a pupal or resting stage between the larval and adult forms. Development and terminology In hemimetabolous insects, immature stages are called nymphs. Development proceeds in repeated stages of growth and ecdysis (moulting); these stages are called instars. The juvenile forms closely resemble adults, but are smaller and lack adult features such as wings and genitalia. The size and morphological differences between nymphs in different instars are small, often just differences in body proportions and the number of segments; in later instars, external wing buds form. The period from one molt to the next is called a stadium. In holometabolous insects, immature stages are called larvae and differ markedly from adults. Insects which undergo holometabolism pass through a larval stage, then enter an inactive state called a pupa (called a "chrysalis" in butterfly species), and finally emerge as adults. Evolution The earliest insect forms showed direct development (ametabolism), and the evolution of metamorphosis in insects is thought to have fuelled their dramatic radiation.
Some early ametabolous "true insects" are still present today, such as bristletails and silverfish. Hemimetabolous insects include cockroaches, grasshoppers, dragonflies, and true bugs. Phylogenetically, all insects in the Pterygota undergo a marked change in form, texture and physical appearance from immature stage to adult. These insects either have hemimetabolous development, undergoing an incomplete or partial metamorphosis, or holometabolous development, undergoing a complete metamorphosis that includes a pupal or resting stage between the larval and adult forms. A number of hypotheses have been proposed to explain the evolution of holometaboly from hemimetaboly, mostly centering on whether or not the intermediate stages of hemimetabolous forms are homologous in origin to the pupal stage of holometabolous forms. Temperature-dependent metamorphosis According to a 2009 study, temperature plays an important role in insect development, as individual species are found to have specific thermal windows that allow them to progress through their developmental stages. These windows are not significantly affected by ecological traits; rather, the windows are phylogenetically adapted to the ecological circumstances insects are living in. Recent research According to research from 2008, adult Manduca sexta is able to retain behavior learned as a caterpillar. Another caterpillar, the ornate moth caterpillar, is able to carry toxins that it acquires from its diet through metamorphosis and into adulthood, where the toxins still serve for protection against predators. Many observations published in 2002, and supported in 2013, indicate that programmed cell death plays a considerable role during physiological processes of multicellular organisms, particularly during embryogenesis and metamorphosis. Additional research in 2019 found that both autophagy and apoptosis, the two ways programmed cell death occurs, are processes undergone during insect metamorphosis. Below is the sequence of steps in the metamorphosis of the butterfly (illustrated):
1 – The larva of a butterfly
2 – The pupa spinning the thread that will form the chrysalis
3 – The chrysalis fully formed
4 – The adult butterfly emerging from the chrysalis
Chordata Amphioxus In cephalochordata, metamorphosis is iodothyronine-induced and it could be an ancestral feature of all chordates. Fish Some fish, both bony fish (Osteichthyes) and jawless fish (Agnatha), undergo metamorphosis. Fish metamorphosis is typically under strong control by the thyroid hormone. Examples among the non-bony fish include the lamprey. Among the bony fish, mechanisms are varied. The salmon is diadromous, meaning that it changes from a freshwater to a saltwater lifestyle. Many species of flatfish begin their life bilaterally symmetrical, with an eye on either side of the body; but one eye moves to join the other side of the fish – which becomes the upper side – in the adult form. The European eel has a number of metamorphoses, from the larval stage to the leptocephalus stage, then a quick metamorphosis to glass eel at the edge of the continental shelf (eight days for the Japanese eel), two months at the border of fresh and salt water where the glass eel undergoes a quick metamorphosis into elver, then a long stage of growth followed by a more gradual metamorphosis to the migrating phase. In the pre-adult freshwater stage, the eel also has phenotypic plasticity because fish-eating eels develop very wide mandibles, making the head look blunt.
Leptocephali are common, occurring in all Elopomorpha (tarpon- and eel-like fish). Most other bony fish undergo metamorphosis from egg to immotile larvae known as sac fry (fry with a yolk sac), then to motile larvae (often known as fingerlings because they roughly reach the length of a human finger) that must forage for themselves once the yolk sac is resorbed, and then to the juvenile stage, in which the fish progressively come to resemble adults in morphology and behavior until finally reaching sexual maturity. Amphibians In typical amphibian development, eggs are laid in water and larvae are adapted to an aquatic lifestyle. Frogs, toads, and newts all hatch from the egg as larvae with external gills, and it takes some time before they can breathe air using lungs. Afterwards, newt larvae start a predatory lifestyle, while tadpoles mostly scrape food off surfaces with their horny tooth ridges. Metamorphosis in amphibians is regulated by thyroxin concentration in the blood, which stimulates metamorphosis, and prolactin, which counteracts its effect. Specific events are dependent on threshold values for different tissues. Because most embryonic development is outside the parental body, development is subject to many adaptations due to specific ecological circumstances. For this reason, tadpoles can have horny ridges in place of teeth, whiskers, and fins. They also make use of the lateral line organ. After metamorphosis, these organs become redundant and will be resorbed by controlled cell death, called apoptosis. The amount of adaptation to specific ecological circumstances is remarkable, with many discoveries still being made. Frogs and toads With frogs and toads, the external gills of the newly hatched tadpole are covered with a gill sac after a few days, and lungs are quickly formed. Front legs are formed under the gill sac, and hindlegs are visible a few days later. Following that there is usually a longer stage during which the tadpole lives off a vegetarian diet. Tadpoles use a relatively long, spiral-shaped gut to digest that diet. Recent studies suggest tadpoles do not have a balanced homeostatic feedback control system until the beginning stages of metamorphosis. At this point, their long gut shortens and their diet shifts toward insects. Rapid changes in the body can then be observed as the lifestyle of the frog changes completely. The spiral-shaped mouth with horny tooth ridges is resorbed together with the spiral gut. The animal develops a large jaw, and its gills disappear along with its gill sac. Eyes and legs grow quickly, a tongue is formed, and all this is accompanied by associated changes in the neural networks (development of stereoscopic vision, loss of the lateral line system, etc.). All this can happen in about a day. It is not until a few days later that the tail is reabsorbed, due to the higher thyroxin concentrations required for tail resorption. Salamanders Salamander development is highly diverse; some species go through a dramatic reorganization when transitioning from aquatic larvae to terrestrial adults, while others, such as the axolotl, display pedomorphosis and never develop into terrestrial adults. Within the genus Ambystoma, species have evolved to be pedomorphic several times, and pedomorphosis and complete development can both occur in some species. Newts In newts, metamorphosis occurs due to the change in habitat, not a change in diet, because newt larvae already feed as predators and continue doing so as adults. 
Newts' gills are never covered by a gill sac and will be resorbed only just before the animal leaves the water. Adults can move faster on land than in water. Newts often have an aquatic phase in spring and summer, and a land phase in winter. For adaptation to a water phase, prolactin is the required hormone, and for adaptation to the land phase, thyroxin. External gills do not return in subsequent aquatic phases because these are completely absorbed upon leaving the water for the first time. Caecilians Basal caecilians such as Ichthyophis go through a metamorphosis in which aquatic larvae transition into fossorial adults, which involves a loss of the lateral line. More recently diverged caecilians (the Teresomata) do not undergo an ontogenetic niche shift of this sort and are in general fossorial throughout their lives. Thus, most caecilians do not undergo an anuran-like metamorphosis.
Biology and health sciences
Animal ontogeny
null
20874
https://en.wikipedia.org/wiki/Mycology
Mycology
Mycology is the branch of biology concerned with the study of fungi, including their taxonomy, genetics, biochemical properties, and use by humans. Fungi can be a source of tinder, food, and traditional medicine, as well as entheogens, poisons, and infections. Yeasts are among the most heavily utilized members of the Kingdom Fungi, particularly in food manufacturing. Mycology branches into the field of phytopathology, the study of plant diseases. The two disciplines are closely related, because the vast majority of plant pathogens are fungi. A biologist specializing in mycology is called a mycologist. Overview The word mycology comes from the Ancient Greek μύκης (mukēs), meaning "fungus", and the suffix -λογία (-logia), meaning "study". Pioneer mycologists included Elias Magnus Fries, Christiaan Hendrik Persoon, Heinrich Anton de Bary, Elizabeth Eaton Morse, and Lewis David de Schweinitz. Beatrix Potter, author of The Tale of Peter Rabbit, also made significant contributions to the field. Pier Andrea Saccardo developed a system for classifying the imperfect fungi by spore color and form, which became the primary system used before classification by DNA analysis. He is most famous for his Sylloge Fungorum, which was a comprehensive list of all of the names that had been used for mushrooms. Sylloge is still the only work of this kind that was both comprehensive for the botanical kingdom Fungi and reasonably modern. Many fungi produce toxins, antibiotics, and other secondary metabolites. For example, the cosmopolitan genus Fusarium and its toxins, associated with fatal outbreaks of alimentary toxic aleukia in humans, were extensively studied by Abraham Z. Joffe. Fungi are fundamental for life on earth in their roles as symbionts, e.g. in the form of mycorrhizae, insect symbionts, and lichens. Many fungi are able to break down complex organic biomolecules such as lignin, the more durable component of wood, and pollutants such as xenobiotics, petroleum, and polycyclic aromatic hydrocarbons. By decomposing these molecules, fungi play a critical role in the global carbon cycle. Fungi and other organisms traditionally recognized as fungi, such as oomycetes and myxomycetes (slime molds), are often economically and socially important, as some cause diseases of animals (including humans) and of plants. Apart from pathogenic fungi, many fungal species are very important in controlling the plant diseases caused by different pathogens. For example, species of the filamentous fungal genus Trichoderma are considered one of the most important biological control agents, an alternative to chemical-based products for effective crop disease management. Field meetings to find interesting species of fungi are known as 'forays', after the first such meeting organized by the Woolhope Naturalists' Field Club in 1868 and entitled "A foray among the funguses". Some fungi can cause disease in humans and other animals; the study of pathogenic fungi that infect animals is referred to as medical mycology. History It is believed that humans started collecting mushrooms as food in prehistoric times. Mushrooms were first written about in the works of Euripides (480–406 BC). The Greek philosopher Theophrastos of Eresos (371–288 BC) was perhaps the first to try to systematically classify plants; mushrooms were considered to be plants missing certain organs. Later, Pliny the Elder (23–79 AD) wrote about truffles in his encyclopedia Natural History. The Middle Ages saw little advancement in the body of knowledge about fungi. 
However, the invention of the printing press allowed authors to dispel superstitions and misconceptions about the fungi that had been perpetuated by the classical authors. The start of the modern age of mycology begins with Pier Antonio Micheli's 1737 publication of Nova plantarum genera. Published in Florence, this seminal work laid the foundations for the systematic classification of grasses, mosses and fungi. He originated the still current genus names Polyporus and Tuber, both dated 1729 (though the descriptions have since been deemed invalid under modern nomenclatural rules). The founding nomenclaturist Carl Linnaeus included fungi in his binomial naming system in 1753, where each type of organism has a two-word name consisting of a genus and species (whereas up to then organisms were often designated with Latin phrases containing many words). He originated the scientific names of numerous well-known mushroom taxa, such as Boletus and Agaricus, which are still in use today. During this period, fungi were still considered to belong to the plant kingdom, so they were categorized in his Species Plantarum. Linnaeus's fungal taxa were not nearly as comprehensive as his plant taxa, however: he grouped all gilled mushrooms with a stem into the genus Agaricus. Thousands of gilled species exist, which were later divided into dozens of diverse genera; in its modern usage, Agaricus only refers to mushrooms closely related to the common shop mushroom, Agaricus bisporus. For example, Linnaeus gave the name Agaricus deliciosus to the saffron milk-cap, but its current name is Lactarius deliciosus. On the other hand, the field mushroom Agaricus campestris has kept the same name ever since Linnaeus's publication. The English word "agaric" is still used for any gilled mushroom, which corresponds to Linnaeus's use of the word. Although mycology was historically considered a branch of botany, the 1969 discovery of fungi's close evolutionary relationship to animals resulted in the study's reclassification as an independent field. The term mycology and the complementary term mycologist are traditionally attributed to M.J. Berkeley in 1836. However, mycologist appeared in writings by English botanist Robert Kaye Greville as early as 1823 in reference to Schweinitz. Scope and importance Production, trade, and food manufacturing Lumber and timber products are a key element of international trade, as they are used for everything from construction to firewood. The cultivation of forested ecosystems to produce usable wood on this scale is highly dependent on the mycorrhizal symbiotic relationships between plants, specifically trees, and fungi. The fungi provide a great number of benefits to their symbiotic plant partner, such as disease tolerance, improved growth and mineral nutrition, stress tolerance, and even fertilizer utilization. Another major component of international trade over recent years has been edible and medicinal mushrooms. While many fungal species can be cultivated in large farming installations, the cultivation of some coveted species has yet to be fully understood, which means that there are many species that can only be found naturally in the wild. While demand for wild mushroom species has increased worldwide over recent years, the rarity of these species has not changed. Even so, mushroom hunting has become a key factor in local economies. Increased scientific knowledge of fungal diversity has led to biotechnological advances in food manufacturing. 
Humans have utilized this knowledge by cultivating various types of fungi, particularly yeasts. There are over 500 species of yeasts that have been cultivated for different purposes, the most common of which is Saccharomyces cerevisiae, also known as baker's yeast. As its common name suggests, S. cerevisiae has been used for winemaking, baking, and brewing since ancient times. Fermentation is one of the earliest forms of food preservation, with the earliest recorded use dating back over 13,000 years, in present-day Israel. The cultivation of bacteria and fungi, particularly yeasts, has been used for centuries to increase the storage life of meats, vegetables, grains, and other foods. Fermentation also plays a significant role in the production of various food products and alcoholic beverages such as beer and wine. About 90% of the world's beer production comes from lager beer and 5% from ale beer, while the rest is from spontaneous fermentation by a variety of yeasts and bacteria. The production of alcoholic beverages plays a significant role in the economies of many countries, with beer often being a crucial export. Plant pathogenic fungi Plant pathogenic fungi are a serious threat when it comes to crop availability and food security. These fungi can infiltrate plants and food crops, which can cause serious economic issues for agricultural industries in numerous countries. Various plant pathogens can cause cash crops to become inedible and virtually useless to the farmers who grow them. This problem has increased over the years as the use of monocultures has become more prevalent: a limited variety of plants in one area can lead to the rapid spread of specific pathogens. Puccinia graminis is a type of stem rust that targets wheat crops worldwide from Africa to Europe. Another devastating fungal pathogen is Sarocladium oryzae, a sheath rot fungus prevalent in India and a serious threat to rice cultivation. Historically, one of the better-known plant-fungal pandemics was the potato blight of Ireland, which was caused by a water mold known as Phytophthora infestans. The resulting event is known as the Great Famine of Ireland. Mycology and drug discovery For centuries, certain mushrooms have been documented as a folk medicine in China, Japan, and Russia. Although the use of mushrooms in folk medicine is centered largely on the Asian continent, people in other parts of the world like the Middle East, Poland, and Belarus have been documented using mushrooms for medicinal purposes. Mushrooms produce large amounts of vitamin D when exposed to ultraviolet (UV) light. Penicillin, ciclosporin, griseofulvin, cephalosporin and psilocybin are examples of drugs that have been isolated from molds or other fungi.
Biology and health sciences
Basics
null
20876
https://en.wikipedia.org/wiki/Mimosa
Mimosa
Mimosa is a genus of about 600 species of herbs and shrubs, in the mimosoid clade of the legume family Fabaceae. Species are native to the Americas, from North Dakota to northern Argentina, and to eastern Africa (Tanzania, Mozambique, and Madagascar) as well as the Indian subcontinent and Indochina. The generic name is derived from the Greek word μῖμος (mimos), 'actor' or 'mime', and the feminine suffix -osa, 'resembling', suggesting its 'sensitive leaves' which seem to 'mimic conscious life'. Two species in the genus are especially notable. One is Mimosa pudica, commonly known as touch-me-not, which folds its leaves when touched or exposed to heat. It is native to southern Central and South America but is widely cultivated elsewhere for its curiosity value, both as a houseplant in temperate areas, and outdoors in the tropics. Outdoor cultivation has led to weedy invasion in some areas, notably Hawaii. The other is Mimosa tenuiflora, which is best known for its use in shamanic ayahuasca brews due to the psychedelic drug dimethyltryptamine found in its root bark. Taxonomy The taxonomy of the genus Mimosa has gone through several periods of splitting and lumping, ultimately accumulating over 3,000 names, many of which have either been synonymized under other species or transferred to other genera. In part due to these changing circumscriptions, the name "Mimosa" has also been applied to several other related species with similar pinnate or bipinnate leaves that are now classified in other genera. The most common examples of this are Albizia julibrissin (Persian silk tree) and Acacia dealbata (wattle). Description Members of this genus are among the few plants capable of rapid movement; examples outside of Mimosa include the telegraph plant, Aldrovanda, some species of Drosera and the Venus flytrap. The leaves of the Mimosa pudica close quickly when touched. Some mimosas raise their leaves in the day and lower them at night, and experiments done by Jean-Jacques d'Ortous de Mairan on mimosas in 1729 provided the first evidence of biological clocks. Mimosa can be distinguished from the large related genera, Acacia and Albizia, since its flowers have ten or fewer stamens. Botanically, what appears to be a single globular flower is actually a cluster of many individual ones. Mimosas contain some level of heptanoic acid. Species There are about 590 species, including: Mimosa aculeaticarpa Ortega Mimosa andina Benth. Mimosa arenosa (Willd.) Poir. Mimosa asperata L. Mimosa borealis Gray Mimosa caesalpiniaefolia Benth. Mimosa casta L. Mimosa cupica Gray Mimosa ceratonia L. Mimosa diplotricha C.Wright ex Sauvalle Mimosa disperma Barneby Mimosa distachya Cav. Mimosa dysocarpa Benth. Mimosa emoryana Benth. Mimosa grahamii Gray Mimosa hamata Willd. Mimosa hystricina (Small ex Britt. et Rose) B.L.Turner Mimosa invisa Martius ex Colla Mimosa latidens (Small) B.L. Turner Mimosa laxiflora Benth. Mimosa loxensis Barneby Mimosa malacophylla Gray Mimosa microphylla Dry. Mimosa nothacacia Barneby Mimosa nuttallii (DC.) B.L. Turner Mimosa ophthalmocentra Mart. ex Benth. 1865 Mimosa pellita Kunth ex Willd. Mimosa pigra L. Mimosa polycarpa Kunth Mimosa pudica L. Mimosa quadrivalvis L. Mimosa quadrivalvis var. hystricina (Small) Barneby Mimosa quadrivalvis var. quadrivalvis L. Mimosa roemeriana Scheele Mimosa rubicaulis Lam. Mimosa rupertiana B.L. Turner Mimosa scabrella Benth. Mimosa schomburgkii Benth. Mimosa somnians Humb. & Bonpl. ex Willd. Mimosa strigillosa Torr. et Gray Mimosa tenuiflora (Willd.) Poir. 
(= Mimosa hostilis) Mimosa texana (Gray) Small Mimosa townsendii Barneby Mimosa turneri Barneby Mimosa verrucosa Benth.
Biology and health sciences
Fabales
null
20880
https://en.wikipedia.org/wiki/Mira
Mira
Mira, designation Omicron Ceti (ο Ceti, abbreviated Omicron Cet, ο Cet), is a red-giant star estimated to be 200–300 light-years from the Sun in the constellation Cetus. ο Ceti is a binary stellar system, consisting of a variable red giant (Mira A) along with a white dwarf companion (Mira B). Mira A is a pulsating variable star and was the first non-supernova variable star discovered, with the possible exception of Algol. It is the prototype of the Mira variables. Nomenclature ο Ceti (Latinised to Omicron Ceti) is the star's Bayer designation. It was named Mira (Latin for 'wonderful' or 'astonishing') by Johannes Hevelius in his Historiola Mirae Stellae (1662). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Mira for this star. Observation history Evidence that the variability of Mira was known in ancient China, Babylon or Greece is at best only circumstantial. What is certain is that the variability of Mira was recorded by the astronomer David Fabricius beginning on August 3, 1596. Observing what he thought was the planet Mercury (later identified as Jupiter), he needed a reference star for comparing positions and picked a previously unremarked third-magnitude star nearby. By August 21, however, it had increased in brightness by one magnitude, then by October had faded from view. Fabricius assumed it was a nova, but then saw it again on February 16, 1609. In 1638 Johannes Holwarda determined a period of the star's reappearances, eleven months; he is often credited with the discovery of Mira's variability. Johannes Hevelius was observing it at the same time and named it Mira in 1662, for it acted like no other known star. Ismail Bouillaud then estimated its period at 333 days, less than one day off the modern value of 332 days. Bouillaud's measurement may not have been erroneous: Mira is known to vary slightly in period, and may even be slowly changing over time. The star is estimated to be a six-billion-year-old red giant. There is considerable speculation as to whether Mira had been observed prior to Fabricius. Certainly Algol's history (known for certain as a variable only in 1667, but with legends and such dating back to antiquity showing that it had been observed with suspicion for millennia) suggests that Mira might have been known, too. Karl Manitius, a modern translator of Hipparchus' Commentary on Aratus, has suggested that certain lines from that second-century BC text may be about Mira. The other pre-telescopic Western catalogs of Ptolemy, al-Sufi, Ulugh Beg and Tycho Brahe turn up no mentions, even as a regular star. Three suggestive observations appear in Chinese and Korean archives: in 1596, in 1070, and in 134 BC, the year in which Hipparchus would have made his observation. An estimate obtained in 1925 from interferometry by Francis G. Pease at the Mount Wilson Observatory gave Mira a diameter of 250–260 million miles (402 to 418 million km), making it the then-second largest star known and comparable to historical estimates of Betelgeuse, surpassed only by Antares. By contrast, Otto Struve regarded Mira as a red supergiant, while modern consensus accepts Mira to be a highly evolved asymptotic giant branch star. 
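For context, Pease's 1925 figure converts to solar units by straightforward arithmetic. A minimal sketch in Python; the 402–418 million km range comes from the text above, while the solar radius used (695,700 km, the IAU nominal value) is an outside reference value, not given in the text:

```python
# Convert Pease's 1925 interferometric diameter of Mira into solar units.
# SOLAR_RADIUS_KM is the IAU nominal value (an outside assumption);
# the 402-418 million km range is from the text.
SOLAR_RADIUS_KM = 695_700
SUN_DIAMETER_KM = 2 * SOLAR_RADIUS_KM

for diameter_km in (402e6, 418e6):
    ratio = diameter_km / SUN_DIAMETER_KM
    print(f"{diameter_km / 1e6:.0f} million km ~= {ratio:.0f} times the Sun's diameter")
```

On these numbers, Pease's measurement corresponds to roughly 290–300 solar diameters, or equivalently a radius of about 290–300 solar radii.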
Distance and background information Pre-Hipparcos estimates centered on 220 light-years, while Hipparcos data from the 2007 reduction suggest a distance of 299 light-years, with a margin of error of 11%. Mira is thought to be about 6 billion years old. Its gaseous envelope is extremely diffuse, with a density as low as roughly one-thousandth that of the air around us. Mira is also among the coolest known bright stars of the red giant class, with a temperature ranging from 3,000 to 4,000 degrees Fahrenheit (1,600 to 2,200 degrees Celsius). As with other long-period variables, Mira's deep red color at minimum pales to a lighter orange as the star brightens. Within the next few million years, Mira will shed its outer layers to form a planetary nebula, leaving behind a white dwarf. Stellar system This binary star system consists of a red giant (Mira, designated Mira A) undergoing mass loss and a high-temperature white dwarf companion (Mira B) that is accreting mass from the primary. Such an arrangement of stars is known as a symbiotic system and this is the closest such symbiotic pair to the Sun. Examination of this system by the Chandra X-ray Observatory shows a direct mass exchange along a bridge of matter from the primary to the white dwarf. The two stars are currently separated by about 70 astronomical units. Component A Mira A is currently an asymptotic giant branch (AGB) star, in the thermally pulsing AGB phase. Each pulse lasts a decade or more, and an amount of time on the order of 10,000 years passes between each pulse. With every pulse cycle Mira increases in luminosity and the pulses grow stronger. This is also causing dynamic instability in Mira, resulting in dramatic changes in luminosity and size over shorter, irregular time periods. The overall shape of Mira A has been observed to change, exhibiting pronounced departures from symmetry. These appear to be caused by bright spots on the surface that evolve their shape on time scales of 3–14 months. Observations of Mira A in the ultraviolet band by the Hubble Space Telescope have shown a plume-like feature pointing toward the companion star. Variability Mira A is a variable star, specifically the prototypical Mira variable. The 6,000 to 7,000 known stars of this class are all red giants whose surfaces pulsate in such a way as to increase and decrease in brightness over periods ranging from about 80 to more than 1,000 days. In the particular case of Mira, its increases in brightness take it up to about magnitude 3.5 on average, placing it among the brighter stars in the Cetus constellation. Individual cycles vary too; well-attested maxima go as high as magnitude 2.0 in brightness and as low as 4.9, a range of almost 15 times in brightness, and there are historical suggestions that the real spread may be three times this or more. Minima range much less, and have historically been between 8.6 and 10.1, a factor of about four in brightness. The total swing in brightness from absolute maximum to absolute minimum (two events which did not occur on the same cycle) is a factor of about 1,700. Mira emits the vast majority of its radiation in the infrared, and its variability in that band is only about two magnitudes. Its light curve rises over about 100 days, with the return to minimum taking roughly twice as long. 
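The brightness factors quoted above follow from the logarithmic magnitude scale, on which a difference of five magnitudes is defined as a factor of exactly 100. A quick check in Python; the magnitudes are those given in the text, and the Pogson relation itself is standard:

```python
# Pogson's relation: a magnitude difference dm corresponds to a
# brightness ratio of 100 ** (dm / 5), i.e. 10 ** (0.4 * dm).
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    return 10 ** (0.4 * (m_faint - m_bright))

print(brightness_ratio(4.9, 2.0))   # ~14.5: "almost 15 times" between extreme maxima
print(brightness_ratio(10.1, 8.6))  # ~4.0:  "a factor of about four" between minima
print(brightness_ratio(10.1, 2.0))  # ~1738: the "factor of about 1,700" total swing
```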
Contemporary approximate maxima for Mira: Oct 21–31, 1999 Sep 21–30, 2000 Aug 21–31, 2001 Jul 21–31, 2002 Jun 21–30, 2003 May 21–31, 2004 Apr 11–20, 2005 Mar 11–20, 2006 Feb 1–10, 2007 Jan 21–31, 2008 Dec 21–31, 2008 Nov 21–30, 2009 Oct 21–31, 2010 Sep 21–30, 2011 Aug 27, 2012 Jul 26, 2013 May 12, 2014 Apr 9, 2015 Mar 6, 2016 Jan 31, 2017 Dec 29, 2017 Nov 26, 2018 Oct 24, 2019 Sep 20, 2020 Aug 18, 2021 Jul 16, 2022 Jun 13, 2023 May 10, 2024 From northern temperate latitudes, Mira is generally not visible between late March and June due to its proximity to the Sun. This means that at times several years can pass without it appearing as a naked-eye object. The pulsations of Mira variables cause the star to expand and contract, but also to change its temperature. The temperature is highest slightly after the visual maximum, and lowest slightly before minimum. The photosphere, measured at the Rosseland radius, is smallest just before visual maximum and close to the time of maximum temperature. The largest size is reached slightly before the time of lowest temperature. The bolometric luminosity is proportional to the fourth power of the temperature and the square of the radius, but the radius varies by over 20% and the temperature by less than 10%. In Mira, the highest luminosity occurs close to the time when the star is hottest and smallest. The visual magnitude is determined both by the luminosity and by the proportion of the radiation that occurs at visual wavelengths. Only a small proportion of the radiation is emitted at visual wavelengths and this proportion is very strongly influenced by the temperature (Planck's law). Combined with the overall luminosity changes, this creates the very large visual magnitude variation, with the maximum occurring when the temperature is high. Infrared VLTI measurements of Mira at phases 0.13, 0.18, 0.26, 0.40 and 0.47 show that the radius is smallest at phase 0.13, just after visual maximum, and largest near phase 0.40, approaching minimum, with the measured temperature and calculated luminosity peaking shortly after maximum and declining by phase 0.26, about halfway from maximum to minimum. The pulsations of Mira have the effect of expanding its photosphere by around 50% compared to a non-pulsating star; were it not pulsating, models suggest Mira would have a correspondingly smaller radius. Mass loss Ultraviolet studies of Mira by NASA's Galaxy Evolution Explorer (GALEX) space telescope have revealed that it sheds a trail of material from the outer envelope, leaving a tail 13 light-years in length, formed over tens of thousands of years. It is thought that a hot bow wave of compressed plasma/gas is the cause of the tail; the bow wave is a result of the interaction of the stellar wind from Mira A with gas in interstellar space, through which Mira is moving at extremely high speed. The tail consists of material stripped from the head of the bow wave, which is also visible in ultraviolet observations. Mira's bow shock will eventually evolve into a planetary nebula, the form of which will be considerably affected by the motion through the interstellar medium (ISM). Mira's tail offers a unique opportunity to study how stars like the Sun die and ultimately seed new planetary systems. As Mira hurtles along, its tail sheds carbon, oxygen and other important elements needed for new stars, planets, and possibly even life to form. This tail material, visible now for the first time, has been shed over the past 30,000 years. 
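The competing effects of radius and temperature described above can be made concrete with the Stefan-Boltzmann scaling, under which bolometric luminosity goes as the square of the radius times the fourth power of the temperature. A small sketch; the 20% and 10% figures come from the text, and treating them as independent changes is an illustrative simplification:

```python
# Bolometric luminosity scales as L ~ R**2 * T**4 (Stefan-Boltzmann).
def luminosity_factor(radius_change: float, temp_change: float) -> float:
    return (1 + radius_change) ** 2 * (1 + temp_change) ** 4

print(luminosity_factor(0.20, 0.00))  # radius alone:      ~1.44x
print(luminosity_factor(0.00, 0.10))  # temperature alone: ~1.46x
```

A temperature swing of under 10% thus moves the bolometric output about as much as a radius swing of over 20%, which is consistent with the star being most luminous near its hot, small phase; the far larger swing in visual magnitude then comes from Planck's law concentrating more of the output at visual wavelengths when the star is hot.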
Component B The companion star was first resolved by the Hubble Space Telescope in 1995, when it was 70 astronomical units from the primary; the results were announced in 1997. The HST ultraviolet images and later X-ray images by the Chandra space telescope show a spiral of gas rising off Mira in the direction of Mira B. The companion's orbital period around Mira is approximately 400 years. In 2007, observations showed a protoplanetary disc around the companion, Mira B. This disc is accreting material from the stellar wind of Mira and could eventually form new planets. These observations also hinted that the companion was a main-sequence star of around 0.7 solar mass and spectral type K, instead of a white dwarf as originally thought. However, in 2010 further research indicated that Mira B is, in fact, a white dwarf.
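The quoted orbit and separation are roughly consistent with Kepler's third law. A back-of-envelope sketch in Python; the 400-year period and the ~0.7 solar mass for Mira B come from the text, while the ~1 solar mass assumed for Mira A is an outside assumption made purely for illustration:

```python
# Kepler's third law in solar units: a**3 = M_total * P**2,
# with semi-major axis a in AU, period P in years, M_total in solar masses.
P_YEARS = 400.0      # orbital period, from the text
M_TOTAL = 1.0 + 0.7  # assumed Mira A mass + Mira B mass from the text

a_au = (M_TOTAL * P_YEARS ** 2) ** (1 / 3)
print(f"implied semi-major axis ~ {a_au:.0f} AU")  # ~65 AU
```

The result is in line with the observed separation of about 70 AU; exact agreement is not expected, since the instantaneous separation need not equal the semi-major axis.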
Physical sciences
Notable stars
Astronomy
20901
https://en.wikipedia.org/wiki/Malware
Malware
Malware (a portmanteau of malicious software) is any software intentionally designed to cause disruption to a computer, server, client, or computer network, leak private information, gain unauthorized access to information or systems, deprive access to information, or which covertly interferes with the user's computer security and privacy. Researchers tend to classify malware into one or more sub-types (i.e. computer viruses, worms, Trojan horses, logic bombs, ransomware, spyware, adware, rogue software, wipers and keyloggers). Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Defense strategies differ according to the type of malware, but most attacks can be thwarted by installing antivirus software and firewalls, applying regular patches, securing networks from intrusion, keeping regular backups, and isolating infected systems. Malware can be designed to evade antivirus software detection algorithms. History The notion of a self-reproducing computer program can be traced back to initial theories about the operation of complex automata. John von Neumann showed that in theory a program could reproduce itself. This constituted a plausibility result in computability theory. Fred Cohen experimented with computer viruses, confirmed von Neumann's postulate, and investigated other properties of malware such as detectability and self-obfuscation using rudimentary encryption. His 1987 doctoral dissertation was on the subject of computer viruses. The use of cryptographic technology as part of a virus's payload, exploiting it for attack purposes, was investigated from the mid-1990s, and includes early ransomware and evasion ideas. Before Internet access became widespread, viruses spread on personal computers by infecting executable programs or boot sectors of floppy disks. By inserting a copy of itself into the machine code instructions in these programs or boot sectors, a virus causes itself to be run whenever the program is run or the disk is booted. Early computer viruses were written for the Apple II and Mac, but they became more widespread with the dominance of the IBM PC and MS-DOS. The first IBM PC virus in the wild was a boot sector virus dubbed (c)Brain, created in 1986 by the Farooq Alvi brothers in Pakistan. Malware distributors would trick the user into booting or running from an infected device or medium. For example, a virus could make an infected computer add autorunnable code to any USB stick plugged into it. Anyone who then attached the stick to another computer set to autorun from USB would in turn become infected, and also pass on the infection in the same way. Older email software would automatically open HTML email containing potentially malicious JavaScript code. Users may also execute disguised malicious email attachments. 
The 2018 Data Breach Investigations Report by Verizon, cited by CSO Online, states that emails are the primary method of malware delivery, accounting for 96% of malware delivery around the world. The first worms, network-borne infectious programs, originated not on personal computers, but on multitasking Unix systems. The first well-known worm was the Morris worm of 1988, which infected SunOS and VAX BSD systems. Unlike a virus, this worm did not insert itself into other programs. Instead, it exploited security holes (vulnerabilities) in network server programs and started itself running as a separate process. This same behavior is used by today's worms as well. With the rise of the Microsoft Windows platform in the 1990s, and the flexible macros of its applications, it became possible to write infectious code in the macro language of Microsoft Word and similar programs. These macro viruses infect documents and templates rather than applications (executables), but rely on the fact that macros in a Word document are a form of executable code. Many early infectious programs, including the Morris worm, the first internet worm, were written as experiments or pranks. Today, malware is used by both black hat hackers and governments to steal personal, financial, or business information. Today, any device that plugs into a USB port – even lights, fans, speakers, toys, or peripherals such as a digital microscope – can be used to spread malware. Devices can be infected during manufacturing or supply if quality control is inadequate. Purposes Since the rise of widespread broadband Internet access, malicious software has more frequently been designed for profit. Since 2003, the majority of widespread viruses and worms have been designed to take control of users' computers for illicit purposes. Infected "zombie computers" can be used to send email spam, to host contraband data such as child pornography, or to engage in distributed denial-of-service attacks as a form of extortion. Malware is used broadly against government or corporate websites to gather sensitive information, or to disrupt their operation in general. Further, malware can be used against individuals to gain information such as personal identification numbers or details, bank or credit card numbers, and passwords. In addition to criminal money-making, malware can be used for sabotage, often for political motives. Stuxnet, for example, was designed to disrupt very specific industrial equipment. There have been politically motivated attacks which spread over and shut down large computer networks, including massive deletion of files and corruption of master boot records, described as "computer killing." Such attacks were made on Sony Pictures Entertainment (25 November 2014, using malware known as Shamoon or W32.Disttrack) and Saudi Aramco (August 2012). Types Malware can be classified in numerous ways, and certain malicious programs may fall into two or more categories simultaneously. Broadly, software can be categorised into three types: (i) goodware, (ii) greyware, and (iii) malware. Malware Virus A computer virus is software usually hidden within another seemingly innocuous program that can produce copies of itself and insert them into other programs or files, and that usually performs a harmful action (such as destroying data). They have been likened to biological viruses. An example of this is a portable executable (PE) infection, a technique, usually used to spread malware, that inserts extra data or executable code into PE files. 
A computer virus is software that embeds itself in some other executable software (including the operating system itself) on the target system without the user's knowledge and consent; when it is run, the virus spreads to other executable files. Worm A worm is stand-alone malware that transmits itself over a network to infect other computers and can copy itself without infecting files. These definitions lead to the observation that a virus requires the user to run infected software or an infected operating system for the virus to spread, whereas a worm spreads itself. Rootkits Once malicious software is installed on a system, it is essential that it stays concealed, to avoid detection. Software packages known as rootkits allow this concealment, by modifying the host's operating system so that the malware is hidden from the user. Rootkits can prevent a harmful process from being visible in the system's list of processes, or keep its files from being read. Some types of harmful software contain routines to evade identification and/or removal attempts, not merely to hide themselves. An early example of this behavior is recorded in the Jargon File tale of a pair of programs infesting a Xerox CP-V time sharing system, in which each program would detect that the other had been killed and restart it within milliseconds. Backdoors A backdoor is a broad term for a computer program that allows an attacker persistent unauthorised remote access to a victim's machine, often without their knowledge. The attacker typically uses another attack (such as a trojan, worm or virus) to bypass authentication mechanisms, usually over an unsecured network such as the Internet, to install the backdoor application. A backdoor can also be a side effect of a software bug in legitimate software that is exploited by an attacker to gain access to a victim's computer or network. The idea has often been suggested that computer manufacturers preinstall backdoors on their systems to provide technical support for customers, but this has never been reliably verified. It was reported in 2014 that US government agencies had been diverting computers purchased by those considered "targets" to secret workshops where software or hardware permitting remote access by the agency was installed, considered to be among the most productive operations to obtain access to networks around the world. Backdoors may be installed by Trojan horses, worms, implants, or other methods. Trojan horse A Trojan horse misrepresents itself to masquerade as a regular, benign program or utility in order to persuade a victim to install it. A Trojan horse usually carries a hidden destructive function that is activated when the application is started. The term is derived from the Ancient Greek story of the Trojan horse used to invade the city of Troy by stealth. Trojan horses are generally spread by some form of social engineering, for example, where a user is duped into executing an email attachment disguised to look innocuous (e.g., a routine form to be filled in), or by drive-by download. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller (phoning home) which can then have unauthorized access to the affected computer, potentially installing additional software such as a keylogger to steal confidential information, cryptomining software, or adware to generate revenue for the operator of the trojan. 
While Trojan horses and backdoors are not easily detectable by themselves, computers may appear to run slower, or emit more heat or fan noise, due to heavy processor or network usage, as may occur when cryptomining software is installed. Cryptominers may limit resource usage and/or only run during idle times in an attempt to evade detection. Unlike computer viruses and worms, Trojan horses generally do not attempt to inject themselves into other files or otherwise propagate themselves. In spring 2017, Mac users were hit by a new version of the Proton remote access trojan (RAT), designed to extract password data from various sources, such as browser auto-fill data, the macOS keychain, and password vaults. Droppers Droppers are a sub-type of trojan that solely aims to deliver malware onto the system it infects, seeking to subvert detection through stealth and a light payload. It is important not to confuse a dropper with a loader or stager. A loader or stager will merely load an extension of the malware (for example a collection of malicious functions through reflective dynamic link library injection) into memory. The purpose is to keep the initial stage light and undetectable. A dropper, by contrast, downloads further malware to the system. Ransomware Ransomware prevents a user from accessing their files until a ransom is paid. There are two variations of ransomware: crypto ransomware and locker ransomware. Locker ransomware just locks down a computer system without encrypting its contents, whereas crypto ransomware locks down a system and encrypts its contents. For example, programs such as CryptoLocker encrypt files securely, and only decrypt them on payment of a substantial sum of money. Lock-screens, or screen lockers, are a type of "cyber police" ransomware that blocks the screens of Windows or Android devices with a false accusation of harvesting illegal content, trying to scare the victims into paying a fee. Jisut and SLocker impact Android devices more than other lock-screens, with Jisut making up nearly 60 percent of all Android ransomware detections. Encryption-based ransomware, as the name suggests, is a type of ransomware that encrypts all files on an infected machine. These types of malware then display a pop-up informing the user that their files have been encrypted and that they must pay (usually in Bitcoin) to recover them. Some examples of encryption-based ransomware are CryptoLocker and WannaCry. Click fraud Some malware is used to generate money by click fraud, making it appear that the computer user has clicked an advertising link on a site, generating a payment from the advertiser. It was estimated in 2012 that about 60 to 70% of all active malware used some kind of click fraud, and 22% of all ad-clicks were fraudulent. Grayware Grayware is any unwanted application or file that can worsen the performance of computers and may cause security risks, but for which there is insufficient consensus or data to classify it as malware. Types of grayware typically include spyware, adware, fraudulent dialers, joke programs ("jokeware"), and remote access tools. For example, at one point, Sony BMG compact discs silently installed a rootkit on purchasers' computers with the intention of preventing illicit copying. Potentially unwanted program Potentially unwanted programs (PUPs) are applications that would be considered unwanted despite often being intentionally downloaded by the user. PUPs include spyware, adware, and fraudulent dialers. 
Many security products classify unauthorised key generators as PUPs, although they frequently carry true malware in addition to their ostensible purpose. In fact, Kammerstetter et al. (2012) estimated that as much as 55% of key generators could contain malware and that about 36% of malicious key generators were not detected by antivirus software. Adware Some types of adware turn off anti-malware and virus protection; technical remedies are available. Spyware Programs designed to monitor users' web browsing, display unsolicited advertisements, or redirect affiliate marketing revenues are called spyware. Spyware programs do not spread like viruses; instead they are generally installed by exploiting security holes. They can also be hidden and packaged together with unrelated user-installed software. The Sony BMG rootkit was intended to prevent illicit copying, but it also reported on users' listening habits and unintentionally created extra security vulnerabilities. Detection Antivirus software typically uses two techniques to detect malware: (i) static analysis and (ii) dynamic/heuristic analysis. Static analysis involves studying the software code of a potentially malicious program and producing a signature of that program. This information is then used by the antivirus program to compare scanned files. Because this approach is not useful for malware that has not yet been studied, antivirus software can use dynamic analysis to monitor how the program runs on a computer and block it if it performs unexpected activity. The aim of any malware is to conceal itself from detection by users or antivirus software. Detecting potential malware is difficult for two reasons. The first is that it is difficult to determine if software is malicious. The second is that malware uses technical measures to make it more difficult to detect it. An estimated 33% of malware is not detected by antivirus software. The most commonly employed anti-detection technique involves encrypting the malware payload in order to prevent antivirus software from recognizing the signature. Tools such as crypters come with an encrypted blob of malicious code and a decryption stub. The stub decrypts the blob and loads it into memory. Because antivirus does not typically scan memory and only scans files on the drive, this allows the malware to evade detection. Advanced malware has the ability to transform itself into different variations, making it less likely to be detected due to the differences in its signatures. This is known as polymorphic malware. Other common techniques used to evade detection include, from common to uncommon: (1) evasion of analysis and detection by fingerprinting the environment when executed; (2) confusing automated tools' detection methods, which allows malware to avoid detection by technologies such as signature-based antivirus software by changing the server used by the malware; (3) timing-based evasion, in which malware runs at certain times or following certain actions taken by the user, so it executes during certain vulnerable periods, such as during the boot process, while remaining dormant the rest of the time; (4) obfuscating internal data so that automated tools do not detect the malware; (5) information hiding techniques, namely stegomalware; and (6) fileless malware, which runs within memory instead of using files and utilizes existing system tools to carry out malicious acts. The use of existing binaries to carry out malicious activities is a technique known as LotL, or Living off the Land. 
This reduces the amount of forensic artifacts available to analyze. Recently, these types of attacks have become more frequent, with a 432% increase in 2017, and made up 35% of attacks in 2018. Such attacks are not easy to perform but are becoming more prevalent with the help of exploit kits. Risks Vulnerable software A vulnerability is a weakness, flaw or software bug in an application, a complete computer, an operating system, or a computer network that is exploited by malware to bypass defences or gain privileges it requires to run. For example, TestDisk 6.4 or earlier contained a vulnerability that allowed attackers to inject code into Windows. Malware can exploit security defects (security bugs or vulnerabilities) in the operating system, applications (such as browsers, e.g. older versions of Microsoft Internet Explorer supported by Windows XP), or in vulnerable versions of browser plugins such as Adobe Flash Player, Adobe Acrobat or Reader, or Java SE. For example, a common method is exploitation of a buffer overrun vulnerability, where software designed to store data in a specified region of memory does not prevent more data than the buffer can accommodate from being supplied. Malware may provide data that overflows the buffer, with malicious executable code or data after the end; when this payload is accessed it does what the attacker, not the legitimate software, determines. Malware can exploit recently discovered vulnerabilities before developers have had time to release a suitable patch. Even when new patches addressing the vulnerability have been released, they may not necessarily be installed immediately, allowing malware to take advantage of systems lacking patches. Sometimes even applying patches or installing new versions does not automatically uninstall the old versions. There are several ways users can stay informed about and protected from security vulnerabilities in software. Software providers often announce updates that address security issues. Common vulnerabilities are assigned unique identifiers (CVE IDs) and listed in public databases like the National Vulnerability Database. Tools like Secunia PSI, free for personal use, can scan a computer for outdated software with known vulnerabilities and attempt to update them. Firewalls and intrusion prevention systems can monitor the network traffic for suspicious activity that might indicate an attack. Excessive privileges Users and programs can be assigned more privileges than they require, and malware can take advantage of this. For example, of 940 Android apps sampled, one third of them asked for more privileges than they required. Apps targeting the Android platform can be a major source of malware infection, but one solution is to use third-party software to detect apps that have been assigned excessive privileges. Some systems allow all users to make changes to the core components or settings of the system, which is considered over-privileged access today. This was the standard operating procedure for early microcomputer and home computer systems, where there was no distinction between an administrator or root, and a regular user of the system. In some systems, non-administrator users are over-privileged by design, in the sense that they are allowed to modify internal structures of the system. In some environments, users are over-privileged because they have been inappropriately granted administrator or equivalent status. 
This can be because users tend to demand more privileges than they need, and so often end up being assigned unnecessary privileges. Some systems allow code executed by a user to access all rights of that user, which is known as over-privileged code. This was also standard operating procedure for early microcomputer and home computer systems. Malware, running as over-privileged code, can use this privilege to subvert the system. Almost all currently popular operating systems, and also many scripting applications, allow code too many privileges, usually in the sense that when a user executes code, the system allows that code all rights of that user. Weak passwords A credential attack occurs when a user account with administrative privileges is cracked and that account is used to provide malware with appropriate privileges. The attack typically succeeds because the weakest form of account security is used, often a short password that can be cracked using a dictionary or brute-force attack. Using strong passwords and enabling two-factor authentication can reduce this risk. With the latter enabled, even if an attacker can crack the password, they cannot use the account without also having the token possessed by the legitimate user of that account. Use of the same operating system Homogeneity can be a vulnerability. For example, when all computers in a network run the same operating system, upon exploiting one, a worm can exploit them all. In particular, Microsoft Windows or Mac OS X have such a large share of the market that an exploited vulnerability concentrating on either operating system could subvert a large number of systems. It is estimated that approximately 83% of malware infections between January and March 2020 were spread via systems running Windows 10. This risk is mitigated by segmenting the networks into different subnetworks and setting up firewalls to block traffic between them. Mitigation Antivirus / Anti-malware software Anti-malware (sometimes also called antivirus) programs block and remove some or all types of malware. For example, Microsoft Security Essentials (for Windows XP, Vista, and Windows 7) and Windows Defender (for Windows 8, 10 and 11) provide real-time protection. The Windows Malicious Software Removal Tool removes malicious software from the system. Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Tests found some free programs to be competitive with commercial ones. Typically, antivirus software can combat malware in the following ways: Real-time protection: Antivirus software can provide real-time protection against the installation of malware on a computer. This type of malware protection works the same way as antivirus protection in that the anti-malware software scans all incoming network data for malware and blocks any threats it comes across. Removal: Anti-malware programs can be used solely for detection and removal of malware that has already been installed onto a computer. This type of anti-malware software scans the contents of the Windows registry, operating system files, and installed programs on a computer and will provide a list of any threats found, allowing the user to choose which files to delete or keep, or to compare this list to a list of known malware components, removing files that match. 
Sandboxing: Sandboxing confines applications within a controlled environment, restricting their operations and isolating them from other applications on the host while limiting access to system resources. Browser sandboxing isolates web processes to prevent malware and exploits, enhancing security. Real-time protection A specific component of anti-malware software, commonly referred to as an on-access or real-time scanner, hooks deep into the operating system's core or kernel and functions in a manner similar to how certain malware itself would attempt to operate, though with the user's informed permission, in order to protect the system. Any time the operating system accesses a file, the on-access scanner checks whether the file is infected. Typically, when an infected file is found, execution is stopped and the file is quarantined to prevent further, potentially irreversible, damage to the system. Most AVs allow users to override this behaviour. This can have a considerable performance impact on the operating system, though the degree of impact is dependent on how many pages it creates in virtual memory. Sandboxing Sandboxing is a security model that confines applications within a controlled environment, restricting their operations to authorized "safe" actions and isolating them from other applications on the host. It also limits access to system resources like memory and the file system to maintain isolation. Browser sandboxing is a security measure that isolates web browser processes and tabs from the operating system to prevent malicious code from exploiting vulnerabilities. It helps protect against malware, zero-day exploits, and unintentional data leaks by trapping potentially harmful code within the sandbox. It involves creating separate processes, limiting access to system resources, running web content in isolated processes, monitoring system calls, and imposing memory constraints. Inter-process communication (IPC) is used for secure communication between processes. Escaping the sandbox involves targeting vulnerabilities in the sandbox mechanism or the operating system's sandboxing features. While sandboxing is not foolproof, it significantly reduces the attack surface of common threats. Keeping browsers and operating systems updated is crucial to mitigate vulnerabilities. Website security scans Website vulnerability scans check the website, detect malware, may note outdated software, and may report known security issues, in order to reduce the risk of the site being compromised. Network segregation Structuring a network as a set of smaller networks, and limiting the flow of traffic between them to that known to be legitimate, can hinder the ability of infectious malware to replicate itself across the wider network. Software-defined networking provides techniques to implement such controls. "Air gap" isolation or "parallel network" As a last resort, computers can be protected from malware, and the risk of infected computers disseminating trusted information can be greatly reduced, by imposing an "air gap" (i.e. completely disconnecting them from all other networks) and applying enhanced controls over the entry and exit of software and data from the outside world. However, malware can still cross the air gap in some situations, not least because of the need to introduce software into the air-gapped network, where it can damage the availability or integrity of assets. 
Website security scans Website vulnerability scans check a website for malware and outdated software, and may report known security issues, in order to reduce the risk of the site being compromised. Network segregation Structuring a network as a set of smaller networks, and limiting the flow of traffic between them to that known to be legitimate, can hinder the ability of infectious malware to replicate itself across the wider network. Software-defined networking provides techniques to implement such controls. "Air gap" isolation or "parallel network" As a last resort, computers can be protected from malware, and the risk of infected computers disseminating trusted information can be greatly reduced, by imposing an "air gap" (i.e. completely disconnecting them from all other networks) and applying enhanced controls over the entry and exit of software and data from the outside world. However, malware can still cross the air gap in some situations, not least because of the need to introduce software into the air-gapped network, where it can damage the availability or integrity of assets. Stuxnet is an example of malware that is introduced to the target environment via a USB drive, causing damage to processes supported in the environment without the need to exfiltrate data. AirHopper, BitWhisper, GSMem and Fansmitter are four techniques introduced by researchers that can leak data from air-gapped computers using electromagnetic, thermal and acoustic emissions. Research A bibliometric study of malware research trends from 2005 to 2015, considering criteria such as influential journals, highly cited articles, research areas, numbers of publications, keyword frequency, institutions, and authors, found an annual growth rate of 34.1%. North America led in research output, followed by Asia and Europe. China and India were identified as emerging contributors.
Technology
Basics_3
null
20911
https://en.wikipedia.org/wiki/Multiverse
Multiverse
The multiverse is the hypothetical set of all universes. Together, these universes are presumed to comprise everything that exists: the entirety of space, time, matter, energy, information, and the physical laws and constants that describe them. The different universes within the multiverse are called "parallel universes", "flat universes", "other universes", "alternate universes", "multiple universes", "plane universes", "parent and child universes", "many universes", or "many worlds". One common assumption is that the multiverse is a "patchwork quilt of separate universes all bound by the same laws of physics." The concept of multiple universes, or a multiverse, has been discussed throughout history, including Greek philosophy. It has evolved and has been debated in various fields, including cosmology, physics, and philosophy. Some physicists argue that the multiverse is a philosophical notion rather than a scientific hypothesis, as it cannot be empirically falsified. In recent years, there have been proponents and skeptics of multiverse theories within the physics community. Although some scientists have analyzed data in search of evidence for other universes, no statistically significant evidence has been found. Critics argue that the multiverse concept lacks testability and falsifiability, which are essential for scientific inquiry, and that it raises unresolved metaphysical issues. Max Tegmark and Brian Greene have proposed different classification schemes for multiverses and universes. Tegmark's four-level classification consists of Level I: an extension of our universe, Level II: universes with different physical constants, Level III: many-worlds interpretation of quantum mechanics, and Level IV: ultimate ensemble. Brian Greene's nine types of multiverses include quilted, inflationary, brane, cyclic, landscape, quantum, holographic, simulated, and ultimate. The ideas explore various dimensions of space, physical laws, and mathematical structures to explain the existence and interactions of multiple universes. Some other multiverse concepts include twin-world models, cyclic theories, M-theory, and black-hole cosmology. The anthropic principle suggests that the existence of a multitude of universes, each with different physical laws, could explain the asserted appearance of fine-tuning of our own universe for conscious life. The weak anthropic principle posits that we exist in one of the few universes that support life. Debates around Occam's razor and the simplicity of the multiverse versus a single universe arise, with proponents like Max Tegmark arguing that the multiverse is simpler and more elegant. The many-worlds interpretation of quantum mechanics and modal realism, the belief that all possible worlds exist and are as real as our world, are also subjects of debate in the context of the anthropic principle. History of the concept According to some, the idea of infinite worlds was first suggested by the pre-Socratic Greek philosopher Anaximander in the sixth century BCE. However, there is debate as to whether he believed in multiple worlds, and if he did, whether those worlds were co-existent or successive. The first to whom we can definitively attribute the concept of innumerable worlds are the Ancient Greek Atomists, beginning with Leucippus and Democritus in the 5th century BCE, followed by Epicurus (341–270 BCE) and Lucretius (1st century BCE). 
In the third century BCE, the philosopher Chrysippus suggested that the world eternally expires and regenerates, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages. The American philosopher and psychologist William James used the term "multiverse" in 1895, but in a different context. The concept first appeared in the modern scientific context in the course of the debate between Boltzmann and Zermelo in 1895. In Dublin in 1952, Erwin Schrödinger gave a lecture in which he jocularly warned his audience that what he was about to say might "seem lunatic". He said that when his equations seemed to describe several different histories, these were "not alternatives, but all really happen simultaneously". This sort of duality is called "superposition". Search for evidence In the 1990s, after works of fiction about the concept gained popularity, scientific discussions about the multiverse and journal articles about it gained prominence. Around 2010, scientists such as Stephen M. Feeney analyzed Wilkinson Microwave Anisotropy Probe (WMAP) data and claimed to find evidence suggesting that this universe collided with other (parallel) universes in the distant past. However, a more thorough analysis of data from WMAP and from the Planck satellite, which has a resolution three times higher than WMAP, did not reveal any statistically significant evidence of such a bubble-universe collision. In addition, there was no evidence of any gravitational pull of other universes on ours. In 2015, an astrophysicist may have found evidence of alternate or parallel universes by looking back to a time immediately after the Big Bang, although this is still a matter of debate among physicists. Dr. Ranga-Ram Chary, after analyzing the cosmic radiation spectrum, found a signal 4,500 times brighter than it should have been, based on the number of protons and electrons scientists believe existed in the very early universe. This signal – an emission line that arose from the formation of atoms during the era of recombination – is more consistent with a universe whose ratio of matter particles to photons is about 65 times greater than our own. There is a 30% chance that this signal is noise and not really a signal at all; however, it is also possible that it exists because a parallel universe dumped some of its matter particles into our universe. If additional protons and electrons had been added to our universe during recombination, more atoms would have formed, more photons would have been emitted during their formation, and the signature line that arose from all of these emissions would be greatly enhanced. Chary himself is skeptical, noting that the signature he has isolated may instead be a consequence of incoming light from distant galaxies, or even of clouds of dust surrounding our own galaxy. Proponents and skeptics Modern proponents of one or more of the multiverse hypotheses include Lee Smolin, Don Page, Brian Greene, Max Tegmark, Alan Guth, Andrei Linde, Michio Kaku, David Deutsch, Leonard Susskind, Alexander Vilenkin, Yasunori Nomura, Raj Pathria, Laura Mersini-Houghton, Neil deGrasse Tyson, Sean Carroll and Stephen Hawking. Scientists who are generally skeptical of the concept of a multiverse or of popular multiverse hypotheses include Sabine Hossenfelder, David Gross, Paul Steinhardt, Anna Ijjas, Abraham Loeb, David Spergel, Neil Turok, Viatcheslav Mukhanov, Michael S.
Turner, Roger Penrose, George Ellis, Joe Silk, Carlo Rovelli, Adam Frank, Marcelo Gleiser, Jim Baggott and Paul Davies. Arguments against multiverse hypotheses In his 2003 New York Times opinion piece, "A Brief History of the Multiverse", author and cosmologist Paul Davies offered a variety of arguments that multiverse hypotheses are non-scientific. George Ellis, writing in August 2011, provided a criticism of the multiverse, pointing out that it is not a traditional scientific theory. He accepts that the multiverse is thought to exist far beyond the cosmological horizon, and emphasized that it is theorized to be so far away that it is unlikely any evidence will ever be found. Ellis also explained that some theorists do not believe the lack of empirical testability and falsifiability is a major concern, but he is opposed to that line of thinking. Ellis says that scientists have proposed the idea of the multiverse as a way of explaining the nature of existence, and points out that it ultimately leaves those questions unresolved, because it is a metaphysical issue that cannot be resolved by empirical science. He argues that observational testing is at the core of science and should not be abandoned. Philosopher Philip Goff argues that the inference of a multiverse to explain the apparent fine-tuning of the universe is an example of the inverse gambler's fallacy. Stoeger, Ellis, and Kircher note that in a true multiverse theory "the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support". In May 2020, astrophysicist Ethan Siegel, expressing criticism in a Forbes blog post, wrote that parallel universes would have to remain a science-fiction dream for the time being, based on the scientific evidence available to us. Scientific American contributor John Horgan also argues against the idea of a multiverse, claiming that multiverses are "bad for science". Types Max Tegmark and Brian Greene have devised classification schemes for the various theoretical types of multiverses and universes that they might comprise. Max Tegmark's four levels Cosmologist Max Tegmark has provided a taxonomy of universes beyond the familiar observable universe. The four levels of Tegmark's classification are arranged such that subsequent levels can be understood to encompass and expand upon previous levels. They are briefly described below. Level I: An extension of our universe A prediction of cosmic inflation is the existence of an infinite ergodic universe, which, being infinite, must contain Hubble volumes realizing all initial conditions. Accordingly, an infinite universe will contain an infinite number of Hubble volumes, all having the same physical laws and physical constants. In regard to configurations such as the distribution of matter, almost all will differ from our Hubble volume. However, because there are infinitely many, far beyond the cosmological horizon, there will eventually be Hubble volumes with similar, and even identical, configurations. Tegmark estimates that a volume identical to ours should be about 10^(10^115) meters away from us. Given infinite space, there would be an infinite number of Hubble volumes identical to ours in the universe. This follows directly from the cosmological principle, wherein it is assumed that our Hubble volume is not special or unique.
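The double-exponential scale of Tegmark's estimate can be motivated by a simple pigeonhole sketch. This is an illustration only: the configuration count N below is an assumed stand-in for Tegmark's quantum-state bookkeeping, not his actual derivation.

```latex
\[
\left(\frac{D}{R}\right)^{3} > N
\;\Longrightarrow\;
D \gtrsim R\,N^{1/3}
\approx 10^{26}\,\mathrm{m}\cdot 10^{10^{115}/3}
\approx 10^{10^{115}}\,\mathrm{m}
\]
% Here R ~ 10^26 m is the radius of a Hubble volume and N ~ 10^(10^115)
% is the assumed number of distinguishable configurations of one such
% volume: a ball of radius D holding more than N Hubble volumes must
% repeat a configuration, and prefactors (even the cube root) are
% negligible at double-exponential scale.
```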
Level II: Universes with different physical constants In the eternal inflation theory, which is a variant of the cosmic inflation theory, the multiverse or space as a whole is stretching and will continue doing so forever, but some regions of space stop stretching and form distinct bubbles (like gas pockets in a loaf of rising bread). Such bubbles are embryonic Level I multiverses. Different bubbles may experience different spontaneous symmetry breaking, which results in different properties, such as different physical constants. Level II also includes John Archibald Wheeler's oscillatory universe theory and Lee Smolin's fecund universes theory. Level III: Many-worlds interpretation of quantum mechanics Hugh Everett III's many-worlds interpretation (MWI) is one of several mainstream interpretations of quantum mechanics. In brief, one aspect of quantum mechanics is that certain observations cannot be predicted absolutely. Instead, there is a range of possible observations, each with a different probability. According to the MWI, each of these possible observations corresponds to a different "world" within the universal wavefunction, with each world as real as ours. Suppose a six-sided die is thrown and that the result of the throw corresponds to a quantum-mechanical observable. All six possible ways the die can fall correspond to six different worlds. In the case of the Schrödinger's cat thought experiment, both outcomes would be "real" in at least one "world". Tegmark argues that a Level III multiverse does not contain more possibilities in the Hubble volume than a Level I or Level II multiverse. In effect, all the different worlds created by "splits" in a Level III multiverse with the same physical constants can be found in some Hubble volume in a Level I multiverse. Tegmark writes that, "The only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert space." Similarly, all Level II bubble universes with different physical constants can, in effect, be found as "worlds" created by "splits" at the moment of spontaneous symmetry breaking in a Level III multiverse. According to Yasunori Nomura, Raphael Bousso, and Leonard Susskind, this is because global spacetime appearing in the (eternally) inflating multiverse is a redundant concept. This implies that the multiverses of Levels I, II, and III are, in fact, the same thing. This hypothesis is referred to as "Multiverse = Quantum Many Worlds". According to Yasunori Nomura, this quantum multiverse is static, and time is a simple illusion. Another version of the many-worlds idea is H. Dieter Zeh's many-minds interpretation. Level IV: Ultimate ensemble The ultimate mathematical universe hypothesis is Tegmark's own. This level considers all universes that can be described by different mathematical structures to be equally real. Tegmark argues that this "implies that any conceivable parallel universe theory can be described at Level IV" and that it "subsumes all other ensembles, therefore brings closure to the hierarchy of multiverses, and there cannot be, say, a Level V." Jürgen Schmidhuber, however, says that the set of mathematical structures is not even well-defined and that it admits only universe representations describable by constructive mathematics – that is, computer programs.
Schmidhuber explicitly includes universe representations describable by non-halting programs whose output bits converge after a finite time, although the convergence time itself may not be predictable by a halting program, due to the undecidability of the halting problem. He also explicitly discusses the more restricted ensemble of quickly computable universes. Brian Greene's nine types The American theoretical physicist and string theorist Brian Greene has discussed nine types of multiverses: Quilted The quilted multiverse works only in an infinite universe. With an infinite amount of space, every possible event will occur an infinite number of times. However, the speed of light prevents us from being aware of these other identical areas. Inflationary The inflationary multiverse is composed of various pockets in which inflation fields collapse and form new universes. Brane The brane multiverse version postulates that our entire universe exists on a membrane (brane) which floats in a higher dimension or "bulk". In this bulk, there are other membranes with their own universes. These universes can interact with one another, and when they collide, the violence and energy produced are more than enough to give rise to a Big Bang. The branes float or drift near each other in the bulk, and every few trillion years, attracted by gravity or some other force we do not understand, they collide. This repeated contact gives rise to multiple or "cyclic" Big Bangs. This particular hypothesis falls under the string theory umbrella as it requires extra spatial dimensions. Cyclic The cyclic multiverse has multiple branes that have collided, causing Big Bangs. The universes bounce back and pass through time until they are pulled back together and again collide, destroying the old contents and creating them anew. Landscape The landscape multiverse relies on string theory's Calabi–Yau spaces. Quantum fluctuations drop the shapes to a lower energy level, creating a pocket with a set of laws different from that of the surrounding space. Quantum The quantum multiverse creates a new universe whenever events diverge, as in the many-worlds interpretation of quantum mechanics. Holographic The holographic multiverse is derived from the theory that the surface area of a space can encode the contents of the volume of the region. Simulated The simulated multiverse exists on complex computer systems that simulate entire universes. A related hypothesis, put forward as a possibility by astronomer Avi Loeb, is that universes may be creatable in the laboratories of advanced technological civilizations who have a theory of everything. Other related hypotheses include brain-in-a-vat-type scenarios, where the perceived universe is either simulated in a low-resource way or not perceived directly by the virtual/simulated inhabitant species. Ultimate The ultimate multiverse contains every mathematically possible universe under different laws of physics. Twin-world models There are models of two related universes that, for example, attempt to explain the baryon asymmetry – why there was more matter than antimatter at the beginning – with a mirror anti-universe. One two-universe cosmological model could explain the Hubble constant (H0) tension via interactions between the two worlds. The "mirror world" would contain copies of all existing fundamental particles.
Another twin/pair-world or "bi-world" cosmology has been shown theoretically to be able to solve the cosmological constant (Λ) problem, which is closely related to dark energy: two interacting worlds, each with a large Λ, could result in a small shared effective Λ. Cyclic theories In several theories there is a series of self-sustaining cycles, in some cases infinitely many, typically a series of Big Crunches (or Big Bounces). The respective universes do not exist at once, however, but form or follow one another in a logical order or sequence, with key natural constituents potentially varying between universes (see § Anthropic principle). M-theory A multiverse of a somewhat different kind has been envisaged within string theory and its higher-dimensional extension, M-theory. These theories require the presence of 10 or 11 spacetime dimensions respectively. The extra six or seven dimensions may either be compactified on a very small scale, or our universe may simply be localized on a dynamical (3+1)-dimensional object, a D3-brane. This opens up the possibility that there are other branes which could support other universes. Black-hole cosmology Black-hole cosmology is a cosmological model in which the observable universe is the interior of a black hole existing as one of possibly many universes inside a larger universe. This includes the theory of white holes, which are on the opposite side of space-time. Anthropic principle The concept of other universes has been proposed to explain how our own universe appears to be fine-tuned for conscious life as we experience it. If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), then some of these universes (even if very few) would have the combination of laws and fundamental parameters suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve. The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developed consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand life), those conditions do not require intelligent design as an explanation for the conditions in the Universe that promote our existence in it. An early form of this reasoning is evident in Arthur Schopenhauer's 1844 work "Von der Nichtigkeit und dem Leiden des Lebens", where he argues that our world must be the worst of all possible worlds, because if it were significantly worse in any respect it could not continue to exist. Occam's razor Proponents and critics disagree about how to apply Occam's razor. Critics argue that to postulate an almost infinite number of unobservable universes, just to explain our own universe, is contrary to Occam's razor. However, proponents argue that in terms of Kolmogorov complexity the proposed multiverse is simpler than a single idiosyncratic universe; multiverse proponent Max Tegmark argues, for example, that an entire ensemble is often much simpler than one of its members, in the sense of algorithmic information content.
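A standard toy example from algorithmic information theory makes this point concrete; it illustrates the general principle rather than Tegmark's own computation.

```latex
\[
K\!\left(\{0,1\}^{n}\right) = O(\log n),
\qquad
K(x) \ge n - O(1) \ \text{for almost all } x \in \{0,1\}^{n}
\]
% K denotes Kolmogorov complexity. The ensemble of all n-bit strings is
% generated by a short program that merely encodes n and enumerates,
% while a typical individual string is incompressible: a whole ensemble
% can be algorithmically simpler than almost every one of its members.
```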
Possible worlds and real worlds In any given set of possible universes – e.g. in terms of histories or variables of nature – not all may ever be realized, and some may be realized many times. For example, over infinite time there could, in some potential theories, be infinitely many universes, but only a small (or relatively small) number of universes in which humanity could exist, and only one in which it ever does exist (with a unique history). It has been suggested that a universe that "contains life, in the form it has on Earth, is in a certain sense radically non-ergodic, in that the vast majority of possible organisms will never be realized". On the other hand, some scientists, theories and popular works conceive of a multiverse in which the universes are so similar that humanity exists in many equally real separate universes, but with varying histories. There is a debate about whether the other worlds are real in the many-worlds interpretation (MWI) of quantum mechanics. In Quantum Darwinism one does not need to adopt an MWI in which all of the branches are equally real. Modal realism Possible worlds are a way of explaining probability and hypothetical statements. Some philosophers, such as David Lewis, posit that all possible worlds exist and that they are just as real as the world we live in. This position is known as modal realism.
Physical sciences
Physical cosmology
Astronomy
20922
https://en.wikipedia.org/wiki/Molotov%20cocktail
Molotov cocktail
A Molotov cocktail (among several other names; see § Other names below) is a hand-thrown incendiary weapon consisting of a frangible container filled with flammable substances and equipped with a fuse (typically a glass bottle filled with flammable liquids sealed with a cloth wick). In use, the fuse attached to the container is lit and the weapon is thrown, shattering on impact. This ignites the flammable substances contained in the bottle and spreads flames as the fuel burns. Due to their relative ease of production, Molotov cocktails are typically improvised weapons. Their improvised usage spans criminals, gangsters, rioters, football hooligans, urban guerrillas, terrorists, irregular soldiers, freedom fighters, and even regular soldiers; usage in the latter case is often due to a shortage of equivalent military-issued munitions. Despite the weapon's improvised nature and uncertain quality, many modern militaries train their soldiers in the use of Molotov cocktails. Moreover, Molotov cocktails are not always improvised in the field: it is not uncommon for them to be mass-produced to a certain standard as part of preparation for combat. Examples include the anti-invasion preparations of the British Home Guard during World War II and the Ukrainian volunteer units during the 2022 Russian invasion of Ukraine. During World War II, Molotov cocktails were even factory-produced in several countries, such as Finland, Nazi Germany, the Soviet Union, Sweden, and the United States, some featuring specially designed frangible containers and fuses (such as the US Frangible Grenade M1). Etymology The name "Molotov cocktail" was coined by the Finns during the Winter War in 1939, as a pejorative reference to Soviet foreign minister Vyacheslav Molotov, one of the architects of the Molotov–Ribbentrop Pact on the eve of World War II. The name's origin lay in the propaganda Molotov produced during the Winter War, mainly his declaration on Soviet state radio that incendiary bombing missions over Finland were actually "airborne humanitarian food deliveries" for their "starving" neighbours. As a result, the Finns sarcastically dubbed the Soviet incendiary cluster bombs "Molotov bread baskets" in reference to Molotov's propaganda broadcasts. When the hand-held bottle firebomb was developed to attack and destroy Soviet tanks, the Finns called it the "Molotov cocktail", as "a drink to go with his food parcels". Despite the now infamous name, the formal Finnish military term for the weapon type was, and continues to be, "burn-bottle" (Finnish: polttopullo; Fenno-Swedish: brännflaska). Other names The weapon most often known as the Molotov cocktail goes under a great variety of other names around the globe. Some are more formal than others, but the weapon is often given a descriptive name in the respective language.
Synonyms and nicknames include: bottle bomb; bottle grenade; burn bottle; burning bottle; fire bomb (not to be confused with other incendiary devices also known as firebombs); fire bottle; flame bomb; flame bottle; gasoline bomb or gas bomb, due to gasoline being a common filler (the latter not to be confused with tear gas); incendiary bottle; Molly, an abbreviation of Molotov cocktail commonly used in video games; Molotov, likewise an abbreviation of Molotov cocktail; petrol bomb, due to petrol being a common filler, used often in Northern Ireland; and poor man's grenade, due to its improvised nature, along with foreign nicknames glossed as "punch" and "poncho". Military nomenclature includes terms glossed as "burn-bottle" (Fenno-Swedish: brännflaska), "fire bottle" and "fire hand grenade", as well as the designations frangible grenade, incendiary frangible grenade, and incendiary bottle grenade. Design A Molotov cocktail is a glass bottle containing a flammable substance such as petrol (gasoline), alcohol or a napalm-like mixture, and a source of ignition, such as a burning cloth wick held in place by the bottle's stopper. The wick is usually soaked in alcohol or kerosene rather than petrol. For winter warfare, one method of ignition has been to attach storm matches to the side of the bottle, as these are less likely to be blown out by wind. Some examples are fitted with ballast, such as sand inside the bottle, for improved throwing accuracy. In action, the wick or match is lit and the bottle hurled at a target such as a vehicle or fortification. When the bottle smashes on impact, the ensuing cloud of fuel droplets and vapour is ignited by the attached wick, causing an immediate fireball followed by spreading flames as the remainder of the fuel is consumed. Other flammable liquids, such as diesel fuel, methanol, turpentine, jet fuel, acetone, and isopropyl alcohol (rubbing alcohol), have been used in place of, or combined with, petrol. Thickening agents, such as solvents, extruded polystyrene (XPS) foam (known colloquially as styrofoam), baking soda, petroleum jelly, tar, strips of tyre tubing, nitrocellulose, motor oil, rubber cement, detergent and dish soap, have been added to promote adhesion of the burning liquid and to create clouds of thick, choking smoke. There also exist variations on the Molotov cocktail concept in which the bottle is filled with a smoke-generating mixture such as sulfur trioxide dissolved in chlorosulfonic acid. These so-called "smoke bottles" need no source of ignition, as the mixture reacts with the air once the bottle is smashed. Development and use in war Spanish Civil War Improvised incendiary devices of this type were used in warfare for the first time in the Spanish Civil War, between July 1936 and April 1939, before they became known as "Molotov cocktails". In 1936, General Francisco Franco ordered Spanish Nationalist forces to use the weapon against Soviet T-26 tanks supporting the Spanish Republicans in a failed assault on the Nationalist stronghold of Seseña, near Toledo, south of Madrid. After that, both sides used simple petrol bombs or petrol-soaked blankets with some success. Tom Wintringham, a veteran of the International Brigades, later publicised his recommended method of using them. Khalkhin Gol The Battle of Khalkhin Gol, a border conflict of 1939 ostensibly between Mongolia and Manchukuo, saw heavy fighting between Japanese and Soviet forces. Short of anti-tank equipment, Japanese infantry attacked Soviet tanks with gasoline-filled bottles.
Japanese infantrymen claimed that several hundred Soviet tanks had been destroyed this way, though Soviet loss records do not support this assessment. World War II Finland On 30 November 1939, the Soviet Union attacked Finland, starting what came to be known as the Winter War. The Finns perfected the design and tactical use of the petrol bomb. The fuel for the Molotov cocktail was refined to a slightly sticky mixture of alcohol, kerosene, tar, and potassium chlorate. Further refinements included the attachment of wind-proof matches or a phial of chemicals that would ignite on breakage, thereby removing the need to pre-ignite the bottle; leaving the bottle about one-third empty was also found to make breaking more likely. A British War Office report dated June 1940 took note of the Finnish use of the weapon. Molotov cocktails were eventually mass-produced by the Alko corporation at its Rajamäki distillery, bundled with matches to light them. A bottle was filled with a mixture of petrol and paraffin, plus a small amount of tar. The basic bottle had two long pyrotechnic storm matches attached to either side. Before use, one or both of the matches were lit; when the bottle broke on impact, the mixture ignited. The storm matches were found to be safer to use than a burning rag on the mouth of the bottle. There was also an "A bottle", which replaced the matches with a small ampoule inside the bottle that ignited when the bottle broke. By spring 1940, Alko had produced 542,104 bottles. Great Britain Early in 1940, with the prospect of immediate invasion, the possibilities of the petrol bomb gripped the imagination of the British public. For laypersons, the petrol bomb had the benefit of using entirely familiar and available materials, and large numbers were quickly improvised, with the intention of using them against enemy tanks. The Finns had found that they were effective when used in the right way and in sufficient numbers. Although the experience of the Spanish Civil War received more publicity, the more sophisticated petroleum warfare tactics of the Finns were not lost on British commanders, and General Ironside endorsed the weapon in his 5 June address to LDV leaders. Wintringham advised that a tank that was isolated from supporting infantry was potentially vulnerable to men who had the required determination and cunning to get close. Rifles or even a shotgun would be sufficient to persuade the crew to close all the hatches, after which the view from the tank is very limited; a turret-mounted machine gun has a very slow traverse and cannot hope to fend off attackers coming from all directions. Once sufficiently close, it is possible to hide where the tank's gunner cannot see: "The most dangerous distance away from a tank is 200 yards; the safest distance is six inches." Petrol bombs will soon produce a pall of blinding smoke, and a well-placed explosive package or even a stout iron bar in the tracks can immobilise the vehicle, leaving it at the mercy of further petrol bombs – which will suffocate the engine and possibly the crew – or an explosive charge or anti-tank mine. By August 1940, the War Office had produced training instructions for the creation and use of Molotov cocktails. The instructions suggested scoring the bottles vertically with a diamond to ensure breakage and providing fuel-soaked rag, windproof matches or a length of cinema film (then composed of highly flammable nitrocellulose) as a source of ignition.
On 29 July 1940, manufacturers Albright & Wilson of Oldbury demonstrated to the RAF how their white phosphorus could be used to ignite incendiary bombs. The demonstration involved throwing glass bottles containing a mixture of petrol and phosphorus at pieces of wood and into a hut. On breaking, the phosphorus was exposed to the air and spontaneously ignited; the petrol also burned, resulting in a fierce fire. Because of safety concerns, the RAF was not interested in white phosphorus as a source of ignition, but the idea of a self-igniting petrol bomb took hold. Initially known as an A.W. bomb, it was officially named the No. 76 Grenade, but was more commonly known as the SIP (Self-Igniting Phosphorus) grenade. The perfected list of ingredients was white phosphorus, benzene, water and a two-inch strip of raw rubber, all in a half-pint bottle sealed with a crown stopper. Over time, the rubber would slowly dissolve, making the contents slightly sticky, and the mixture would separate into two layers – this was intentional, and the grenade should not be shaken to mix the layers, as this would only delay ignition. When thrown against a hard surface, the glass would shatter and the contents would instantly ignite, liberating choking fumes of phosphorus pentoxide and sulfur dioxide as well as producing a great deal of heat. Strict instructions were issued to store the grenades safely, preferably underwater and certainly never in a house. Mainly issued to the Home Guard as an anti-tank weapon, it was produced in vast numbers; by August 1941 well over 6,000,000 had been manufactured. Many were sceptical about the efficacy of Molotov cocktails and SIP grenades against the more modern German tanks. Weapon designer Stuart Macrae witnessed a trial of the SIP grenade at Farnborough: "There was some concern that, if the tank drivers could not pull up quickly enough and hop out, they were likely to be frizzled to death, but after looking at the bottles they said they would be happy to take a chance." The drivers were proved right: trials on modern British tanks confirmed that Molotov and SIP grenades caused the occupants of the tanks "no inconvenience whatsoever." Wintringham, though enthusiastic about improvised weapons, cautioned against a reliance on petrol bombs and repeatedly emphasised the importance of using explosive charges. United States The US Army designated Molotov cocktails as frangible grenades. These exhibited considerable variation, from devices that used thin fuel with varied ignition systems to devices that used obscurants and chemical weapons. Various frangible grenade designs were developed, with those investigated by the NDRC showing the highest level of technological sophistication; these incendiary devices employed the most technologically advanced fillers of the conflict. The M1 frangible grenade was the standard US device, but each division of the Army could come up with its own. Two non-industrial models of these grenades were developed and produced in quantity; in all, about five thousand were made. The frangible grenades featured standardized chemical igniters, some of which were specific to particular flammable fillers. Most of the frangible devices were made in an improvised way, with no standardization of bottle or filling. The frangible grenades were eventually declared obsolete due to their very limited destructive effect. 1,107 frangible grenades of the M1, NP type were supplied to the Navy and its units for field use at Iwo Jima.
The United States Marine Corps developed a version during World War II that used a tube of nitric acid and a lump of metallic sodium to ignite a mixture of petrol and diesel fuel. Other fronts of World War II The Polish Home Army developed a version which ignited on impact without the need of a wick. Ignition was caused by a reaction between concentrated sulfuric acid mixed with the fuel and a mixture of potassium chlorate and sugar crystallized from solution onto a rag attached to the bottle. During the Norwegian campaign in 1940, the Norwegian Army, lacking suitable anti-tank weaponry, had to rely on petrol bombs and other improvised weapons to fight German armoured vehicles. Instructions from the Norwegian High Command sent to army units in April 1940 encouraged soldiers to start ad-hoc production of "Hitler cocktails" (a different take on the Finnish nickname for the weapon) to combat tanks and armoured cars. During the campaign there were instances of petrol bombs being relatively effective against the lighter tanks employed in Norway by Germany, such as the Panzer I and Panzer II. The Troubles During the Troubles, both the Provisional Irish Republican Army (PIRA) and civilians used petrol bombs, although in different ways. Civilians tended to use petrol bombs and rocks against police officers in riots, whereas the PIRA tended to use them in attacks rather than in self-defence. Over time, as the PIRA became more co-ordinated, it shifted to using IEDs rather than petrol bombs. Modern warfare During the Second Battle of Fallujah in 2004, U.S. Marines employed Molotov cocktails made with "one part liquid laundry detergent, two parts gas [gasoline]" while clearing houses "when contact is made in a house and the enemy must be burned out". The tactic "was developed in response to the enemy's tactics" of guerrilla warfare and particularly martyrdom tactics which often resulted in U.S. Marine casualties. The cocktail was a less expedient alternative to white phosphorus mortar rounds or propane tanks detonated with C4 (nicknamed the "House Guest"), all of which proved effective at burning out engaged enemy combatants. During the 2022 Russian invasion of Ukraine, the Ukrainian Defense Ministry told civilians to make Molotov cocktails, locally called "Bandera smoothies", to fight Russian troops. The defense ministry distributed a recipe for producing Molotov cocktails to civilians through Ukrainian television, which included the use of styrofoam as a thickening agent to help the burning liquid stick to vehicles or other targets. The Pravda Brewery of Lviv, which converted from making beer to Molotov cocktails, said that its recipe was "3 cups polystyrene, 2 cups grated soap, 500 millilitres gasoline, 100 millilitres oil, 1 jumbo tampon fuse." The Russian media control organisation Roskomnadzor sued Twitter for not removing instructions for how to prepare and use Molotov cocktails, and Twitter was fined 3 million roubles (US$41,000). Civilian use Molotov cocktails were reportedly used in the United States for arson attacks on shops and other buildings during the 1992 Los Angeles riots. Molotov cocktails were used by protesters and civilian militia in Ukraine during Euromaidan and the Revolution of Dignity. Protesters during the Ferguson riots also used Molotov cocktails. In Bangladesh, during anti-government protests in 2013 and 2014, many buses and cars were targeted with petrol bombs.
A number of people burned to death and many more were injured in these attacks. During the 2019–20 Hong Kong protests, riots broke out and Molotov cocktails were used to attack the police and create roadblocks. They were also used to attack an MTR station, causing severe damage. A journalist was also struck by a Molotov cocktail during the protests. Molotov cocktails were used by some during the riots following the 2020 George Floyd protests in the United States. Non-incendiary variants During the 2014–17 Venezuelan protests, protesters used Molotov cocktails similar to those used by demonstrators in other countries. As the 2017 Venezuelan protests intensified, demonstrators began using "Puputovs" (a portmanteau of "poo-poo" and "Molotov"), glass containers filled with excrement and thrown at authorities, after the PSUV ruling-party official Jacqueline Faría mocked protesters who had to crawl through sewage in Caracas' Guaire River to avoid tear gas. On 8 May, the hashtag #puputov became the top trending hashtag on Twitter in Venezuela, as reports circulated of authorities vomiting after being drenched in excrement. A month later, on 4 June 2017, during protests against Donald Trump in Portland, Oregon, police claimed protesters began throwing balloons filled with an "unknown, foul-smelling liquid" at officers. Legality As incendiary devices, Molotov cocktails are illegal to manufacture or possess in many regions. In the United States, Molotov cocktails are considered "destructive devices" under the National Firearms Act and are regulated by the ATF. Wil Casey Floyd, from Elkhart Lake, Wisconsin, was arrested after throwing Molotov cocktails at Seattle police officers during a protest in May 2016; he pleaded guilty to using the incendiary devices in February 2018. In Simpson County, Kentucky, 20-year-old Trey Alexander Gwathney-Law attempted to burn Franklin-Simpson County Middle School with five Molotov cocktails; he was found guilty of making and possessing illegal firearms and was sentenced to 20 years in prison in 2018. Symbolism Due to its ease of production and its extensive use by civilian and partisan forces, the Molotov cocktail has become a symbol of civil unrest, uprising, and revolution. The contrast between the Molotov cocktail and an organized force has become a recurring image in popular culture, and the weapon often appears in video games.
Technology
Incendiary weapons
null
20947
https://en.wikipedia.org/wiki/Adobe%20Flash
Adobe Flash
Adobe Flash (formerly Macromedia Flash and FutureSplash) is a discontinued multimedia software platform used for production of animations, rich internet applications, desktop applications, mobile apps, mobile games, and embedded web browser video players. About Flash displays text, vector graphics, and raster graphics to provide animations, video games, and applications. It allows streaming of audio and video, and can capture mouse, keyboard, microphone, and camera input. Artists may produce Flash graphics and animations using Adobe Animate (formerly known as Adobe Flash Professional). Software developers may produce applications and video games using Adobe Flash Builder, FlashDevelop, Flash Catalyst, or any text editor combined with the Apache Flex SDK. End users view Flash content via Flash Player (for web browsers), Adobe AIR (for desktop or mobile apps), or third-party players such as Scaleform (for video games). Adobe Flash Player (available on Microsoft Windows, macOS, and Linux) enables end users to view Flash content using web browsers. Adobe Flash Lite enabled viewing Flash content on older smartphones, but has since been discontinued and superseded by Adobe AIR. The ActionScript programming language allows the development of interactive animations, video games, web applications, desktop applications, and mobile applications. Programmers can implement Flash software using an IDE such as Adobe Animate, Adobe Flash Builder, Adobe Director, FlashDevelop, or Powerflasher FDT. Adobe AIR enables full-featured desktop and mobile applications to be developed with Flash and published for Windows, macOS, Android, iOS, Xbox One, PlayStation 4, Wii U, and Nintendo Switch. Flash was initially used to create fully interactive websites, but this approach was phased out with the introduction of HTML5. Instead, Flash found a niche as the dominant platform for online multimedia content, particularly for browser games. Following an open letter written by Steve Jobs in 2010 stating that he would not approve the use of Flash on Apple's iOS devices due to numerous security flaws, use of Flash declined as Adobe transitioned to the Adobe AIR platform. Flash Player was deprecated in 2017 and officially discontinued at the end of 2020 for all users outside mainland China and for all non-enterprise users, with many web browsers and operating systems scheduled to remove the Flash Player software around the same time. Adobe continues to develop Adobe Animate, which supports web standards such as HTML5 instead of the Flash format. Applications Websites In the early 2000s, Flash was widely installed on desktop computers, and was often used to display interactive web pages and online games, and to play video and audio content. In 2005, YouTube was founded by former PayPal employees, and it used Adobe Flash Player as the means to display compressed video content on the web. Between 2000 and 2010, numerous businesses used Flash-based websites to launch new products or to create interactive company portals. Notable users include Nike, Hewlett-Packard (HP), Nokia, General Electric, World Wildlife Fund, HBO, Cartoon Network, Disney, and Motorola. After Adobe introduced hardware-accelerated 3D for Flash (Stage3D), Flash websites saw a growth of 3D content for product demonstrations and virtual tours. In 2007, YouTube began offering videos in HTML5 format to support the iPhone (and later the iPad), which did not support Flash Player.
After a controversy with Apple, Adobe stopped developing Flash Player for mobile devices, focusing its efforts on Adobe AIR applications and HTML5 animation. In 2011, Google introduced Google Swiffy, a tool that converted Flash animation to HTML5 and which Google later used to automatically convert Flash web ads for mobile devices. In 2016, Google discontinued Swiffy and its support. In 2015, YouTube switched to HTML5 technology on most devices by default; however, YouTube supported the Flash-based video player for older web browsers and devices until 2017. Rich Internet Applications After Flash 5 introduced ActionScript in 2000, developers combined the visual and programming capabilities of Flash to produce interactive experiences and applications for the Web. Such Web-based applications eventually became known as "Rich Internet Applications" and later "Rich Web Applications". In 2004, Macromedia Flex was released, specifically targeting the application development market. Flex introduced new user interface components, advanced data visualization components, data remoting, and a modern IDE (Flash Builder). Flex competed with Asynchronous JavaScript and XML (AJAX) and Microsoft Silverlight during its tenure. Flex was upgraded to support integration with remote data sources, using AMF, BlazeDS, Adobe LiveCycle, Amazon Elastic Compute Cloud, and others. Between 2006 and 2016, the Speedtest.net web service conducted over 9.0 billion speed tests with a utility built with Adobe Flash. In 2016, the service shifted to HTML5 due to the decreasing availability of Adobe Flash Player on PCs. Developers could create rich internet applications and browser plugin-based applets in the ActionScript 3.0 programming language with IDEs including Adobe Flash Builder, FlashDevelop and Powerflasher FDT. Flex applications were typically built using Flex frameworks such as PureMVC. Video games Flash video games were popular on the Internet, with portals like Newgrounds, Kongregate, and Armor Games dedicated to hosting Flash-based games. Many Flash games were developed by individuals or groups of friends due to the simplicity of the software. Popular Flash games include FarmVille, Alien Hominid, QWOP, Club Penguin, and Dofus. Adobe introduced various technologies to help build video games, including Adobe AIR (to release games for desktop or mobile platforms), Adobe Scout (to improve performance), CrossBridge (to convert C++-based games to run in Flash), and Stage3D (to support GPU-accelerated video games). 3D frameworks like Away3D and Flare3D simplified creation of 3D content for Flash. Adobe AIR allows the creation of Flash-based mobile games, which may be published to the Google Play and Apple app stores. Flash was also used to build interfaces and HUDs for 3D video games using Scaleform GFx, a technology that renders Flash content within non-Flash video games. Scaleform is supported by more than 10 major video game engines, including Unreal Engine 3, CryEngine, and PhyreEngine, and has been used to provide 3D interfaces for more than 150 major video game titles since its launch in 2003. Film and animation Notable users of Flash include DHX Media Vancouver for productions including Pound Puppies, Littlest Pet Shop and My Little Pony: Friendship Is Magic, Fresh TV for Total Drama, Nelvana for 6teen and Clone High, Williams Street for Metalocalypse and Squidbillies, Nickelodeon Animation Studio for El Tigre: The Adventures of Manny Rivera, Starz Media for Wow! Wow!
Wubbzy!, and Ankama Animation for Wakfu: The Animated Series, among others. History FutureWave The precursor to Flash was SmartSketch, a product published by FutureWave Software in 1993. The company was founded by Charlie Jackson, Jonathan Gay, and Michelle Welsh. SmartSketch was a vector drawing application for pen computers running the PenPoint OS. When PenPoint failed in the marketplace, SmartSketch was ported to Microsoft Windows and Mac OS. As the Internet became more popular, FutureWave realized the potential for a vector-based web animation tool that might challenge Macromedia's Shockwave technology. In 1995, FutureWave modified SmartSketch by adding frame-by-frame animation features and released this new product as FutureSplash Animator on Macintosh and PC. FutureWave approached Adobe Systems with an offer to sell them FutureSplash in 1995, but Adobe turned down the offer at that time. Microsoft wanted to create an "online TV network" (MSN 2.0) and adopted FutureSplash animated content as a central part of it. Disney Online used FutureSplash animations for their subscription-based service Disney's Daily Blast. Fox Broadcasting Company launched The Simpsons using FutureSplash. Macromedia In December 1996, FutureSplash was acquired by Macromedia, which re-branded and released FutureSplash Animator as Macromedia Flash 1.0. Flash was a two-part system: a graphics and animation editor known as Macromedia Flash, and a player known as Macromedia Flash Player. FutureSplash Animator was an animation tool originally developed for pen-based computing devices. Due to the small size of the FutureSplash Viewer, it was particularly suited for download on the Web. Macromedia distributed Flash Player as a free browser plugin in order to quickly gain market share. By 2005, more computers worldwide had Flash Player installed than any other Web media format, including Java, QuickTime, RealNetworks, and Windows Media Player. Macromedia upgraded the Flash system between 1996 and 1999, adding MovieClips, Actions (the precursor to ActionScript), alpha transparency, and other features. As Flash matured, Macromedia's focus shifted from marketing it as a graphics and media tool to promoting it as a Web application platform, adding scripting and data access capabilities to the player while attempting to retain its small footprint. In 2000, the first major version of ActionScript was developed and released with Flash 5. ActionScript 2.0 was released with Flash MX 2004 and supported object-oriented programming, improved UI components, and other programming features. The last version of Flash released by Macromedia was Flash 8, which focused on graphical upgrades such as filters (blur, drop shadow, etc.), blend modes (similar to Adobe Photoshop), and advanced features for FLV video. Adobe On December 3, 2005, Adobe Systems acquired Macromedia along with its product line, which included Flash, Dreamweaver, Director/Shockwave, Fireworks, and Authorware. Adobe's first version release, in 2007, was Adobe Flash CS3 Professional, the ninth major version of Flash. It introduced the ActionScript 3.0 programming language, which supported modern programming practices and enabled business applications to be developed with Flash. Adobe Flex Builder (built on Eclipse) targeted the enterprise application development market and was released the same year. Flex Builder included the Flex SDK, a set of components that included charting, advanced UI, and data services (Flex Data Services).
In 2008, Adobe released the tenth version of Flash, Adobe Flash CS4. Flash 10 improved animation capabilities within the Flash editor, adding a motion editor panel (similar to Adobe After Effects), inverse kinematics (bones), basic 3D object animation, object-based animation, and other text and graphics features. Flash Player 10 included an in-built 3D engine (without GPU acceleration) that allowed basic object transformations in 3D space (position, rotation, scaling). Also in 2008, Adobe released the first version of Adobe Integrated Runtime (later re-branded as Adobe AIR), a runtime engine that replaced Flash Player and provided additional capabilities to the ActionScript 3.0 language for building desktop and mobile applications. With AIR, developers could access the file system (the user's files and folders) and connected devices such as joysticks, gamepads, and sensors for the first time. In 2011, Adobe Flash Player 11 was released, and with it the first version of Stage3D, allowing GPU-accelerated 3D rendering for Flash applications and games on desktop platforms such as Microsoft Windows and Mac OS X. Adobe further improved 3D capabilities from 2011 to 2013, adding support for 3D rendering on Android and iOS platforms, alpha channels, compressed textures, texture atlases, and other features. Adobe AIR was upgraded to support 64-bit computers and to allow developers to add additional functionality to the AIR runtime using AIR Native Extensions (ANE). In May 2014, Adobe announced that Adobe AIR was used in over 100,000 unique applications and had over 1 billion installations logged worldwide. Adobe AIR was voted the Best Mobile Application Development product at the Consumer Electronics Show in two consecutive years (CES 2014 and CES 2015). In 2016, Adobe renamed Flash Professional, the primary authoring software for Flash content, to Adobe Animate to reflect its growing use for authoring HTML5 content rather than Flash content. Open Source Adobe has taken steps to reduce or eliminate Flash licensing costs. For instance, the SWF file format documentation has been provided free of charge since 2008, when Adobe dropped the requirement to accept a non-disclosure agreement before viewing it. Adobe also created the Open Screen Project, which removes licensing fees and opens data protocols for Flash. Adobe has also open-sourced many components relating to Flash. In 2006, the ActionScript Virtual Machine 2 (AVM2), which implements ActionScript 3, was donated as open source to the Mozilla Foundation to seed the Tamarin virtual machine, intended to implement the ECMAScript 4 language standard with the help of the Mozilla community. It was released under the terms of an MPL/GPL/LGPL tri-license and included the specification for the ActionScript bytecode format; the Tamarin project was jointly managed by Mozilla and Adobe Systems, and is now considered obsolete by Mozilla. In 2011, the Adobe Flex Framework was donated as open source to the Apache Software Foundation and rebranded as Apache Flex. Some saw this move as Adobe abandoning Flex and stepping away from the Flash Platform as a whole. Sources from Apache said that "Enterprise application development is no longer a focus at Adobe. At least as Flash is concerned, Adobe is concentrating on games and video", while concluding that "Flex Innovation is Exploding!". The donated source code included a partly developed AS3 compiler (dubbed "Falcon") and the BlazeDS set of technologies.
In 2013, the CrossBridge C++ cross-compilation toolset was open-sourced by Adobe and released on GitHub. The project was formerly termed "Alchemy" and "Flash Runtime C++ Compiler", and targeted the game development market to enable C++ video games to run in Adobe Flash Player. Adobe has not been willing to make the complete source code of the Flash Player available for free software development, and although free and open-source alternatives such as Shumway and Gnash have been built, they are no longer under active development. Open Screen Project On May 1, 2008, Adobe announced the Open Screen Project, with the intent of providing a consistent application interface across devices such as personal computers, mobile devices, and consumer electronics. When the project was announced, seven goals were outlined: the abolition of licensing fees for Adobe Flash Player and Adobe AIR, the removal of restrictions on the use of the Shockwave Flash (SWF) and Flash Video (FLV) file formats, the publishing of application programming interfaces for porting Flash to new devices, and the publishing of the Flash Cast protocol and Action Message Format (AMF), which let Flash applications receive information from remote databases. The specifications removing the restrictions on the use of the SWF and FLV/F4V formats have since been published. The Flash Cast protocol – now known as the Mobile Content Delivery Protocol – and the AMF protocols have also been made available, with AMF available as an open-source implementation, BlazeDS. The list of mobile device providers who joined the project includes Palm, Motorola, and Nokia, who, together with Adobe, announced a $10 million Open Screen Project fund. End of life One of Flash's primary uses on the Internet when it was first released was for building fully immersive, interactive websites. These were typically highly creative site designs that provided more flexibility than the HTML standards of the time could offer, and that could operate over dial-up connections. However, these sites limited accessibility by "breaking the Back Button", dumping visitors out of the Flash experience entirely by returning them to whatever page they had been on prior to first arriving at the site. Fully Flash-run sites fell out of favor in preference to more strategic use of Flash plugins for video and other interactive features among standard HTML conventions, corresponding with the availability of HTML features like Cascading Style Sheets in the mid-2000s. At the same time, this also led to Flash being used for new kinds of apps, including video games and animations. Precursors to YouTube featuring user-generated Flash animations and games, such as Newgrounds, became popular destinations, further helping to spread the use of Flash. Toward the end of the millennium, the Wireless Application Protocol (WAP) was released, corresponding with the development of Dynamic HTML. Fifteen years later, WAP had largely been replaced by full-capability implementations, and the HTML5 standard included more support for interactive and video elements. Support for Flash in these mobile browsers was not included. In 2010, Apple's Steve Jobs famously wrote Thoughts on Flash, an open letter to Adobe criticizing the closed nature of the Flash platform and the inherent security problems with the application, to explain why Flash was not supported on iOS. Adobe created the Adobe AIR environment as a means to appease Apple's concerns, and spent time legally fighting Apple over the terms of its App Store to allow AIR to be used on iOS.
While Adobe eventually won, allowing other third-party development environments to get access to iOS, Apple's decision to block Flash itself was considered the "death blow" to the Flash application. In November 2011, about a year after Jobs' open letter, Adobe announced it would no longer develop Flash for mobile browsers and advised developers to switch to HTML5. In 2011, Adobe ended support for Flash on Android. Adobe stated that the Flash platform was transitioning to Adobe AIR and OpenFL, a multi-target open-source implementation of the Flash API. In late 2015, Adobe announced that Flash Professional, the main Flash authoring environment, would be rebranded as Adobe Animate to emphasize its expanded support for HTML5 authoring, and stated that it would "encourage content creators to build with new web standards" rather than use Flash.

In July 2017, Adobe deprecated Flash and announced its end-of-life (EOL) at the end of 2020, after which it would cease support, distribution, and security updates for Flash Player. With Flash's EOL announced, many browsers took steps to gradually restrict Flash content (first cautioning users before launching it, eventually blocking all content without an option to play it). By January 2021, all major browsers were blocking all Flash content unconditionally. Only IE11, niche browser forks, and some browsers built for China planned to continue support. Furthermore, excluding the China variant of Flash, Flash execution software has a built-in kill switch which prevents it from playing Flash after January 12, 2021. In January 2021, Microsoft released an optional update, KB4577586, which removes Flash Player from Windows; in July 2021 this update was pushed out as a security update and applied automatically to all remaining systems.

Post-EOL support

Adobe Flash will still be supported in China and worldwide on some specialized enterprise platforms beyond 2020.

Content preservation projects

As early as 2014, around the same time that Adobe began encouraging Flash developers to transition their works to HTML5 standards, others began efforts to preserve existing Flash content through emulation of Flash in open standards. While some Flash applications were utilitarian, several had been shown to be experimental art, while others had laid the foundation of independent video game development. An early project was Mozilla's Shumway, an open source project that attempted to emulate the Flash standard in HTML5, but the project was shuttered as the team found that more developers were switching to HTML5 than seeking to keep their content in Flash, coupled with the difficulties of assuring full compatibility. Google had developed the Swiffy application, released in 2014, to convert Flash applications to HTML5-compatible scripts for viewing on mobile devices, but it was shut down in 2016. Closer to Flash's EOL date in 2020, there were more concentrated efforts simply to preserve existing Flash applications, including websites, video games, and animations, beyond Flash's EOL. In November 2020, the Internet Archive integrated Ruffle within its Emularity system to emulate Flash games and animations without the security holes, opening a new collection for creators and users to save and preserve Flash content. By October 2023, the Flashpoint Archive had collected more than 160,000 Flash applications, excluding those that were commercial products, and offered them as a freely available archive for users to download.
Kongregate, one of the larger sites that offered Flash games, has been working with the Strong Museum of Play to preserve its games.

Format

FLA

Flash source files are in the FLA format and contain graphics and animation, as well as embedded assets such as bitmap images, audio files, and FLV video files. The Flash source file format was a proprietary format, and Adobe Animate and Adobe Flash Pro were the only available authoring tools capable of editing such files. Flash source files (.fla) may be compiled into Flash movie files (.swf) using Adobe Animate. Note that FLA files can be edited, but output (.swf) files cannot.

SWF

Flash movie files are in the SWF format. Traditionally called "ShockWave Flash" movies, "Flash movies", or "Flash applications", they usually have a .swf file extension and may be used in the form of a web page plug-in, "played" in a standalone Flash Player, or incorporated into a self-executing projector movie (with the .exe extension in Microsoft Windows). Flash Video files have a .flv file extension and are either used from within .swf files or played through an FLV-aware player, such as VLC, or QuickTime and Windows Media Player with external codecs added.

The use of vector graphics combined with program code allows Flash files to be smaller—and thus allows streams to use less bandwidth—than the corresponding bitmaps or video clips. For content in a single format (such as just text, video, or audio), other alternatives may provide better performance and consume less CPU power than the corresponding Flash movie, for example, when using transparency or making large screen updates such as photographic or text fades.

In addition to a vector-rendering engine, the Flash Player includes a virtual machine called the ActionScript Virtual Machine (AVM) for scripting interactivity at run-time, with video, MP3-based audio, and bitmap graphics. As of Flash Player 8, it offered two video codecs, On2 Technologies VP6 and Sorenson Spark, and run-time JPEG, Progressive JPEG, PNG, GIF, AutoCAD Drawing (DWG), and Windows Metafile (WMF) capability.

3D

Flash Player 11 introduced a full 3D shader API, called Stage3D, which is fairly similar to WebGL. Stage3D enables GPU-accelerated rendering of 3D graphics within Flash games and applications, and has been used to build Angry Birds and other notable games. Various 3D frameworks have been built for Flash using Stage3D, such as Away3D 4, CopperCube, Flare3D, and Starling. Professional game engines like Unreal Engine and Unity also export Flash versions which use Stage3D to render 3D graphics.

Flash Video

Virtually all browser plugins for video are free of charge and cross-platform, including Adobe's offering of Flash Video, which was introduced with Flash version 6. Flash Video had been a popular choice for websites due to the large installed user base and programmability of Flash. In 2010, Apple publicly criticized Adobe Flash, including its implementation of video playback, for not taking advantage of hardware acceleration, one reason Flash was not to be found on Apple's mobile devices. Soon after Apple's criticism, Adobe demoed and released a beta version of Flash 10.1, which used available GPU hardware acceleration even on a Mac. Flash 10.2 beta, released in December 2010, added hardware acceleration for the whole video rendering pipeline. Flash Player supports two distinct modes of video playback, and hardware-accelerated video decoding may not be used for older video content.
Such content causes excessive CPU usage compared to comparable content played with other players.

Software Rendered Video

Flash Player has supported software-rendered video since version 6. Such video supports vector animations displayed above the video content. This requirement may, depending on the graphics APIs exposed by the operating system, prohibit using a video overlay as a traditional multimedia player would, with the consequence that color-space conversion and scaling must happen in software.

Hardware Accelerated Video

Flash Player has supported hardware-accelerated video playback since version 10.2, for H.264, F4V, and FLV video formats. Such video is displayed above all Flash content and takes advantage of video codec chipsets installed on the user's device. Developers must specifically use the "StageVideo" technology within Flash Player in order for hardware decoding to be enabled. Flash Player internally uses technologies such as DirectX Video Acceleration and OpenGL to do so. In tests done by Ars Technica in 2008 and 2009, Adobe Flash Player performed better on Windows than on Mac OS X and Linux with the same hardware. Performance later improved for the latter two, on Mac OS X with Flash Player 10.1, and on Linux with Flash Player 11.

Flash Audio

Flash Audio is most commonly encoded in MP3; however, it can also use ADPCM (an IMA ADPCM variation that can use 2, 3, 4, or 5 bits per sample), Nellymoser (Nellymoser Asao Codec), and Speex audio codecs. Flash allows sample rates of 5512, 11025, 22050, and 44100 Hz (but Speex uses 16 kHz, and Nellymoser Asao can also use 8 kHz and 16 kHz). It cannot use a 48 kHz audio sample rate, which is the standard TV and DVD sample rate.

On August 20, 2007, Adobe announced on its blog that, with Update 3 of Flash Player 9, Flash Video would also support some parts of the MPEG-4 international standards. Specifically, Flash Player would work with video compressed in H.264 (MPEG-4 Part 10); audio compressed using AAC (MPEG-4 Part 3); the F4V, MP4 (MPEG-4 Part 14), M4V, M4A, 3GP, and MOV multimedia container formats; the 3GPP Timed Text specification (MPEG-4 Part 17), which is a standardized subtitle format; and partial parsing capability for the "ilst" atom, which is the ID3 equivalent iTunes uses to store metadata. MPEG-4 Part 2 and H.263 do not work in the F4V file format. Adobe also announced that it would be gradually moving away from the FLV format to the standard ISO base media file format (MPEG-4 Part 12), owing to functional limits of the FLV structure when streaming H.264. The final release of the Flash Player implementing some parts of the MPEG-4 standards became available in fall 2007.

Adobe Flash Player 10.1 does not have acoustic echo cancellation, unlike the VoIP offerings of Skype and Google Voice, making this and earlier versions of Flash less suitable for group calling or meetings. Flash Player 10.3 Beta incorporates acoustic echo cancellation.

ActionScript

Flash programs use the ActionScript programming language. It is an enhanced superset of the ECMAScript programming language, with a classical Java-style class model rather than JavaScript's prototype model.

Specifications

In October 1998, Macromedia disclosed the Flash Version 3 Specification on its website. It did this in response to many new and often semi-open formats competing with SWF, such as Xara's Flare and Sharp's Extended Vector Animation formats. Several developers quickly created a C library for producing SWF.
In February 1999, MorphInk 99 was introduced, the first third-party program to create SWF files. Macromedia also hired Middlesoft to create a freely available developers' kit for the SWF file format versions 3 to 5.

Macromedia made the Flash Files specifications for versions 6 and later available only under a non-disclosure agreement, but they are widely available from various sites.

In April 2006, the Flash SWF file format specification was released, with details on the then-newest version of the format (Flash 8). Although still lacking specific information on the incorporated video compression formats (On2, Sorenson Spark, etc.), this new documentation covered all the new features offered in Flash v8, including new ActionScript commands, expressive filter controls, and so on. The file format specification document is offered only to developers who agree to a license agreement that permits them to use the specifications only to develop programs that can export to the Flash file format. The license does not allow the use of the specifications to create programs that can be used for playback of Flash files. The Flash 9 specification was made available under similar restrictions.

In June 2009, Adobe launched the Open Screen Project, which made the SWF specification available without restrictions. Previously, developers could not use the specification for making SWF-compatible players, but only for making SWF-exporting authoring software. The specification still omits information on codecs such as Sorenson Spark, however.

Animation tools

Official tools

The Adobe Animate authoring program is primarily used to design graphics and animation and publish them for websites, web applications, and video games. The program also offers limited support for audio and video embedding and ActionScript scripting.

Adobe released Adobe LiveMotion, designed to create interactive animation content and export it to a variety of formats, including SWF. LiveMotion failed to gain any notable user base.

In February 2003, Macromedia purchased Presedia, which had developed a Flash authoring tool that automatically converted PowerPoint files into Flash. Macromedia subsequently released the new product as Breeze, which included many new enhancements.

Third-party tools

Various free and commercial software packages can output animations into the Flash SWF format, including:

Ajax Animator, which aims to create a Flash development environment
Apple Keynote, which allows users to export presentations to Flash SWF animations
KToon, which can edit vectors and generate SWF, but whose interface is very different from Macromedia's
Moho, a 2D animation software package specialized for character animation, that creates Flash animations
OpenOffice Impress
Screencast and Screencam, which produce demos or tutorials by capturing the screen and generating a Flash animation of the same
SWiSH Max, an animation editor with preset animations, developed by an ex-employee of Macromedia, that can output Flash animations
Synfig
Toon Boom, a traditional animation tool that can output Flash animations
Swift 3D, for vector 3D rendering and animation
Xara Photo & Graphic Designer, which can output Flash animations

The Flash 4 Linux project was an initiative to develop an open source Linux application as an alternative to Adobe Animate. Development plans included authoring capacity for 2D animation and tweening, as well as outputting SWF file formats. F4L evolved into an editor that was capable of authoring 2D animation and publishing SWF files.
Flash 4 Linux was renamed UIRA. UIRA intended to combine the resources and knowledge of the F4L project and the Qflash project, both of which were open source applications that aimed to provide an alternative to the proprietary Adobe Flash.

Programming tools

Official tools

Adobe provides a series of tools to develop software applications and video games for Flash:

Apache Flex SDK – a free, open source SDK to compile Flash-based rich internet applications from source code. The Apache Flex ActionScript 3.0 compiler generates SWF files from ActionScript 3 files. Flex was the primary ActionScript 3 compiler and was actively developed by Adobe before it was donated to the Apache Software Foundation in 2011.
Adobe Animate – primarily used to design graphics and animation, but supports ActionScript scripting and debugging.
Adobe Flash Builder – enterprise application development and debugging; contains the Flex SDK with UI and charting components.
Adobe Scout – a visual profiler to optimize the performance of Flash content.
CrossBridge – a free SDK to cross-compile C++ code to run in Flash Player.

Third-party tools

Third-party development tools have been created to assist developers in creating software applications and video games with Flash.

FlashDevelop is a free and open source Flash ActionScript IDE, which includes a project manager and debugger for building applications on Flash Player and Adobe AIR. Powerflasher FDT is a commercial ActionScript IDE similar to FlashDevelop. Haxe is an open source, high-level, object-oriented programming language geared towards web-content creation that can compile SWF files from Haxe programs. As of 2012, Haxe could build programs for Flash Player that perform faster than the same application built with the Adobe Flex SDK compiler, due to additional compiler optimizations supported in Haxe. SWFTools (specifically, swfc) is an open-source ActionScript 3.0 compiler which generates SWF files from script files that can include SVG tags. swfmill and MTASC also provide tools to create SWF files by compiling text, ActionScript, or XML files into Flash animations. The Ming library, used to create SWF files programmatically, has interfaces for C, PHP, C++, Perl, Python, and Ruby, and is able to import and export graphics from XML into SWF.

Players

Proprietary

Adobe Flash Player is the multimedia and application player originally developed by Macromedia and acquired by Adobe Systems. It plays SWF files, which can be created by Adobe Animate, Apache Flex, or a number of other Adobe Systems and third-party tools. It has support for a scripting language called ActionScript, which can be used to display Flash Video from an SWF file.

Scaleform GFx is a commercial alternative Flash player that features fully hardware-accelerated 2D graphics rendering using the GPU. Scaleform has high conformance with both Flash 10 ActionScript 3 and Flash 8 ActionScript 2. Scaleform GFx is a game development middleware solution that helps create graphical user interfaces or HUDs within 3D video games. It does not work with web browsers.

IrfanView, an image viewer, uses Flash Player to display SWF files.

Open source

OpenFL, a cross-platform open-source implementation of the Adobe Flash API, supports importing SWF assets.

Lightspark is a free and open-source SWF player that supports most of ActionScript 3.0 and has a Mozilla-compatible plug-in. It can fall back on Gnash, a free SWF player supporting ActionScript 1.0 and 2.0 (AVM1) code. Lightspark supports OpenGL-based rendering for 3D content.
The player is also compatible with H.264 Flash videos on YouTube.

Gnash aimed to create a software player and browser plugin replacement for the Adobe Flash Player. Gnash can play SWF files up to version 7, and supports 80% of ActionScript 2.0. Gnash runs on Windows, Linux, and other platforms, on both 32-bit and 64-bit architectures, but development has slowed significantly in recent years.

Shumway was an open source Flash player released by Mozilla in November 2012. It was built in JavaScript and was thus compatible with modern web browsers. In early October 2013, Shumway was included by default in the Firefox nightly branch. Shumway rendered Flash content by translating the contents of Flash files to HTML5 elements and running an ActionScript interpreter in JavaScript. It supported both AVM1 and AVM2, and ActionScript versions 1, 2, and 3. Development of Shumway ceased in early 2016.

In the same year that Shumway was abandoned, work began on Ruffle, a Flash emulator written in Rust. It also runs in web browsers, by compiling down to WebAssembly and using the HTML5 Canvas. In 2020, the Internet Archive added support for emulating SWF by adding Ruffle to its emulation scheme. As of March 2023, Ruffle states that it supports 95% of the AS1/2 language and 73% of the AS1/2 APIs, but does not correctly run most AS3 (AVM2) applications.

Availability

Desktop computers

Adobe Flash Player

Adobe Flash Player is currently only supported in the enterprise and China variants; it has been deprecated everywhere else. Adobe Flash Player is available in four flavors:

ActiveX-based plug-in
NPAPI-based plug-in
PPAPI-based plug-in
Projector

The ActiveX version is an ActiveX control for use in Internet Explorer and any other Windows applications that support ActiveX technology. The plug-in versions are available for browsers supporting either NPAPI or PPAPI plug-ins on Microsoft Windows, macOS, and Linux. The projector version is a standalone player that can open SWF files directly.

Adobe AIR

Adobe AIR shares some code with Adobe Flash Player and essentially embeds it.

Mobile devices

Adobe Flash Player

Adobe Flash Player was previously available for a variety of mobile operating systems, including Android (between versions 2.2 and 4.0.4), Pocket PC/Windows CE, QNX (e.g., on BlackBerry PlayBook), Symbian, Palm OS, and webOS (since version 2.0). Flash Player for smartphones was originally made available to handset manufacturers at the end of 2009.

In November 2011, Adobe announced the withdrawal of support for Flash Player on mobile devices. In 2011, Adobe reaffirmed its commitment to "aggressively contribute" to HTML5. Adobe announced the end of Flash for mobile platforms and TV, instead focusing on HTML5 for browser content and Adobe AIR for the various mobile application stores, and described it as "the beginning of the end". BlackBerry Ltd (formerly known as RIM) announced that it would continue to develop Flash Player for the PlayBook.

There is no Adobe Flash Player for iOS devices (iPhone, iPad, and iPod Touch). However, Flash content can be made to run on iOS devices in a variety of ways:

Flash content can be bundled inside an Adobe AIR app, which will then run on iOS devices. (Apple did not allow this for a while, but relaxed those restrictions in September 2010.)
If the content is Flash video being served by Adobe Flash Media Server 4.5, the server will translate and send the video as HTTP Dynamic Streaming or HTTP Live Streaming, both of which can be played by iOS devices.
Some specialized mobile browsers manage to accommodate Flash by streaming content from the cloud directly to a user's device. Some examples are Photon Browser and Puffin Web Browser. The mobile version of Internet Explorer for Windows Phone cannot play Flash content; however, Flash support is still present on the tablet version of Windows.

Adobe AIR

AIR is a cross-platform runtime system for developing applications for mobile devices running Android (ARM Cortex-A8 and above) and Apple iOS.

Adobe Flash Lite

Adobe Flash Lite is a lightweight version of Adobe Flash Player intended for mobile phones and other portable electronic devices like Chumby and iRiver.

On the web

For a list of non-web alternative players, see the Players section above.

OpenFL

OpenFL is an open-source software framework that mirrors the Adobe Flash API. It allows developers to build a single application against the OpenFL APIs and simultaneously target multiple platforms, including iOS, Android, HTML5 (choice of Canvas, WebGL, SVG, or DOM), Windows, macOS, Linux, WebAssembly, Flash, AIR, PlayStation 4, PlayStation 3, PlayStation Vita, Xbox One, Wii U, TiVo, Raspberry Pi, and Node.js. OpenFL mirrors the Flash API for graphical operations. OpenFL applications can be written in Haxe, JavaScript (ECMAScript 5 or 6+), or TypeScript. More than 500 video games have been developed with OpenFL, including the BAFTA-award-winning Papers, Please, as well as Rymdkapsel, Lightbot, and Madden NFL Mobile.

HTML5

HTML5 is often cited as an alternative to Adobe Flash technology usage on web pages. Adobe released a tool that converts Flash to HTML5, and in June 2011, Google released an experimental tool that does the same. In January 2015, YouTube defaulted to HTML5 players to better support more devices.

Flash to HTML5

The following tools allow converting Flash content to HTML5:

Adobe Edge Animate was designed to produce HTML5 animations directly.
Adobe Animate now allows Flash animations to be published as HTML5 content directly.
Google Swiffy was a web-based tool developed by Google that converted SWF files into HTML5, using SVG for graphics and JavaScript for animation.
Adobe Wallaby was a converter developed by Adobe.
CreateJS is a library that, while available separately, was also adopted by Adobe as a replacement for Wallaby in CS6. Unlike Wallaby, which was a standalone program, the "Toolkit for CreateJS" only works as a plug-in inside Flash Professional; it generates output for the HTML5 canvas, animated with JavaScript. Around December 2013, the toolkit was integrated directly into Flash Professional CC.

The following tools run Flash content in an HTML5-enabled browser, but do not convert it to an HTML5 webpage:

Shumway, developed by Mozilla, was an open source Flash virtual machine written in JavaScript.
Web Flash Player, developed by GraphOGL Risorse, is a free, online Flash player (Flash virtual machine) written in JavaScript.

Criticisms

Mobile support

Websites built with Adobe Flash will not function on most modern mobile devices running Google Android or iOS (iPhone, iPad). The only alternative is using HTML5 and responsive web design to build websites that support both desktop and mobile devices. However, Flash is still used to build mobile games using Adobe AIR. Such games will not work in mobile web browsers but must be installed via the appropriate app store.
Vendor lock-in

The reliance on Adobe for decoding Flash made its use on the World Wide Web a concern—the completeness of its public specifications is debated, and no complete implementation of Flash is publicly available in source code form with a license that permits reuse. Generally, public specifications are what make a format re-implementable (see future proofing data storage), and reusable codebases can be ported to new platforms without the endorsement of the format creator. Adobe's restrictions on the use of the SWF/FLV specifications were lifted in February 2009 (see Adobe's Open Screen Project). However, despite the efforts of projects like Gnash, Swfdec, and Lightspark, a complete free Flash player was yet to be seen as of September 2011. For example, Gnash could not yet use SWF v10. Notably, Gnash was listed on the Free Software Foundation's high-priority list from at least 2007 until its removal in January 2017.

Notable advocates of free software, open standards, and the World Wide Web have warned against the use of Flash:

The founder of Mozilla Europe, Tristan Nitot, stated in 2008: "Companies building websites should beware of proprietary rich-media technologies like Adobe's Flash and Microsoft's Silverlight. (...) You're producing content for your users and there's someone in the middle deciding whether users should see your content."

Representing open standards, the inventor of CSS and co-author of HTML5, Håkon Wium Lie, explained in a 2007 Google tech talk entitled "the <video> element", proposing Theora as the format for HTML video: "I believe very strongly, that we need to agree on some kind of baseline video format if [the video element] is going to succeed. Flash is today the baseline format on the web. The problem with Flash is that it's not an open standard."

Representing the free software movement, Richard Stallman stated in a speech in 2004 that: "The use of Flash in websites is a major problem for our community."

Accessibility and usability

Usability consultant Jakob Nielsen published an Alertbox in 2000 entitled Flash: 99% Bad, stating that "Flash tends to degrade websites for three reasons: it encourages design abuse, it breaks with the Web's fundamental interaction principles, and it distracts attention from the site's core value." Some problems have been at least partially fixed since Nielsen's complaints: text size can be controlled using full-page zoom, and it has been possible for authors to include alternative text in Flash since Flash Player 6.

Flash blocking in web browsers

Flash content is usually embedded using the object or embed HTML element. A web browser that does not fully implement one of these elements displays the replacement text, if supplied by the web page. Often, a plugin is required for the browser to fully implement these elements, though some users cannot or will not install it.

Since Flash can be used to produce content (such as advertisements) that some users find obnoxious or that takes a large amount of bandwidth to download, some web browsers, by default, do not play Flash content until the user clicks on it, e.g. Konqueror and K-Meleon. Most current browsers have a feature to block plugins, playing one only when the user clicks it. Opera versions since 10.5 feature native Flash blocking. Opera Turbo requires the user to click to play Flash content, and the browser also allows the user to enable this option permanently. Both Chrome and Firefox have an option to enable "click to play plugins".
Equivalent "Flash blocker" extensions are also available for many popular browsers: Firefox has Flashblock and NoScript, Internet Explorer has Foxie, which contains a number of features, one of them named Flashblock. WebKit-based browsers under macOS, such as Apple's Safari, have ClickToFlash. In June 2015, Google announced that Chrome will "pause" advertisements and "non-central" Flash content by default. Firefox (from version 46) rewrites old Flash-only YouTube embed code into YouTube's modern embedded player that is capable of using either HTML video or Flash. Such embed code is used by non-YouTube sites to embed YouTube's videos, and can still be encountered, for example, on old blogs and forums. However, there are ways to pass this error in absence of flash player by deleting the validation code in HTML. This also depends on browser vision. Security For many years Adobe Flash Player's security record has led many security experts to recommend against installing the player, or to block Flash content. The US-CERT has recommended blocking Flash, and security researcher Charlie Miller recommended "not to install Flash"; however, for people still using Flash, Intego recommended that users get trusted updates "only directly from the vendor that publishes them." Adobe Flash Player has over 1078 CVE entries, of which over 842 lead to arbitrary code execution, and past vulnerabilities have enabled spying via web cameras. Security experts have long predicted the demise of Flash, saying that with the rise of HTML5 "...the need for browser plugins such as Flash is diminishing". Active moves by third parties to limit the risk began with Steve Jobs in 2010 saying that Apple would not allow Flash on the iPhone, iPod Touch, and iPad – citing abysmal security as one reason. Flash often used the ability to dynamically change parts of the runtime on languages on OSX to improve their own performance, but caused general instability. In July 2015, a series of newly discovered vulnerabilities resulted in Facebook's chief security officer, Alex Stamos, issuing a call to Adobe to discontinue the software entirely and the Mozilla Firefox web browser, Google Chrome, and Apple Safari to blacklist all earlier versions of Flash Player. Flash cookies Like the HTTP cookie, a flash cookie (also known as a "Local Shared Object") can be used to save application data. Flash cookies are not shared across domains. An August 2009 study by the Ashkan Soltani and a team of researchers at UC Berkeley found that 50% of websites using Flash were also employing flash cookies, yet privacy policies rarely disclosed them, and user controls for privacy preferences were lacking. Most browsers' cache and history suppress or delete functions did not affect Flash Player's writing Local Shared Objects to its own cache in version 10.2 and earlier, at which point the user community was much less aware of the existence and function of Flash cookies than HTTP cookies. Thus, users with those versions, having deleted HTTP cookies and purged browser history files and caches, may believe that they have purged all tracking data from their computers when in fact Flash browsing history remains. Adobe's own Flash Website Storage Settings panel, a submenu of the Settings Manager web application, and other editors and toolkits can manage settings for and delete Flash Local Shared Objects. 
Notable people

The Brothers Chaps, creators of one of the most popular applications of Flash, the Homestar Runner cartoon series
Colin Moock, an Adobe Flash and ActionScript expert, author, tutor, and programmer
Möbius function
The Möbius function is a multiplicative function in number theory introduced by the German mathematician August Ferdinand Möbius (also transliterated Moebius) in 1832. It is ubiquitous in elementary and analytic number theory and most often appears as part of its namesake the Möbius inversion formula. Following work of Gian-Carlo Rota in the 1960s, generalizations of the Möbius function were introduced into combinatorics, and are similarly denoted $\mu$.

Definition

The Möbius function is defined by
$$\mu(n) = \begin{cases} 1 & \text{if } n \text{ is a squarefree positive integer with an even number of prime factors,} \\ -1 & \text{if } n \text{ is a squarefree positive integer with an odd number of prime factors,} \\ 0 & \text{if } n \text{ is divisible by the square of a prime.} \end{cases}$$
The Möbius function can alternatively be represented as
$$\mu(n) = \delta_{\omega(n)\,\Omega(n)}\,\lambda(n),$$
where $\delta$ is the Kronecker delta, $\lambda(n)$ is the Liouville function, $\omega(n)$ is the number of distinct prime divisors of $n$, and $\Omega(n)$ is the number of prime factors of $n$, counted with multiplicity. Another characterization, due to Gauss, is via the sum of primitive roots (see the section on algebraic number theory below).

Values

The values of $\mu(n)$ for the first 50 positive integers are

1, −1, −1, 0, −1, 1, −1, 0, 0, 1, −1, 0, −1, 1, 1, 0, −1, 0, −1, 0, 1, 1, −1, 0, 0, 1, 0, 0, −1, −1, −1, 0, 1, 1, 1, 0, −1, 1, 1, 0, −1, −1, −1, 0, 0, 1, −1, 0, 0, 0.

Larger values can be checked in Wolfram Alpha or in the b-file of OEIS.

Applications

Mathematical series

The Dirichlet series that generates the Möbius function is the (multiplicative) inverse of the Riemann zeta function; if $s$ is a complex number with real part larger than 1, we have
$$\sum_{n=1}^{\infty} \frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)}.$$
This may be seen from the Euler product
$$\frac{1}{\zeta(s)} = \prod_{p} \left(1 - p^{-s}\right).$$
There is also a related series identity involving Euler's constant $\gamma$. The Lambert series for the Möbius function is
$$\sum_{n=1}^{\infty} \frac{\mu(n)\,q^n}{1 - q^n} = q,$$
which converges for $|q| < 1$. A further identity of this type holds for prime arguments.

Algebraic number theory

Gauss proved that for a prime number $p$ the sum of its primitive roots is congruent to $\mu(p - 1) \pmod{p}$. If $\mathbb{F}_q$ denotes the finite field of order $q$ (where $q$ is necessarily a prime power), then the number $N$ of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ is given by
$$N(q, n) = \frac{1}{n} \sum_{d \mid n} \mu(d)\, q^{n/d}.$$
The Möbius function is used in the Möbius inversion formula.

Physics

The Möbius function also arises in the primon gas or free Riemann gas model of supersymmetry. In this theory, the fundamental particles or "primons" have energies $\log p$. Under second quantization, multiparticle excitations are considered; these are given by $\log n$ for any natural number $n$. This follows from the fact that the factorization of the natural numbers into primes is unique.

In the free Riemann gas, any natural number can occur, if the primons are taken as bosons. If they are taken as fermions, then the Pauli exclusion principle excludes non-squarefree numbers. The operator $(-1)^F$ that distinguishes fermions and bosons is then none other than the Möbius function $\mu(n)$.

The free Riemann gas has a number of other interesting connections to number theory, including the fact that the partition function is the Riemann zeta function. This idea underlies Alain Connes's attempted proof of the Riemann hypothesis.

Properties

The Möbius function is multiplicative (i.e., $\mu(ab) = \mu(a)\mu(b)$ whenever $a$ and $b$ are coprime). Proof: if $a$ or $b$ is divisible by the square of a prime, then so is $ab$, and both sides are 0. Otherwise $a$ and $b$ are squarefree, and since they are coprime, $ab$ is squarefree with $\omega(ab) = \omega(a) + \omega(b)$ distinct prime factors, so $\mu(ab) = (-1)^{\omega(a) + \omega(b)} = \mu(a)\mu(b)$.

The sum of the Möbius function over all positive divisors of $n$ (including $n$ itself and 1) is zero except when $n = 1$:
$$\sum_{d \mid n} \mu(d) = \begin{cases} 1 & \text{if } n = 1, \\ 0 & \text{if } n > 1. \end{cases}$$
The equality above leads to the important Möbius inversion formula and is the main reason why $\mu$ is of relevance in the theory of multiplicative and arithmetic functions.

Other applications of $\mu(n)$ in combinatorics are connected with the use of the Pólya enumeration theorem in combinatorial groups and combinatorial enumerations.

There is a formula for calculating the Möbius function without directly knowing the factorization of its argument:
$$\mu(n) = \sum_{\substack{1 \le k \le n \\ \gcd(k,\,n) = 1}} e^{2\pi i k / n},$$
i.e. $\mu(n)$ is the sum of the primitive $n$-th roots of unity. (However, the computational complexity of this definition is at least the same as that of the Euler product definition.)
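To make the definition concrete, here is a minimal Python sketch (our own illustration, not part of the original article) that computes $\mu(n)$ by trial-division factorization; the name `mobius` is a hypothetical choice:

```python
def mobius(n: int) -> int:
    """Moebius function mu(n), computed by trial-division factorization."""
    if n < 1:
        raise ValueError("mu(n) is defined for positive integers only")
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # p^2 divides the original n, so mu(n) = 0
                return 0
            result = -result     # one more distinct prime factor
        p += 1
    if n > 1:                    # a single leftover prime factor remains
        result = -result
    return result

# Matches the table above: mu(1..10) = 1, -1, -1, 0, -1, 1, -1, 0, 0, 1
print([mobius(n) for n in range(1, 11)])
```

Trial division keeps the definition visible but is slow for large arguments; for bulk computation over 1..N, a sieve is the usual choice.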
The Möbius function satisfies two further notable identities: the first is a classical result, while the second was published in 2020. Similar identities hold for the Mertens function.

Proof of the formula for the sum of μ over divisors

The formula above can be written using Dirichlet convolution as
$$\mu * \mathbf{1} = \varepsilon,$$
where $\varepsilon$ (with $\varepsilon(n) = 1$ if $n = 1$ and $0$ otherwise) is the identity under the convolution and $\mathbf{1}$ is the constant function 1. One way of proving this formula is by noting that the Dirichlet convolution of two multiplicative functions is again multiplicative. Thus it suffices to prove the formula for powers of primes. Indeed, for any prime $p$ and any $k \ge 1$,
$$\sum_{d \mid p^k} \mu(d) = \mu(1) + \mu(p) = 1 - 1 = 0,$$
while for $n = 1$ the sum is $\mu(1) = 1$.

Other proofs

Another way of proving this formula is by using the identity
$$\mu(n) = \sum_{\substack{1 \le k \le n \\ \gcd(k,\,n) = 1}} e^{2\pi i k / n}.$$
The formula above is then a consequence of the fact that the $n$-th roots of unity sum to 0, since each $n$-th root of unity is a primitive $d$-th root of unity for exactly one divisor $d$ of $n$.

However, it is also possible to prove this identity from first principles. First note that it is trivially true when $n = 1$. Suppose then that $n > 1$. Then there is a bijection between the divisors $d$ of $n$ with $\mu(d) \ne 0$ and the subsets of the set of all prime factors of $n$. The asserted result follows from the fact that every non-empty finite set has an equal number of odd- and even-cardinality subsets.

This last fact can be shown easily by induction on the cardinality $|S|$ of a non-empty finite set $S$. First, if $|S| = 1$, there is exactly one odd-cardinality subset of $S$, namely $S$ itself, and exactly one even-cardinality subset, namely the empty set. Next, if $|S| > 1$, then divide the subsets of $S$ into two subclasses depending on whether or not they contain some fixed element $x$ in $S$. There is an obvious bijection between these two subclasses, pairing each subset $T$ not containing $x$ with $T \cup \{x\}$. Also, one of these two subclasses consists of all the subsets of the set $S \setminus \{x\}$ and therefore, by the induction hypothesis, has an equal number of odd- and even-cardinality subsets. These subsets in turn correspond bijectively to the even- and odd-cardinality $x$-containing subsets of $S$. The inductive step follows directly from these two bijections. A related result is that the alternating sum of binomial coefficients along a row of Pascal's triangle vanishes: $\sum_{k=0}^{n} (-1)^k \binom{n}{k} = 0$ for $n \ge 1$.

Average order

The mean value (in the sense of average orders) of the Möbius function is zero. This statement is, in fact, equivalent to the prime number theorem.

$\mu(n) = 0$ if and only if $n$ is divisible by the square of a prime. The first numbers with this property are 4, 8, 9, 12, 16, 18, 20, 24, 25, 27, 28, 32, 36, 40, 44, 45, 48, 49, 50, 52, 54, 56, 60, 63, ... .

If $n$ is prime, then $\mu(n) = -1$, but the converse is not true. The first non-prime $n$ for which $\mu(n) = -1$ is $30 = 2 \cdot 3 \cdot 5$. The first such numbers with three distinct prime factors (sphenic numbers) are 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165, 170, 174, 182, 186, 190, 195, 222, ... , and the first such numbers with 5 distinct prime factors are 2310, 2730, 3570, 3990, 4290, 4830, 5610, 6006, 6090, 6270, 6510, 6630, 7410, 7590, 7770, 7854, 8610, 8778, 8970, 9030, 9282, 9570, 9690, ... .

Mertens function

In number theory, another arithmetic function closely related to the Möbius function is the Mertens function, defined by
$$M(n) = \sum_{k=1}^{n} \mu(k)$$
for every natural number $n$. This function is closely linked with the positions of zeroes of the Riemann zeta function. See the article on the Mertens conjecture for more information about the connection between $M(n)$ and the Riemann hypothesis.

From the formula
$$\mu(n) = \sum_{\substack{1 \le k \le n \\ \gcd(k,\,n) = 1}} e^{2\pi i k / n},$$
it follows that the Mertens function is given by
$$M(n) = -1 + \sum_{a \in \mathcal{F}_n} e^{2\pi i a},$$
where $\mathcal{F}_n$ is the Farey sequence of order $n$. This formula is used in the proof of the Franel–Landau theorem.
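The divisor-sum identity and the Mertens function can be checked numerically with a short sketch (again our own illustration, reusing the `mobius` function from the earlier snippet):

```python
def divisor_mu_sum(n: int) -> int:
    """Sum of mu(d) over the divisors d of n; should be 1 for n == 1, else 0."""
    return sum(mobius(d) for d in range(1, n + 1) if n % d == 0)

def mertens(n: int) -> int:
    """Mertens function M(n) = mu(1) + mu(2) + ... + mu(n)."""
    return sum(mobius(k) for k in range(1, n + 1))

# Verify sum over d | n of mu(d) equals [n == 1] for the first few hundred n:
assert all(divisor_mu_sum(n) == (1 if n == 1 else 0) for n in range(1, 300))

# First Mertens values: 1, 0, -1, -1, -2, -1, -2, -2, -2, -1
print([mertens(n) for n in range(1, 11)])
```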
Generalizations

Incidence algebras

In combinatorics, every locally finite partially ordered set (poset) is assigned an incidence algebra. One distinguished member of this algebra is that poset's "Möbius function". The classical Möbius function treated in this article is essentially equal to the Möbius function of the set of all positive integers partially ordered by divisibility. See the article on incidence algebras for the precise definition and several examples of these general Möbius functions.

Popovici's function

Constantin Popovici defined a generalised Möbius function $\mu_k$ to be the $k$-fold Dirichlet convolution of the Möbius function with itself. It is thus again a multiplicative function, with
$$\mu_k(p^a) = (-1)^a \binom{k}{a},$$
where the binomial coefficient is taken to be zero if $a > k$. The definition may be extended to complex $k$ by reading the binomial as a polynomial in $k$. (A computational sketch follows the list of implementations below.)

Implementations

Mathematica
Maxima
geeksforgeeks (C++, Python3, Java, C#, PHP, JavaScript)
Rosetta Code
Sage
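As promised above, here is a small sketch of Popovici's $\mu_k$ via repeated Dirichlet convolution (our own illustration; it assumes the `mobius` function defined earlier and checks the closed form $\mu_k(p^a) = (-1)^a \binom{k}{a}$ at a few prime powers):

```python
def dirichlet_convolve(f, g, N):
    """Dirichlet convolution (f * g)(n) for n = 1..N; inputs are 1-indexed lists."""
    h = [0] * (N + 1)
    for a in range(1, N + 1):
        for b in range(1, N // a + 1):
            h[a * b] += f[a] * g[b]
    return h

N = 64
mu = [0] + [mobius(n) for n in range(1, N + 1)]
epsilon = [0, 1] + [0] * (N - 1)          # identity under Dirichlet convolution

def popovici(k, N):
    """mu_k: the k-fold Dirichlet convolution of mu with itself."""
    result = epsilon
    for _ in range(k):
        result = dirichlet_convolve(result, mu, N)
    return result

mu2 = popovici(2, N)
# Closed form at powers of 2: mu_2(2) = -C(2,1), mu_2(4) = C(2,2), mu_2(8) = 0
print(mu2[2], mu2[4], mu2[8])             # -2, 1, 0
```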
Methadone
Methadone, sold under the brand names Dolophine and Methadose among others, is a synthetic opioid used medically to treat chronic pain and opioid use disorder. Prescribed for daily use, the medicine relieves cravings and opioid withdrawal symptoms. Withdrawal management using methadone can be accomplished in less than a month, it may be done gradually over a longer period of time, or treatment may simply be maintained for the rest of the patient's life. While a single dose has a rapid effect, maximum effect can take up to five days of use. After long-term use, in people with normal liver function, effects last 8 to 36 hours. Methadone is usually taken by mouth and rarely by injection into a muscle or vein.

Side effects are similar to those of other opioids. These frequently include dizziness, sleepiness, nausea, vomiting, and sweating. Serious risks include opioid abuse and respiratory depression. Abnormal heart rhythms may also occur due to a prolonged QT interval. The number of deaths in the United States involving methadone poisoning declined from 4,418 in 2011 to 3,300 in 2015. Risks are greater with higher doses. Methadone is made by chemical synthesis and acts on opioid receptors.

Methadone was developed in Germany in the late 1930s by Gustav Ehrhart and Max Bockmühl. It was approved for use as an analgesic in the United States in 1947, and has been used in the treatment of addiction since the 1960s. It is on the World Health Organization's List of Essential Medicines.

Medical uses

Opioid addiction

Methadone is used for the treatment of opioid use disorder. It may be used as maintenance therapy or in shorter periods to manage opioid withdrawal symptoms. Its use for the treatment of addiction is usually strictly regulated. In the US, outpatient treatment programs must be certified by the federal Substance Abuse and Mental Health Services Administration (SAMHSA) and registered by the Drug Enforcement Administration (DEA) to prescribe methadone for opioid addiction.

A 2009 Cochrane review found methadone was effective in retaining people in treatment and in reducing or ending heroin use, as measured by self-report and urine/hair analysis, and found that it did not affect criminal activity or risk of death.

Treatment of opioid-dependent persons with methadone follows one of two routes: maintenance or withdrawal management. Methadone maintenance therapy (MMT) usually takes place in outpatient settings. It is usually prescribed as a single daily dose medication for those who wish to abstain from illicit opioid use. Treatment models for MMT differ. It is not uncommon for treatment recipients to be administered methadone in a specialized clinic, where they are observed for around 15–20 minutes post-dosing, to reduce the risk of diversion of the medication.

The duration of methadone treatment programs ranges from a few months to years. Given that opioid dependence is characteristically a chronic relapsing/remitting disorder, MMT may be lifelong. The length of time a person remains in treatment depends on a number of factors. While starting doses may be adjusted based on the amount of opioids reportedly used, most clinical guidelines suggest doses start low (e.g., at doses not exceeding 40 mg daily) and be incremented gradually. Doses of 40 mg per day have been found sufficient to help control withdrawal symptoms but not enough to curb cravings for the drug. Doses of 80 to 100 mg per day have shown higher rates of success in patients and less illicit heroin use during maintenance therapy.
However, higher doses put a patient more at risk of overdose than a moderately low dose (e.g., 20 mg/day).

Methadone maintenance has been shown to reduce the transmission of bloodborne viruses associated with opioid injection, such as hepatitis B and C, and/or HIV. The principal goals of methadone maintenance are to relieve opioid cravings, suppress the abstinence syndrome, and block the euphoric effects associated with opioids. Chronic methadone dosing will eventually lead to neuroadaptation, characterised by tolerance and dependence. However, when used correctly in treatment, maintenance therapy is medically safe, non-sedating, and can provide a slow recovery from opioid addiction. Methadone has been widely used for pregnant women addicted to opioids.

Pain

Methadone is used as an analgesic in chronic pain, often in rotation with other opioids. Due to its activity at the NMDA receptor, it may be more effective against neuropathic pain; for the same reason, tolerance to the analgesic effects may be less than that of other opioids.

Adverse effects

Adverse effects of methadone include:

Sedation
Constipation
Flushing
Perspiration
Heat intolerance
Dizziness or fainting
Weakness
Fatigue
Drowsiness
Constricted pupils
Dry mouth
Nausea and vomiting
Low blood pressure
Headache
Heart problems such as chest pain or fast heartbeat
Abnormal heart rhythms
Respiratory problems such as trouble breathing, slow or shallow breathing (hypoventilation), lightheadedness, or fainting
Weight gain
Memory loss
Itching
Difficulty urinating
Swelling of the hands, arms, feet, and legs
Mood changes (e.g., euphoria, disorientation)
Blurred vision
Decreased libido, difficulty in reaching orgasm, or impotence
Missed menstrual periods
Skin rash
Central sleep apnea

Withdrawal symptoms

Methadone withdrawal symptoms are reported as being significantly more protracted than withdrawal from opioids with shorter half-lives.

When used for opioid maintenance therapy, methadone is generally administered as an oral liquid. Methadone has been implicated in contributing to significant tooth decay. Methadone causes dry mouth, reducing the protective role of saliva in preventing decay. Other putative mechanisms of methadone-related tooth decay include craving for carbohydrates related to opioids, poor dental care, and a general decrease in personal hygiene. These factors, combined with sedation, have been linked to the causation of extensive dental damage.

Physical symptoms

Lightheadedness
Tearing of the eyes
Mydriasis (dilated pupils)
Photophobia (sensitivity to light)
Hyperventilation (breathing that is too fast/deep)
Runny nose
Yawning
Sneezing
Nausea, vomiting, and diarrhea
Fever
Sweating
Chills
Tremors
Akathisia (restlessness)
Tachycardia (fast heartbeat)
Aches and pains, often in the joints or legs
Elevated pain sensitivity
Blood pressure that is too high (hypertension, which may cause a stroke)

Cognitive symptoms

Suicidal ideation
Susceptibility to cravings
Depression
Spontaneous orgasm
Prolonged insomnia
Delirium
Auditory hallucinations
Visual hallucinations
Increased perception of odors (olfaction), real or imagined
Marked increase in sex drive
Agitation
Anxiety
Panic disorder
Nervousness
Paranoia
Delusions
Apathy
Anorexia (symptom)

Black box warning

Methadone has the following U.S.
FDA black box warning:

Risk of addiction and abuse
Potentially fatal respiratory depression
Lethal overdose in accidental ingestion
QT prolongation
Neonatal opioid withdrawal syndrome in children of pregnant women
CYP450 drug interactions
Risks when used with alcohol, benzodiazepines, and other CNS depressants

A certified opioid treatment program is required under federal law (42 CFR 8.12) when dispensing methadone for the treatment of opioid addiction.

Overdose

Most people who overdose on methadone show some of the following symptoms:

Miosis (constricted pupils)
Vomiting
Spasms of the stomach and intestines
Hypoventilation (breathing that is too slow/shallow)
Drowsiness, sleepiness, disorientation, sedation, unresponsiveness
Skin that is cool, clammy (damp), and pale
Blue fingernails and lips
Limp muscles, trouble staying awake, nausea
Unconsciousness and coma

The respiratory depression of an overdose can be treated with naloxone. Naloxone is preferred to the newer, longer-acting antagonist naltrexone. Despite methadone's much longer duration of action compared to heroin and other shorter-acting agonists, and the resulting need for repeat doses of the antagonist, naloxone is still used for overdose therapy. As naltrexone has a longer half-life, it is more difficult to titrate. If too large a dose of the opioid antagonist is given to a dependent person, it will result in withdrawal symptoms (possibly severe). When using naloxone, the naloxone will be quickly eliminated and the withdrawal will be short-lived. Doses of naltrexone take longer to be eliminated from the person's system. A common problem in treating methadone overdoses is that, given the short action of naloxone versus the much longer-acting methadone, a dose of naloxone given to a methadone-overdosed person will initially work to bring the person out of overdose, but once the naloxone wears off, if no further naloxone is administered, the person can go right back into overdose (depending on the time since, and the dose of, the methadone ingested).

Tolerance and dependence

As with other opioid medications, tolerance and dependence usually develop with repeated doses. There is some clinical evidence that tolerance to analgesia is less with methadone compared to other opioids; this may be due to its activity at the NMDA receptor. Tolerance to the different physiological effects of methadone varies; tolerance to analgesic properties may or may not develop quickly, but tolerance to euphoria usually develops rapidly, whereas tolerance to constipation, sedation, and respiratory depression develops slowly (if ever).

Driving

Methadone treatment may impair driving ability. In a University of Queensland study, drug abusers had significantly more involvement in serious crashes than non-abusers: of a group of 220 drug abusers, most of them poly-drug abusers, 17 were involved in crashes killing people, compared with a control group of randomly selected people with no involvement in fatal crashes. However, there have been multiple studies verifying the ability of methadone maintenance patients to drive. In the UK, persons who are prescribed oral methadone can continue to drive after they have satisfactorily completed an independent medical examination, which will include a urine screen for drugs. The license will be issued for 12 months at a time and, even then, only following a favourable assessment from their own doctor.
Individuals who are prescribed methadone for either IV or IM administration cannot drive in the UK, mainly due to the increased sedation that this route of use can cause.

Mortality

In the United States, deaths linked to methadone more than quadrupled in the five-year period between 1999 and 2004. According to the U.S. National Center for Health Statistics, as well as a 2006 series in the Charleston Gazette (West Virginia), medical examiners listed methadone as contributing to 3,849 deaths in 2004. That number was up from 790 in 1999. Approximately 82 percent of those deaths were listed as accidental, and most deaths involved combinations of methadone with other drugs (especially benzodiazepines).

Although deaths from methadone are on the rise, methadone-associated deaths are not being caused primarily by methadone intended for methadone treatment programs, according to a panel of experts convened by the Substance Abuse and Mental Health Services Administration, which released a report titled "Methadone-Associated Mortality, Report of a National Assessment". The consensus report concludes that "although the data remains incomplete, National Assessment meeting participants concurred that methadone tablets or Diskets distributed through channels other than opioid treatment programs most likely are the central factors in methadone-associated mortality."

In 2006, the U.S. Food and Drug Administration issued a caution about methadone, titled "Methadone Use for Pain Control May Result in Death." The FDA also revised the drug's package insert. The change deleted previous information about the usual adult dosage. The Charleston Gazette reported, "The old language about the 'usual adult dose' was potentially deadly, according to pain specialists."

Pharmacology

Methadone acts by binding to the µ-opioid receptor, but also has some affinity for the NMDA receptor, an ionotropic glutamate receptor. Methadone is metabolized by CYP3A4, CYP2B6, and CYP2D6, and is a substrate of the P-glycoprotein efflux protein, which helps pump foreign substances out of cells in the intestines and brain. The bioavailability and elimination half-life of methadone are subject to substantial interindividual variability. Its main route of administration is oral. Adverse effects include sedation, hypoventilation, constipation, and miosis, in addition to tolerance, dependence, and withdrawal difficulties. The withdrawal period can be much more prolonged than with other opioids, spanning anywhere from two weeks to several months.

The metabolic half-life of methadone differs from its duration of action. The metabolic half-life is 8 to 59 hours (approximately 24 hours for opioid-tolerant people, and 55 hours for opioid-naive people), as opposed to a half-life of 1 to 5 hours for morphine. The length of methadone's half-life allows its respiratory depressant effects to persist for an extended duration of time in opioid-naive people.

Methadone at therapeutic concentrations is known to prolong the QTc interval, which indicates that the heart muscle repolarizes more slowly. This QTc prolongation tends to increase the risk of torsades de pointes (TdP), a heart rhythm disturbance that can lead to syncope or sudden death. In a large observational study in Sweden, methadone was associated with a particularly high incidence of TdP, especially in younger patients. The incidence of TdP was 41.9 cases per 100,000 users of methadone in the 18–64-year-old age group.
In this study of TdP, methadone was the highest-risk drug in the 18–64-year-old group, with the sole exception of the antiarrhythmic drug amiodarone, which was associated with 66.5 cases of TdP per 100,000 amiodarone users. The high incidence of TdP in amiodarone-treated patients may indicate correlation rather than causation, because amiodarone is often prescribed to patients with preexisting heart conditions that independently increase the risk of TdP.

Methadone likely causes cardiac arrhythmias (such as TdP) via two mechanisms. Like many other cardiotoxic drugs, methadone blocks the hERG K+ channel. The two enantiomers of methadone inhibit hERG channels with different potency. Dextromethadone, which is less potent as an opioid, is more potent at blocking the hERG channel, with an IC50 of ~12 μM. Levomethadone has a lower affinity, with an IC50 of ~29 μM at the hERG channel. Methadone is also known to block the Nav1.5 voltage-gated Na+ channel (SCN5A) with an IC50 of ~10 μM, which is similar to the local anesthetic bupivacaine. Both enantiomers of methadone block the Nav1.5 channel with similar affinities. Bupivacaine is especially cardiotoxic among local anesthetics, and it is believed to act via this same sodium channel. Plasma concentrations of methadone in recovering addicts can reach 4 μM during therapy, so the actions of methadone at both the hERG potassium channel and the Nav1.5 sodium channel are possibly clinically relevant in producing cardiac side effects. This also suggests that levomethadone is not completely free of cardiac toxicity.

Mechanism of action

Levomethadone (the R-(–)-methadone enantiomer) is a μ-opioid receptor agonist with higher intrinsic activity than morphine, but lower affinity. Dextromethadone (the S-(+)-methadone enantiomer) has a much lower affinity for the μ-opioid receptor than levomethadone. Both enantiomers bind to the glutamatergic NMDA (N-methyl-D-aspartate) receptor, acting as noncompetitive antagonists. Methadone has been shown to reduce neuropathic pain in rat models, primarily through NMDA receptor antagonism. NMDA antagonists such as dextromethorphan, ketamine, tiletamine, and ibogaine are being studied for their role in decreasing the development of tolerance to opioids and as possible agents for eliminating addiction, tolerance, and withdrawal, possibly by disrupting memory circuitry. Acting as an NMDA antagonist may be one mechanism by which methadone decreases craving for opioids and tolerance, and it has been proposed as a possible mechanism for methadone's distinctive efficacy in the treatment of neuropathic pain. Methadone has also been found to act as a potent, noncompetitive antagonist of α3β4 neuronal nicotinic acetylcholine receptors, in rat receptors expressed in human embryonic kidney cell lines.

Metabolism

Methadone has a slow metabolism and very high fat solubility, making it longer lasting than morphine-based drugs. Methadone has a typical elimination half-life of 15 to 60 hours, with a mean of around 22 hours. However, metabolism rates vary greatly between individuals, by up to a factor of 100, ranging from as few as 4 hours to as many as 130 hours, or even 190 hours. This variability is apparently due to genetic variability in the production of the associated cytochrome enzymes CYP3A4, CYP2B6, and CYP2D6. Many substances can also induce, inhibit, or compete with these enzymes, further affecting (sometimes dangerously) methadone's half-life. A longer half-life frequently allows for administration only once a day in opioid withdrawal management and maintenance programs.
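As a rough illustration of why half-life drives dosing frequency, the following Python sketch applies the textbook steady-state accumulation formula for a simple one-compartment pharmacokinetic model (an assumption of ours, not a model from this article; the half-life values are taken from the range quoted above):

```python
from math import exp, log

def accumulation_ratio(half_life_h: float, dosing_interval_h: float) -> float:
    """Steady-state accumulation ratio R = 1 / (1 - exp(-ke * tau)) for
    repeated dosing in an idealized one-compartment model."""
    ke = log(2) / half_life_h          # first-order elimination rate constant
    return 1.0 / (1.0 - exp(-ke * dosing_interval_h))

# Once-daily dosing (tau = 24 h) at half-lives spanning the quoted range:
for t_half in (8, 24, 55):
    print(f"t1/2 = {t_half:2d} h -> accumulation ratio {accumulation_ratio(t_half, 24):.2f}")
# A 55 h half-life accumulates to roughly 3.8x a single dose at steady state,
# while an 8 h half-life barely accumulates; this is why rapid metabolizers
# may need split (twice-daily) dosing to avoid troughs between doses.
```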
People who metabolize methadone rapidly, on the other hand, may require twice-daily dosing to obtain sufficient symptom alleviation while avoiding excessive peaks and troughs in their blood concentrations and associated effects. This can also allow lower total doses in some such people. The analgesic activity is shorter than the pharmacological half-life; dosing for pain control usually requires multiple doses per day, normally dividing the daily dosage for administration at 8-hour intervals. The main metabolic pathway involves N-demethylation by CYP3A4 in the liver and intestine to give 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP). This inactive product, as well as the inactive 2-ethyl-5-methyl-3,3-diphenyl-1-pyrroline (EMDP), produced by a second N-demethylation, are detectable in the urine of those taking methadone. Route of administration The most common route of administration at a methadone clinic is a racemic oral solution, though in Germany, only the R enantiomer (the L optical isomer) has traditionally been used, as it is responsible for most of the desired opioid effects. The single-isomer form is becoming less common due to the higher production costs. Methadone is available in traditional pills, sublingual tablets, and two different formulations designed for the person to drink. Drinkable forms include a ready-to-dispense liquid (sold in the United States as Methadose) and Diskets (known on the street as "wafers" or "biscuits"), tablets which are dispersible in water for oral administration, used similarly to Alka-Seltzer. The liquid form is the most common as it allows for smaller dose changes. Methadone is almost as effective when administered orally as by injection. Oral medication is usually preferable because it offers safety and simplicity, and represents a step away from injection-based drug abuse in those recovering from addiction. U.S. federal regulations require the oral form in addiction treatment programs. Injecting methadone pills can cause collapsed veins, bruising, swelling, and possibly other harmful effects. Methadone pills often contain talc that, when injected, produces a swarm of tiny solid particles in the blood, causing numerous minor blood clots. These particles cannot be filtered out before injection, and will accumulate in the body over time, especially in the lungs and eyes, producing various complications such as pulmonary hypertension, an irreversible and progressive disease. The formulation sold under the brand name Methadose (a flavored liquid suspension for oral dosing, commonly used for maintenance purposes) should not be injected either. Information leaflets included in packs of UK methadone tablets state that the tablets are for oral use only and that use by any other route can cause serious harm. In addition to this warning, additives have now been included in the tablet formulation to make use by the IV route more difficult. Methadone is also available in the UK in ampoules of 50 mg/ml and 10 mg/ml for IV/IM/SC use. Prescribing the injectable formulation was more common in the 1990s, with prescribers reporting that up to 9-10% of all methadone prescriptions were for ampoules.
This practice is much less common nowadays. Chemistry Detection in biological fluids Methadone and its major metabolite, 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP), are often measured in urine as part of a drug abuse testing program, in plasma or serum to confirm a diagnosis of poisoning in hospitalized victims, or in whole blood to assist in a forensic investigation of a traffic or other criminal violation or a case of sudden death. Methadone usage history is considered in interpreting the results, as a chronic user can develop tolerance to doses that would incapacitate an opioid-naïve individual. Chronic users often have high methadone and EDDP baseline values. Conformation The protonated form of methadone takes on an extended conformation, while the free base is more compact. In particular, it was found that there is an interaction between the tertiary amine and the carbonyl carbon of the ketone function (R3N ••• >C=O) that limits the molecule's conformational freedom, though the distance (291 pm by X-ray) is far too long to represent a true chemical bond. However, it does represent the initial trajectory of attack of an amine on a carbonyl group and was an important piece of experimental evidence for the proposal of the Bürgi–Dunitz angle for carbonyl addition reactions. History Methadone was developed in 1937 in Germany by scientists working for I.G. Farbenindustrie AG at the Farbwerke Hoechst who were looking for a synthetic opioid that could be created from readily available precursors, to solve Germany's opium and morphine shortage problem. On 11 September 1941, Bockmühl and Ehrhart filed an application for a patent for a synthetic substance they called Hoechst 10820 or Polamidon (a name still in regular use in Germany), whose structure had little relation to morphine or other "true opiates" such as diamorphine (heroin), desomorphine (Permonid), nicomorphine (Vilan), codeine, dihydrocodeine, oxymorphone (Opana), hydromorphone (Dilaudid), oxycodone (OxyContin), hydrocodone (Dicodid), and other closely related opium alkaloid derivatives and analogues. It was brought to market in 1943 and was widely used by the German army during WWII as a substitute for morphine. In the 1930s, pethidine (meperidine) went into production in Germany; however, the production of methadone, then being developed under the designation Hoechst 10820, was not carried forward at the time because of side effects discovered in the early research. After the war, all German patents, trade names, and research records were requisitioned and expropriated by the Allies. The records on the research work of the I.G. Farbenkonzern at the Farbwerke Hoechst were confiscated by the U.S. Department of Commerce Intelligence, investigated by a Technical Industrial Committee of the U.S. Department of State, and then brought to the US. The report published by the committee noted that while methadone itself was potentially addictive, it produced "considerably" less euphoria, sedation, and respiratory depression than morphine at equianalgesic doses and was thus interesting as a commercial drug. The same report also compared methadone to pethidine. German researchers reported that methadone was capable of producing strong morphine-like physical dependence, characterized by opioid withdrawal symptoms of lesser severity and intensity than morphine's, but with a considerably prolonged or protracted withdrawal syndrome when compared to morphine.
Morphine produced higher rates of self-administration and reinforcing behaviour in both human and animal subjects when compared to both methadone and pethidine. In comparison to equianalgesic doses of pethidine (Demerol), methadone was shown to produce less euphoria, but higher rates of constipation, and roughly equal levels of respiratory depression and sedation. In the early 1950s, methadone (usually the racemic hydrochloride salt mixture) was also investigated for use as an antitussive. Isomethadone, noracymethadol, LAAM, and normethadone were first developed in Germany, the United Kingdom, Belgium, Austria, Canada, and the United States in the thirty or so years after the 1937 discovery of pethidine, the first synthetic opioid used in medicine. These synthetic opioids satiate opiate cravings for longer and more deeply, and generate very strong analgesic effects, due to their long metabolic half-lives and strong affinity at the mu-opioid receptor sites. Therefore, they impart much of the satiating and anti-addictive effects of methadone by suppressing drug cravings. It was only in 1947 that the drug was given the generic name "methadone" by the Council on Pharmacy and Chemistry of the American Medical Association. Since the patent rights of the I.G. Farbenkonzern and Farbwerke Hoechst were no longer protected, each pharmaceutical company interested in the formula could buy the rights for the commercial production of methadone for just one dollar (MOLL 1990). Methadone was introduced into the United States in 1947 by Eli Lilly and Company as an analgesic under the trade name Dolophine. An urban myth later arose that Nazi leader Adolf Hitler ordered the manufacture of methadone or that the brand name 'Dolophine' was named after him, probably based on the similarity of "doloph" with "Adolph". (The pejorative term "adolphine" would appear in the early 1970s.) However, the name "Dolophine" was a contraction of "Dolo", from the Latin word dolor (pain), and finis, the Latin word for "end". Therefore, Dolophine literally means "pain end". Methadone was studied as a treatment for opioid addiction at the Addiction Research Center of the Narcotics Farm in Lexington, Kentucky in the 1950s, and by Rockefeller University physicians Vincent Dole and Marie Nyswander in the 1960s in New York City. By 1976, methadone clinics had opened in cities including Chicago, New York, and New Haven, with some 38,000 patients treated in New York City alone. Society and culture Brand names Brand names include Dolophine, Symoron, Amidone, Methadose, Physeptone, Metadon, Metadol, Metadol-D, Heptanon and Heptadon, among others. Economics In the US, generic methadone tablets are inexpensive, with retail prices ranging from $0.25 to $2.50 per defined daily dose. Methadone maintenance clinics in the US may be covered by private insurance, Medicaid, or Medicare. Medicare covers methadone under the prescription drug benefit, Medicare Part D, when it is prescribed for pain, but not when it is used for opioid dependence treatment, because it cannot be dispensed in a retail pharmacy for this purpose. In California, methadone maintenance treatment is covered under the medical benefit. Patients' eligibility for methadone maintenance treatment is most often contingent on their being enrolled in substance abuse counseling. People on methadone maintenance in the US either have to pay cash or, if covered by insurance, must complete a pre-determined number of hours per month in therapeutic groups or counseling.
The United States Department of Veterans Affairs (VA) Alcohol and Drug Dependence Rehabilitation Program offers methadone services to eligible veterans enrolled in the VA health care system. Methadone maintenance treatment (MMT) cost analyses often compare the cost of clinic visits with the overall societal costs of illicit opioid use. A preliminary cost analysis conducted in 2016 by the US Department of Defense determined that methadone treatment, which includes psychosocial and support services, may cost an average of $126.00 per week, or $6,552.00 per year. The average cost for one full year of methadone maintenance treatment is approximately $4,700 per patient, whereas one full year of imprisonment costs approximately $24,000 per person. Regulation United States and Canada Methadone is a Schedule I controlled substance in Canada and Schedule II in the United States, with an ACSCN of 9250 and a 2014 annual aggregate manufacturing quota of 31,875 kilos for sale. Methadone intermediate is also controlled, under ACSCN 9226, also in Schedule II, with a quota of 38,875 kilos. In most countries of the world, methadone is similarly restricted. The salts of methadone in use are the hydrobromide (free base conversion ratio 0.793), hydrochloride (0.894), and HCl monohydrate (0.850). Methadone is also regulated internationally as a Schedule I controlled substance under the United Nations Single Convention on Narcotic Drugs of 1961. Methadone clinics In the United States, methadone prescribed for opioid use disorder (OUD) requires intensive monitoring and must be obtained in person from an Opioid Treatment Program, colloquially known as a 'methadone clinic'. According to federal laws, methadone cannot be prescribed by a doctor and obtained from a pharmacy to treat addiction. Because of its long half-life, methadone is almost invariably prescribed to be taken in a single daily dose. At nearly all methadone clinics in the US, patients must visit a clinic to receive and take their dose under the supervision of a nurse. Both patients who are new to methadone treatment and high-risk patients (such as those who are using drugs and alcohol, including cannabis in some states) must visit the clinic daily. Other countries In Russia, methadone treatment is illegal. In 2008, the Chief Sanitary Inspector of Russia, Gennadiy Onishchenko, claimed that Russian health officials were not convinced of methadone's efficacy in treating heroin or opioid addiction. Instead of replacement therapy and gradual reduction of illicit drug use, Russian doctors encouraged immediate cessation and withdrawal. People who use drugs were generally given sedatives and non-opioid analgesics to cope with withdrawal symptoms. Robson Oliveira, an assistant to a Brazilian footballer, was arrested in 2019 upon arriving in Russia with methadone tablets, sold legally in other countries, for what was considered drug trafficking under Russian law. As of 2015, China had the largest methadone maintenance treatment program, with over 250,000 people in over 650 clinics in 27 provinces.
Biology and health sciences
Pain treatments
Health
20963
https://en.wikipedia.org/wiki/M%C3%B6bius%20inversion%20formula
Möbius inversion formula
In mathematics, the classic Möbius inversion formula is a relation between pairs of arithmetic functions, each defined from the other by sums over divisors. It was introduced into number theory in 1832 by August Ferdinand Möbius. A large generalization of this formula applies to summation over an arbitrary locally finite partially ordered set, with Möbius' classical formula applying to the set of the natural numbers ordered by divisibility: see incidence algebra. Statement of the formula The classic version states that if $g$ and $f$ are arithmetic functions satisfying $g(n)=\sum_{d\mid n}f(d)$ for every integer $n\ge 1$, then $f(n)=\sum_{d\mid n}\mu(d)\,g(n/d)$ for every integer $n\ge 1$, where $\mu$ is the Möbius function and the sums extend over all positive divisors $d$ of $n$ (indicated by $d\mid n$ in the above formulae). In effect, the original $f(n)$ can be determined given $g(n)$ by using the inversion formula. The two sequences are said to be Möbius transforms of each other. The formula is also correct if $f$ and $g$ are functions from the positive integers into some abelian group (viewed as a $\mathbb{Z}$-module). In the language of Dirichlet convolutions, the first formula may be written as $g = \mathit{1} * f$, where $*$ denotes the Dirichlet convolution and $\mathit{1}$ is the constant function $\mathit{1}(n)=1$. The second formula is then written as $f = \mu * g$. Many specific examples are given in the article on multiplicative functions. The theorem follows because $*$ is (commutative and) associative, and $\mathit{1} * \mu = \varepsilon$, where $\varepsilon$ is the identity function for the Dirichlet convolution, taking values $\varepsilon(1)=1$ and $\varepsilon(n)=0$ for all $n>1$. Thus $\mu * g = \mu * (\mathit{1} * f) = (\mu * \mathit{1}) * f = \varepsilon * f = f$. Replacing $f,g$ by $\ln f,\ln g$, we obtain the product version of the Möbius inversion formula: $g(n)=\prod_{d\mid n}f(d)$ if and only if $f(n)=\prod_{d\mid n}g(n/d)^{\mu(d)}$. Series relations Let $a_n=\sum_{d\mid n}b_d$, so that $b_n=\sum_{d\mid n}\mu(n/d)\,a_d$ is its transform. The transforms are related by means of series: the Lambert series $\sum_{n=1}^{\infty}a_n x^n=\sum_{n=1}^{\infty}b_n\frac{x^n}{1-x^n}$ and the Dirichlet series $\sum_{n=1}^{\infty}\frac{a_n}{n^s}=\zeta(s)\sum_{n=1}^{\infty}\frac{b_n}{n^s}$, where $\zeta(s)$ is the Riemann zeta function. Repeated transformations Given an arithmetic function, one can generate a bi-infinite sequence of other arithmetic functions by repeatedly applying the first summation. For example, if one starts with Euler's totient function $\varphi$, and repeatedly applies the transformation process, one obtains: $\varphi$, the totient function; $\varphi*\mathit{1}=\operatorname{Id}$, where $\operatorname{Id}(n)=n$ is the identity function; $\operatorname{Id}*\mathit{1}=\sigma_1=\sigma$, the divisor function. If the starting function is the Möbius function itself, the list of functions is: $\mu$, the Möbius function; $\mu*\mathit{1}=\varepsilon$, where $\varepsilon$ is the unit function; $\varepsilon*\mathit{1}=\mathit{1}$, the constant function; $\mathit{1}*\mathit{1}=\sigma_0=d$, where $d(n)$ is the number of divisors of $n$ (see divisor function). Both of these lists of functions extend infinitely in both directions. The Möbius inversion formula enables these lists to be traversed backwards. As an example the sequence starting with $\varphi$ is $\ldots,\ \mu*\mu*\varphi,\ \mu*\varphi,\ \varphi,\ \varphi*\mathit{1},\ \varphi*\mathit{1}*\mathit{1},\ \ldots$ The generated sequences can perhaps be more easily understood by considering the corresponding Dirichlet series: each repeated application of the transform corresponds to multiplication by the Riemann zeta function. Generalizations A related inversion formula more useful in combinatorics is as follows: suppose $F(x)$ and $G(x)$ are complex-valued functions defined on the interval $[1,\infty)$ such that $G(x)=\sum_{1\le n\le x}F(x/n)$; then $F(x)=\sum_{1\le n\le x}\mu(n)\,G(x/n)$. Here the sums extend over all positive integers $n$ which are less than or equal to $x$. This in turn is a special case of a more general form. If $\alpha(n)$ is an arithmetic function possessing a Dirichlet inverse $\alpha^{-1}(n)$, then if one defines $G(x)=\sum_{1\le n\le x}\alpha(n)\,F(x/n)$, then $F(x)=\sum_{1\le n\le x}\alpha^{-1}(n)\,G(x/n)$. The previous formula arises in the special case of the constant function $\alpha(n)=1$, whose Dirichlet inverse is $\alpha^{-1}(n)=\mu(n)$. A particular application of the first of these extensions arises if we have (complex-valued) functions $f(n)$ and $g(n)$ defined on the positive integers, with $g(n)=\sum_{1\le m\le n}f(\lfloor n/m\rfloor)$. By defining $F(x)=f(\lfloor x\rfloor)$ and $G(x)=g(\lfloor x\rfloor)$, we deduce that $f(n)=\sum_{1\le m\le n}\mu(m)\,g(\lfloor n/m\rfloor)$. A simple example of the use of this formula is counting the number of reduced fractions $0<a/b<1$, where $a$ and $b$ are coprime and $b\le n$. If we let $f(n)$ be this number, then $g(n)$ is the total number of fractions $0<a/b<1$ with $b\le n$, where $a$ and $b$ are not necessarily coprime. (This is because every fraction $a/b$ with $b\le n$ can be reduced to the fraction $\frac{a/\gcd(a,b)}{b/\gcd(a,b)}$, whose denominator is also at most $n$, and vice versa.) Here it is straightforward to determine $g(n)=n(n-1)/2$, but $f(n)$ is harder to compute.
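The reduced-fraction example can be checked numerically. The following Python sketch (an illustration added here, not part of the article) implements $\mu$ by trial factorization and verifies that inverting $g(n)=n(n-1)/2$ recovers the directly counted $f(n)$:

```python
from math import gcd

def mobius(n: int) -> int:
    """Moebius function mu(n), computed by trial factorization."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # squared prime factor => mu(n) = 0
                return 0
            result = -result     # one more distinct prime factor
        p += 1
    if n > 1:                    # a single remaining prime factor
        result = -result
    return result

def f_direct(n: int) -> int:
    """Count reduced fractions 0 < a/b < 1 with gcd(a, b) = 1 and b <= n."""
    return sum(1 for b in range(2, n + 1)
                 for a in range(1, b) if gcd(a, b) == 1)

def f_by_inversion(n: int) -> int:
    """Recover f from g(n) = n(n-1)/2 via f(n) = sum mu(m) g(floor(n/m))."""
    g = lambda k: k * (k - 1) // 2
    return sum(mobius(m) * g(n // m) for m in range(1, n + 1))

for n in (5, 10, 100):
    assert f_direct(n) == f_by_inversion(n)
    print(n, f_by_inversion(n))   # 5 -> 9, 10 -> 31, 100 -> 3043
```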
Another inversion formula is (where we assume that the series involved are absolutely convergent): $g(x)=\sum_{m=1}^{\infty}\frac{f(mx)}{m^s}$ for $x\ge 1$ if and only if $f(x)=\sum_{m=1}^{\infty}\mu(m)\frac{g(mx)}{m^s}$ for $x\ge 1$. As above, this generalises to the case where $\alpha(n)$ is an arithmetic function possessing a Dirichlet inverse $\alpha^{-1}(n)$: $g(x)=\sum_{m=1}^{\infty}\alpha(m)\frac{f(mx)}{m^s}$ for $x\ge 1$ if and only if $f(x)=\sum_{m=1}^{\infty}\alpha^{-1}(m)\frac{g(mx)}{m^s}$ for $x\ge 1$. For example, there is a well known proof relating the Riemann zeta function to the prime zeta function that uses the series-based form of Möbius inversion in the previous equation when $s=1$. Namely, by the Euler product representation of $\zeta(s)$ for $\Re(s)>1$, $\log\zeta(s)=-\sum_{p\ \mathrm{prime}}\log\left(1-p^{-s}\right)=\sum_{k\ge 1}\frac{P(ks)}{k}$, and inversion gives $P(s)=\sum_{k\ge 1}\frac{\mu(k)}{k}\log\zeta(ks)$, where $P(s)$ is the prime zeta function. These identities for alternate forms of Möbius inversion are found in. A more general theory of Möbius inversion formulas partially cited in the next section on incidence algebras is constructed by Rota in. Multiplicative notation As Möbius inversion applies to any abelian group, it makes no difference whether the group operation is written as addition or as multiplication. This gives rise to the following notational variant of the inversion formula: $G(n)=\prod_{d\mid n}F(d)$ if and only if $F(n)=\prod_{d\mid n}G(n/d)^{\mu(d)}$. Proofs of generalizations The first generalization can be proved as follows. We use Iverson's convention that $[\text{condition}]$ is the indicator function of the condition, being 1 if the condition is true and 0 if false. We use the result that $\sum_{d\mid n}\mu(d)=[n=1]$, that is, $\mathit{1}*\mu=\varepsilon$, where $\varepsilon$ is the unit function. We have the following: $\sum_{1\le n\le x}\mu(n)\,G(x/n)=\sum_{1\le n\le x}\mu(n)\sum_{1\le m\le x/n}F\!\left(\frac{x}{mn}\right)=\sum_{1\le r\le x}F(x/r)\sum_{n\mid r}\mu(n)=\sum_{1\le r\le x}F(x/r)\,[r=1]=F(x)$, where the middle step collects the terms with $mn=r$. The proof in the more general case where $\alpha(n)$ replaces 1 is essentially identical, as is the second generalisation. On posets For a poset $P$, a set endowed with a partial order relation $\le$, define the Möbius function $\mu$ of $P$ recursively by $\mu(s,s)=1$ for $s\in P$, and $\mu(s,u)=-\sum_{s\le t<u}\mu(s,t)$ for $s<u$ in $P$. (Here one assumes the summations are finite.) Then for $f,g:P\to K$, where $K$ is a commutative ring, we have $g(t)=\sum_{s\le t}f(s)$ for all $t\in P$ if and only if $f(t)=\sum_{s\le t}g(s)\,\mu(s,t)$ for all $t\in P$. (See Stanley's Enumerative Combinatorics, Vol 1, Section 3.7.) The classical arithmetic Möbius function is the special case of the poset $P$ of positive integers ordered by divisibility: that is, for positive integers $s,t$, we define the partial order $s\preccurlyeq t$ to mean that $s$ is a divisor of $t$. Contributions of Weisner, Hall, and Rota
Mathematics
Subdisciplines
null
382683
https://en.wikipedia.org/wiki/Landing%20gear
Landing gear
Landing gear is the undercarriage of an aircraft or spacecraft that is used for taxiing, takeoff or landing. For aircraft, it is generally needed for all three of these. It was also formerly called alighting gear by some manufacturers, such as the Glenn L. Martin Company. For aircraft, Stinton makes the terminology distinction undercarriage (British) = landing gear (US). For aircraft, the landing gear supports the craft when it is not flying, allowing it to take off, land, and taxi without damage. Wheeled landing gear is the most common, with skis or floats needed to operate from snow/ice/water and skids for vertical operation on land. Retractable undercarriages fold away during flight, which reduces drag, allowing for faster airspeeds. Landing gear must be strong enough to support the aircraft, and its design affects the weight, balance and performance. It often comprises three wheels, or wheel-sets, giving a tripod effect. Some unusual landing gear have been evaluated experimentally. These include: no landing gear (to save weight), made possible by operating from a catapult cradle and flexible landing deck; air cushion (to enable operation over a wide range of ground obstacles and water/snow/ice); tracked (to reduce runway loading). For launch vehicles and spacecraft landers, the landing gear usually only supports the vehicle on landing and during subsequent surface movement, and is not used for takeoff. Given their varied designs and applications, there exist dozens of specialized landing gear manufacturers. The three largest are Safran Landing Systems, Collins Aerospace (part of Raytheon Technologies) and Héroux-Devtek. Aircraft The landing gear represents 2.5 to 5% of the maximum takeoff weight (MTOW) and 1.5 to 1.75% of the aircraft cost, but 20% of the airframe direct maintenance cost. A suitably-designed wheel can support , tolerate a ground speed of 300 km/h and roll a distance of ; it has a 20,000-hour time between overhauls and a 60,000-hour or 20-year lifetime. Gear arrangements Wheeled undercarriages normally come in two types: Conventional landing gear or "taildragger", where there are two main wheels towards the front of the aircraft and a single, much smaller, wheel or skid at the rear; the same helicopter arrangement is called tricycle tailwheel. Tricycle landing gear, where there are two main wheels (or wheel assemblies) under the wings and a third smaller wheel in the nose; the same helicopter arrangement is called tricycle nosewheel. The PZL.37 Łoś was the first bomber aircraft with twin wheels on a single shock absorber. The taildragger arrangement was common during the early propeller era, as it allows more room for propeller clearance. Most modern aircraft have tricycle undercarriages. Taildraggers are considered harder to land and take off (because the arrangement is usually unstable, that is, a small deviation from straight-line travel will tend to increase rather than correct itself), and usually require special pilot training. A small tail wheel or skid/bumper may be added to a tricycle undercarriage to prevent damage to the underside of the fuselage if over-rotation occurs on take-off, leading to a tail strike. Aircraft with tail-strike protection include the B-29 Superfortress, Boeing 727 trijet and Concorde. Some aircraft with retractable conventional landing gear have a fixed tailwheel. Hoerner estimated the drag of the Bf 109's fixed tailwheel and compared it with that of other protrusions such as the pilot's canopy.
A third arrangement (known as tandem or bicycle) has the main and nose gear located fore and aft of the center of gravity (CG) under the fuselage, with outriggers on the wings. This is used when there is no convenient location on either side of the fuselage to attach the main undercarriage or to store it when retracted. Examples include the Lockheed U-2 spy plane and the Harrier jump jet. The Boeing B-52 uses a similar arrangement, except that the fore and aft gears each have two twin-wheel units side by side. Quadricycle gear is similar to bicycle gear but with two sets of wheels displaced laterally in the fore and aft positions. Raymer classifies the B-52 gear as quadricycle. The experimental Fairchild XC-120 Packplane had quadricycle gear located in the engine nacelles to allow unrestricted access beneath the fuselage for attaching a large freight container. Helicopters use skids, pontoons or wheels depending on their size and role. Retractable gear To decrease drag in flight, undercarriages retract into the wings and/or fuselage with the wheels flush with the surrounding surface, or concealed behind flush-mounted doors; this is called retractable gear. If the wheels do not retract completely but protrude partially exposed to the airstream, it is called semi-retractable gear. Most retractable gear is hydraulically operated, though some is electrically operated or even manually operated on very light aircraft. The landing gear is stowed in a compartment called a wheel well. Pilots confirming that their landing gear is down and locked refer to "three greens" or "three in the green", a reference to the electrical indicator lights (or painted panels of mechanical indicator units) from the nosewheel/tailwheel and the two main gears. Blinking green lights or red lights indicate the gear is in transit and neither up and locked nor down and locked. When the gear is fully stowed up with the up-locks secure, the lights often extinguish to follow the dark cockpit philosophy; some airplanes have gear-up indicator lights. Redundant systems are used to operate the landing gear, and redundant main gear legs may also be provided so the aircraft can be landed in a satisfactory manner in a range of failure scenarios. The Boeing 747 was given four separate and independent hydraulic systems (when previous airliners had two) and four main landing gear posts (when previous airliners had two). Safe landing would be possible if two main gear legs were torn off, provided they were on opposite sides of the fuselage. In the case of power failure in a light aircraft, an emergency extension system is always available. This may be a manually operated crank or pump, or a mechanical free-fall mechanism which disengages the uplocks and allows the landing gear to fall under gravity. Shock absorbers Aircraft landing gear includes wheels equipped with solid shock absorbers on light planes, and air/oil oleo struts on larger aircraft. Large aircraft As aircraft weights have increased, more wheels have been added and runway thickness has increased to keep within the runway loading limit. The Zeppelin-Staaken R.VI, a large German World War I long-range bomber of 1916, used eighteen wheels for its undercarriage, split between two wheels on its nose gear struts and sixteen wheels on its main gear units (split into four side-by-side quartets, two quartets of wheels per side, under each tandem engine nacelle), to support its loaded weight of almost .
Multiple "tandem wheels" on an aircraft—particularly for cargo aircraft, mounted to the fuselage lower sides as retractable main gear units on modern designs—were first seen during World War II, on the experimental German Arado Ar 232 cargo aircraft, which used a row of eleven "twinned" fixed wheel sets directly under the fuselage centerline to handle heavier loads while on the ground. Many of today's large cargo aircraft use this arrangement for their retractable main gear setups, usually mounted on the lower corners of the central fuselage structure. The prototype Convair XB-36 had most of its weight on two main wheels, which needed runways at least thick. Production aircraft used two four-wheel bogies, allowing the aircraft to use any airfield suitable for a B-29. A relatively light Lockheed JetStar business jet, with four wheels supporting , needed a thick flexible asphalt pavement. The Boeing 727-200 with four tires on two legs main landing gears required a thick pavement. The thickness rose to for a McDonnell Douglas DC-10-10 with supported on eight wheels on two legs. The heavier, , DC-10-30/40 were able to operate from the same thickness pavements with a third main leg for ten wheels, like the first Boeing 747-100, weighing on four legs and 16 wheels. The similar-weight Lockheed C-5, with 24 wheels, needs an pavement. The twin-wheel unit on the fuselage centerline of the McDonnell Douglas DC-10-30/40 was retained on the MD-11 airliner and the same configuration was used on the initial Airbus A340-200/300, which evolved in a complete four-wheel undercarriage bogie for the heavier Airbus A340-500/-600. The up to Boeing 777 has twelve main wheels on two three-axles bogies, like the later Airbus A350. The Airbus A380 has a four-wheel bogie under each wing with two sets of six-wheel bogies under the fuselage. The Antonov An-225, the largest cargo aircraft, had 4 wheels on the twin-strut nose gear units like the smaller Antonov An-124, and 28 main gear wheels. The A321neo has a twin-wheel main gear inflated to 15.7 bar (228 psi), while the A350-900 has a four-wheel main gear inflated to 17.1 bar (248 psi). STOL aircraft STOL aircraft have a higher sink-rate requirement if a carrier-type, no-flare landing technique has to be adopted to reduce touchdown scatter. For example, the Saab 37 Viggen, with landing gear designed for a 5m/sec impact, could use a carrier-type landing and HUD to reduce its scatter from 300 m to 100m. The de Havilland Canada DHC-4 Caribou used long-stroke legs to land from a steep approach with no float. Operation from water A flying boat has a lower fuselage with the shape of a boat hull giving it buoyancy. Wing-mounted floats or stubby wing-like sponsons are added for stability. Sponsons are attached to the lower sides of the fuselage. A floatplane has two or three streamlined floats. Amphibious floats have retractable wheels for land operation. An amphibious aircraft or amphibian usually has two distinct landing gears, namely a "boat" hull/floats and retractable wheels, which allow it to operate from land or water. Beaching gear is detachable wheeled landing gear that allows a non-amphibious floatplane or flying boat to be maneuvered on land. It is used for aircraft maintenance and storage and is either carried in the aircraft or kept at a slipway. Beaching gear may consist of individual detachable wheels or a cradle that supports the entire aircraft. 
In the former case, the beaching gear is manually attached or detached with the aircraft in the water; in the latter case, the aircraft is maneuvered onto the cradle. Helicopters are able to land on water using floats or a hull and floats. For take-off, a step and planing bottom are required to lift from the floating position to planing on the surface. For landing, a cleaving action is required to reduce the impact with the surface of the water. A vee bottom parts the water and chines deflect the spray to prevent it damaging vulnerable parts of the aircraft. Additional spray control may be needed using spray strips or inverted gutters. A step is added to the hull, just behind the center of gravity, to stop water clinging to the afterbody so the aircraft can accelerate to flying speed. The step allows air, known as ventilation air, to break the water suction on the afterbody. Two steps were used on the Kawanishi H8K. A step increases the drag in flight, although the drag contribution from the step can be reduced with a fairing. A faired step was introduced on the Short Sunderland III. One goal of seaplane designers was the development of an open-ocean seaplane capable of routine operation from very rough water. This led to changes in seaplane hull configuration. High length/beam ratio hulls and extended afterbodies improved rough-water capabilities. A hull much longer than its width also reduced drag in flight. An experimental development of the Martin Marlin, the Martin M-270, was tested with a new hull with a greater length/beam ratio of 15, obtained by adding 6 feet to both the nose and tail. Rough-sea capability can be improved with lower take-off and landing speeds because impacts with waves are reduced. The Shin Meiwa US-1A is a STOL amphibian with blown flaps and blown control surfaces. The ability to land and take off at relatively low speeds of about 45 knots, together with the hydrodynamic features of the hull (its long length/beam ratio and inverted spray gutter, for example), allows operation in wave heights of 15 feet. The inverted gutters channel spray to the rear of the propeller discs. Low-speed maneuvering is necessary between slipways and buoys and take-off and landing areas. Water rudders are used on seaplanes ranging in size from the Republic RC-3 Seabee to the Beriev A-40. Hydroflaps were used on the Martin Marlin and Martin SeaMaster. Hydroflaps, submerged at the rear of the afterbody, act as a speed brake or, differentially, as a rudder. A fixed fin, known as a skeg, has been used for directional stability. A skeg was added to the second step on the Kawanishi H8K flying boat hull. High-speed impacts in rough water between the hull and wave flanks may be reduced using hydro-skis, which hold the hull out of the water at higher speeds. Hydro-skis replace the need for a boat hull and only require a plain fuselage which planes at the rear. Alternatively, skis with wheels can be used for land-based aircraft which start and end their flight from a beach or floating barge. Hydro-skis with wheels were demonstrated as an all-purpose landing gear conversion of the Fairchild C-123, known as the Panto-base Stroukoff YC-134. A seaplane designed from the outset with hydro-skis was the Convair F2Y Sea Dart prototype fighter. The skis incorporated small wheels, with a third wheel on the fuselage, for ground handling. In the 1950s, hydro-skis were envisaged as a ditching aid for large piston-engined aircraft.
Water-tank tests done using models of the Lockheed Constellation, Douglas DC-4 and Lockheed Neptune concluded that chances of survival and rescue would be greatly enhanced by preventing critical damage associated with ditching. Shipboard operation The landing gear on fixed-wing aircraft that land on aircraft carriers has a higher sink-rate requirement because the aircraft are flown onto the deck with no landing flare. Other features are related to catapult take-off requirements for specific aircraft. For example, the Blackburn Buccaneer was pulled down onto its tail-skid to set the required nose-up attitude. The naval McDonnell Douglas F-4 Phantom II in UK service needed an extending nosewheel leg to set the wing attitude at launch. The landing gear for an aircraft using a ski-jump on take-off is subjected to loads of 0.5g which also last for much longer than a landing impact. Helicopters may have a deck-lock harpoon to anchor them to the deck. In-flight use Some aircraft have a requirement to use the landing gear as a speed brake. Flexible mounting of the stowed main landing-gear bogies on the Tupolev Tu-22R raised the aircraft flutter speed to . The bogies oscillated within the nacelle under the control of dampers and springs as an anti-flutter device. Gear common to different aircraft Some experimental aircraft have used gear from existing aircraft to reduce program costs. The Martin-Marietta X-24 lifting body used the nose/main gear from the North American T-39 / Northrop T-38, and the Grumman X-29 used gear from the Northrop F-5 / General Dynamics F-16. Other types Skids Skids have been used on aircraft landing gear. The North American X-15 used skids as the rear landing gear, and the Rockwell HiMAT used them in testing. When an airplane needs to land on surfaces covered by snow, the landing gear usually consists of skis or a combination of wheels and skis. Detachable Some aircraft use wheels for takeoff and jettison them when airborne for improved streamlining, without the complexity, weight and space requirements of a retraction mechanism. The wheels are sometimes mounted onto axles that are part of a separate "dolly" (for main wheels only) or "trolley" (for a three-wheel set with a nosewheel) chassis. Landing is done on skids or similar simple devices (fixed or retractable). The SNCASE Baroudeur used this arrangement. Historical examples include the "dolly"-using Messerschmitt Me 163 Komet rocket fighter, the Messerschmitt Me 321 Gigant troop glider, and the first eight "trolley"-using prototypes of the Arado Ar 234 jet reconnaissance bomber. The main disadvantage of using the takeoff dolly/trolley and landing skid(s) system on German World War II aircraft (intended for a sizable number of late-war German jet- and rocket-powered military aircraft designs) was that aircraft would likely be scattered all over a military airfield after they had landed from a mission, and would be unable to taxi on their own to an appropriately hidden "dispersal" location, which could easily leave them vulnerable to being shot up by attacking Allied fighters. A related contemporary example is the wingtip support wheels ("pogos") on the Lockheed U-2 reconnaissance aircraft, which fall away after take-off and drop to earth; the aircraft then relies on titanium skids on the wingtips for landing.
Rearwards and sideways retraction Some main landing gear struts on World War II aircraft, in order to allow a single-leg main gear to more efficiently store the wheel within either the wing or an engine nacelle, rotated the single gear strut through a 90° angle during the rearwards-retraction sequence to allow the main wheel to rest "flat" above the lower end of the main gear strut, or flush within the wing or engine nacelle, when fully retracted. Examples are the Curtiss P-40, Vought F4U Corsair, Grumman F6F Hellcat, Messerschmitt Me 210 and Junkers Ju 88. The Aero Commander family of twin-engined business aircraft also shares this feature on the main gears, which retract aft into the ends of the engine nacelles. The rearward-retracting nosewheel strut on the Heinkel He 219 and the forward-retracting nose gear strut on the later Cessna Skymaster similarly rotated 90 degrees as they retracted. On most World War II single-engined fighter aircraft (and even one German heavy bomber design) with sideways-retracting main gear, the main gear that retracted into the wings was raked forward in the "down" position for better ground handling, with a retracted position that placed the main wheels at some distance aft of their position when down. This led to a complex angular geometry for setting up the "pintle" angles at the top ends of the struts for the retraction mechanism's axis of rotation, with some aircraft, like the P-47 Thunderbolt and Grumman Bearcat, even mandating that the main gear struts lengthen as they were extended to give sufficient ground clearance for their large four-bladed propellers. One exception to the need for this complexity in many WWII fighter aircraft was Japan's famous Zero fighter, whose main gear stayed at a perpendicular angle to the centerline of the aircraft when extended, as seen from the side. Variable axial position of main wheels The main wheels on the Vought F7U Cutlass could move 20 inches between a forward and an aft position. The forward position was used for take-off to give a longer lever-arm for pitch control and a greater nose-up attitude. The aft position was used to reduce landing bounce and the risk of tip-back during ground handling. Tandem layout The tandem or bicycle layout is used on the Hawker Siddeley Harrier, which has two main wheels behind a single nose wheel under the fuselage and a smaller wheel near the tip of each wing. On second-generation Harriers, the wing is extended past the outrigger wheels to allow greater wing-mounted munition loads to be carried, or to permit wing-tip extensions to be bolted on for ferry flights. A tandem layout was tested by Martin using a specially modified Martin B-26 Marauder (the XB-26H) to evaluate its use on Martin's first jet bomber, the Martin XB-48. This configuration proved so manoeuvrable that it was also selected for the B-47 Stratojet. It was also used on the U-2, Myasishchev M-4, Yakovlev Yak-25, Yak-28 and Sud Aviation Vautour. A variation of the multi-tandem layout is also used on the B-52 Stratofortress, which has four main wheel bogies (two forward and two aft) underneath the fuselage and a small outrigger wheel supporting each wing-tip. The B-52's landing gear is also unique in that all four pairs of main wheels can be steered. This allows the landing gear to line up with the runway and thus makes crosswind landings easier (using a technique called crab landing).
Since tandem aircraft cannot rotate for takeoff, the forward gear must be long enough to give the wings the correct angle of attack during takeoff. During landing, the forward gear must not touch the runway first, otherwise the rear gear will slam down and may cause the aircraft to bounce and become airborne again. Crosswind landing accommodation One very early undercarriage incorporating castoring for crosswind landings was pioneered on the Blériot VIII design of 1908. It was later used in the much more famous Blériot XI Channel-crossing aircraft of 1909 and also copied in the earliest examples of the Etrich Taube. In this arrangement the main landing gear's shock absorption was taken up by a vertically sliding, bungee-cord-sprung upper member. The vertical post along which the upper member slid to take landing shocks also had its lower end as the rotation point for the forward end of the main wheel's suspension fork, allowing the main gear to pivot on moderate crosswind landings. Manually adjusted main-gear units on the B-52 can be set for crosswind take-offs. The capability rarely has to be used from SAC-designated airfields, which have major runways in the predominant strongest wind direction. The Lockheed C-5 Galaxy has swivelling six-wheel main units for crosswind landings and castoring rear units to prevent tire scrubbing on tight turns. "Kneeling" gear Both the nose gear and the wing-mounted main landing gear of the World War II German Arado Ar 232 cargo/transport aircraft were designed to kneel. This made it easier to load and unload cargo, and improved taxiing over ditches and on soft ground. Some early U.S. Navy jet fighters were equipped with "kneeling" nose gear consisting of small steerable auxiliary wheels on short struts located forward of the primary nose gear, allowing the aircraft to be taxied tail-high with the primary nose gear retracted. This feature was intended to enhance safety aboard aircraft carriers by redirecting the hot exhaust blast upwards, and to reduce hangar space requirements by enabling the aircraft to park with its nose underneath the tail of a similarly equipped jet. Kneeling gear was used on the North American FJ-1 Fury and on early versions of the McDonnell F2H Banshee, but was found to be of little use operationally and was omitted from later Navy fighters. The nosewheel on the Lockheed C-5 partially retracts against a bumper to assist in loading and unloading of cargo using ramps through the forward "tilt-up" hinged fuselage nose while stationary on the ground. The aircraft also tilts backwards. The Messier twin-wheel main units fitted to the Transall and other cargo aircraft can tilt forward or backward as necessary. The Boeing AH-64 Apache helicopter is able to kneel to fit inside the cargo hold of a transport aircraft and for storage. Tail support Aircraft landing gear includes devices to prevent fuselage contact with the ground by tipping back when the aircraft is being loaded. Some commercial aircraft have used tail props when parked at the gate. The Douglas C-54 had a critical CG location which required a ground handling strut. The Lockheed C-130 and Boeing C-17 Globemaster III use ramp supports. The unladen CG of the rear-engined Ilyushin Il-62 is aft of the main gear due to design decisions stemming from efforts to reduce overall weight, systems complexity and drag; to prevent the fuselage from tilting back when unloaded, the aircraft has a unique fully retractable vertical tail strut with castering wheels to allow towing or pushback.
The strut is not intended for taxiing or flight, when the weight of the crew, passengers, cargo and fuel provides the necessary fore-aft balance. Monowheel To minimize drag, modern gliders usually have a single wheel, retractable or fixed, centered under the fuselage, which is referred to as monowheel gear or monowheel landing gear. Monowheel gear is also used on some powered aircraft where drag reduction is a priority, such as the Europa Classic. Much like the Me 163 rocket fighter, some gliders from before the Second World War used a take-off dolly that was jettisoned on take-off; these gliders then landed on a fixed skid. A monowheel is necessarily accompanied by a tailwheel or tail skid, making the arrangement effectively a taildragger. Helicopters Light helicopters use simple landing skids to save weight and cost. The skids may have attachment points for wheels so that they can be moved for short distances on the ground. Skids are impractical for helicopters weighing more than four tons. Some high-speed machines have retractable wheels, but most use fixed wheels for their robustness and to avoid the need for a retraction mechanism. Tailsitter Experimental tailsitter aircraft use landing gear located in their tails for VTOL operation. Light aircraft For light aircraft, a type of landing gear which is economical to produce is a simple wooden arch laminated from ash, as used on some homebuilt aircraft. A similar arched gear is often formed from spring steel. The Cessna Airmaster was among the first aircraft to use spring steel landing gear. The main advantage of such gear is that no other shock-absorbing device is needed; the deflecting leaf provides the shock absorption. Folding gear The limited space available to stow landing gear has led to many complex retraction mechanisms, each unique to a particular aircraft. An early example, the Junkers Ju 288 (winner of the German Bomber B combat aircraft design competition), had a complex "folding" main landing gear unlike that of any other aircraft designed by either the Axis or Allied sides in the war: its single oleo strut was attached only to the lower end of its Y-form main retraction struts, which handled the twinned main gear wheels, and it folded by swiveling downwards and aftwards during retraction to shorten the main gear's length for stowage in the engine nacelle in which it was mounted. However, the single pivot-point design also led to numerous incidents of collapsed main gear units on its prototype airframes. Tracked Increased contact area can be obtained with very large wheels, many smaller wheels or track-type gear. Tracked gear made by Dowty was fitted to a Westland Lysander in 1938 for taxi tests, then to a Fairchild Cornell and a Douglas Boston. Bonmartini, in Italy, fitted tracked gear to a Piper Cub in 1951. Track-type gear was also tested using a C-47, C-82 and B-50. A much heavier aircraft, an XB-36, was made available for further tests, although there was no intention of using it on production aircraft. The stress on the runway was reduced to one third that of the B-36 four-wheel bogie. Ground carriage Ground carriage is a long-term (after 2030) concept of flying without landing gear. It is one of many aviation technologies being proposed to reduce greenhouse gas emissions. Leaving the landing gear on the ground reduces weight and drag. Leaving it behind after take-off had been done before for a different reason, namely military objectives, during World War II using the "dolly" and "trolley" arrangements of the German Me 163B rocket fighter and the Arado Ar 234A prototype jet reconnaissance bomber.
Steering There are several types of steering. Taildragger aircraft may be steered by rudder alone (depending upon the prop wash produced by the aircraft to turn it) with a freely pivoting tail wheel, or by a steering linkage with the tail wheel, or by differential braking (the use of independent brakes on opposite sides of the aircraft to turn the aircraft by slowing one side more sharply than the other). Aircraft with tricycle landing gear usually have a steering linkage with the nosewheel (especially in large aircraft), but some allow the nosewheel to pivot freely and use differential braking and/or the rudder to steer the aircraft, like the Cirrus SR22. Some aircraft require that the pilot steer by using rudder pedals; others allow steering with the yoke or control stick. Some allow both. Still others have a separate control, called a tiller, used for steering on the ground exclusively. Rudder When an aircraft is steered on the ground exclusively using the rudder, it needs a substantial airflow past the rudder, which can be generated either by the forward motion of the aircraft or by propeller slipstream. Rudder steering requires considerable practice to use effectively. Although it needs airflow past the rudder, it has the advantage of not needing any friction with the ground, which makes it useful for aircraft on water, snow or ice. Direct Some aircraft link the yoke, control stick, or rudder directly to the wheel used for steering. Manipulating these controls turns the steering wheel (the nose wheel for tricycle landing gear, and the tail wheel for taildraggers). The connection may be a firm one in which any movement of the controls turns the steering wheel (and vice versa), or it may be a soft one in which a spring-like mechanism twists the steering wheel but does not force it to turn. The former provides positive steering but makes it easier to skid the steering wheel; the latter provides softer steering (making it easy to overcontrol) but reduces the probability of skidding. Aircraft with retractable gear may disable the steering mechanism wholly or partially when the gear is retracted. Differential braking Differential braking depends on asymmetric application of the brakes on the main gear wheels to turn the aircraft. For this, the aircraft must be equipped with separate controls for the right and left brakes (usually on the rudder pedals). The nose or tail wheel usually is not equipped with brakes. Differential braking requires considerable skill. In aircraft with several methods of steering that include differential braking, differential braking may be avoided because of the wear it puts on the braking mechanisms. Differential braking has the advantage of being largely independent of any movement or skidding of the nose or tailwheel. Tiller A tiller in an aircraft is a small wheel or lever, sometimes accessible to one pilot and sometimes duplicated for both pilots, that controls the steering of the aircraft while it is on the ground. The tiller may be designed to work in combination with other controls such as the rudder or yoke. In large airliners, for example, the tiller is often used as the sole means of steering during taxi, and then the rudder is used to steer during takeoff and landing, so that both aerodynamic control surfaces and the landing gear can be controlled simultaneously when the aircraft is moving at aerodynamic speeds. 
Tires and wheels The specified selection criteria, e.g. minimum size, weight, or pressure, are used to select suitable tires and wheels from the manufacturer's catalog and industry standards found in the Aircraft Yearbook published by the Tire and Rim Association, Inc. Gear loading The choice of the main wheel tires is made on the basis of the static loading case. The total main gear load $F_m$ is calculated assuming that the aircraft is taxiing at low speed without braking: $F_m = \frac{l_n}{l_m + l_n}\,W$, where $W$ is the weight of the aircraft and $l_m$ and $l_n$ are the distances measured from the aircraft's center of gravity (cg) to the main and nose gear, respectively. The choice of the nose wheel tires is based on the nose wheel load $F_n$ during braking at maximum effort: $F_n = \frac{(l_m + \mu h_{cg})(W - L) + h_{cg}(T - D)}{l_m + l_n}$, where $L$ is the lift, $D$ is the drag, $T$ is the thrust, and $h_{cg}$ is the height of the aircraft cg from the static groundline. Typical values for the braking friction coefficient $\mu$ on dry concrete vary from 0.35 for a simple brake system to 0.45 for an automatic brake pressure control system. As both $L$ and $D$ are positive, the maximum nose gear load occurs at low speed. Reverse thrust decreases the nose gear load, and hence the condition $T = 0$ results in the maximum value: $F_n = \frac{l_m + \mu h_{cg}}{l_m + l_n}\,W$ (a worked numeric sketch follows at the end of this section). To ensure that the rated loads will not be exceeded in the static and braking conditions, a seven percent safety factor is used in the calculation of the applied loads. Inflation pressure Provided that the wheel load and configuration of the landing gear remain unchanged, the weight and volume of the tire will decrease with an increase in inflation pressure. From the flotation standpoint, a decrease in the tire contact area will induce a higher bearing stress on the pavement, which may reduce the number of airfields available to the aircraft. Braking will also become less effective due to a reduction in the frictional force between the tires and the ground. In addition, the decrease in the size of the tire, and hence the size of the wheel, could pose a problem if internal brakes are to be fitted inside the wheel rims. The arguments against higher pressure are of such a nature that commercial operators generally prefer lower pressures in order to maximize tire life and minimize runway stress. To prevent punctures from stones, Philippine Airlines had to operate their Hawker Siddeley 748 aircraft with pressures as low as the tire manufacturer would permit. However, too low a pressure can lead to an accident, as in Nigeria Airways Flight 2120. A rough general rule for required tire pressure is given by the manufacturer in their catalog. Goodyear, for example, advises the pressure to be 4% higher than that required for a given weight, or as a fraction of the rated static load and inflation. Tires of many commercial aircraft are required to be filled with nitrogen, and not subsequently diluted with more than 5% oxygen, to prevent auto-ignition of the gas, which may result from overheating brakes producing volatile vapors from the tire lining. Naval aircraft use different pressures when operating from a carrier and ashore. For example, the Northrop Grumman E-2 Hawkeye tire pressures are on ship and ashore. En-route deflation is used in the Lockheed C-5 Galaxy to suit airfield conditions at the destination, but adds excessive complication to the landing gear and wheels.
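As a worked numeric sketch of the load formulas above (illustrative only; the weight and geometry values below are invented for the example, not taken from the article):

```python
def gear_loads(W, l_m, l_n, h_cg, mu=0.45, L=0.0, D=0.0, T=0.0):
    """Static main gear load and maximum-effort braking nose gear load,
    per the moment balances above. Lengths in metres, forces in kN."""
    wheelbase = l_m + l_n
    F_main = W * l_n / wheelbase                                   # static taxi case
    F_nose = ((W - L) * (l_m + mu * h_cg) + (T - D) * h_cg) / wheelbase
    return F_main, F_nose

# Hypothetical narrow-body: 600 kN weight, cg 1.2 m ahead of the main gear,
# 15 m behind the nose gear, cg 3 m above ground, automatic braking (mu = 0.45),
# evaluated at low speed with L = D = T = 0, where the nose gear load peaks.
F_main, F_nose = gear_loads(W=600.0, l_m=1.2, l_n=15.0, h_cg=3.0)
factor = 1.07  # the seven percent safety factor mentioned above
print(f"main gear (static): {F_main:.0f} kN -> rated {F_main * factor:.0f} kN")
print(f"nose gear (braking): {F_nose:.0f} kN -> rated {F_nose * factor:.0f} kN")
```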
Future developments Airport community noise is an environmental issue which has brought into focus the contribution of aerodynamic noise from the landing gear. A NASA long-term goal is to confine objectionable aircraft noise to within the airport boundary. During the approach to land, the landing gear is lowered several miles from touchdown, and the landing gear is then the dominant airframe noise source, followed by deployed high-lift devices. With engines at a reduced power setting on the approach, it is necessary to reduce airframe noise to make a significant reduction in total aircraft noise. Adding fairings is one approach to reducing the noise from the landing gear; a longer-term approach is to address noise generation during initial design. Airline specifications require an airliner to reach up to 90,000 take-offs and landings and roll 500,000 km on the ground in its lifetime. Conventional landing gear is designed to absorb the energy of a landing and does not perform well at reducing ground-induced vibrations in the airframe during landing ground roll, taxi and take-off. Airframe vibrations and fatigue damage can be reduced using semi-active oleos, which vary damping over a wide range of ground speeds and runway qualities. Accidents Malfunctions or human errors (or a combination of these) related to retractable landing gear have been the cause of numerous accidents and incidents throughout aviation history. Distraction and preoccupation during the landing sequence played a prominent role in the approximately 100 gear-up landing incidents that occurred each year in the United States between 1998 and 2003. A gear-up landing, also known as a belly landing, is an accident that results from the pilot forgetting to lower the landing gear, or being unable to do so because of a malfunction. Although rarely fatal, a gear-up landing can be very expensive if it causes extensive airframe/engine damage. For propeller-driven aircraft, a prop strike may require an engine overhaul. Some aircraft have a stiffened fuselage underside or added features to minimize structural damage in a wheels-up landing. When the Cessna Skymaster was converted for a military spotting role (the O-2 Skymaster), fiberglass railings were added to the length of the fuselage; they were adequate to support the aircraft without damage if it was landed on a grassy surface. The Bombardier Dash 8 is notorious for its landing gear problems. Three incidents, all involving Scandinavian Airlines flights SK1209, SK2478 and SK2867, led Scandinavian to retire all of its Dash 8s. The cause of these incidents was a locking mechanism that failed to work properly. This also caused concern for many other airlines operating the type, which found similar problems; Bombardier Aerospace ordered all Dash 8s with 10,000 or more hours to be grounded, and it was soon found that 19 Horizon Airlines Dash 8s and 8 Austrian Airlines aircraft had locking mechanism problems. This caused several hundred flights to be canceled. On September 21, 2005, JetBlue Airways Flight 292 successfully landed with its nose gear turned 90 degrees sideways, resulting in a shower of sparks and flame after touchdown. On November 1, 2011, LOT Polish Airlines Flight LO16 successfully belly-landed at Warsaw Chopin Airport due to technical failures; all 231 people on board escaped without injury. Emergency extension systems In the event of a failure of the aircraft's landing gear extension mechanism, a backup is provided. This may be an alternate hydraulic system, a hand-crank, compressed air (nitrogen), a pyrotechnic device or a free-fall system. A free-fall or gravity-drop system uses gravity to deploy the landing gear into the down and locked position.
To accomplish this, the pilot activates a switch or mechanical handle in the cockpit, which releases the up-lock. Gravity then pulls the landing gear down and deploys it. Once in position, the landing gear is mechanically locked and safe to use for landing.

Ground resonance in rotorcraft

Rotorcraft with fully articulated rotors may experience a dangerous and self-perpetuating phenomenon known as ground resonance, in which the unbalanced rotor system vibrates at a frequency coinciding with the natural frequency of the airframe, causing the entire aircraft to violently shake or wobble in contact with the ground. Ground resonance occurs when shock is continuously transmitted to the turning rotors through the landing gear, causing the angles between the rotor blades to become uneven; this is typically triggered if the aircraft touches the ground with forward or lateral motion, or touches down on one corner of the landing gear due to sloping ground or the craft's flight attitude. The resulting violent oscillations may cause the rotors or other parts to catastrophically fail, detach, and/or strike other parts of the airframe; this can destroy the aircraft in seconds and critically endanger persons unless the pilot immediately initiates a takeoff or closes the throttle and reduces rotor pitch. Ground resonance was cited in 34 National Transportation Safety Board incident and accident reports in the United States between 1990 and 2008.

Rotorcraft with fully articulated rotors typically have shock-absorbing landing gear designed to prevent ground resonance; however, poor landing gear maintenance and improperly inflated tires may contribute to the phenomenon. Helicopters with skid-type landing gear are less prone to ground resonance than those with wheels.

Stowaways

Unauthorized passengers have been known to stow away on larger aircraft by climbing a landing gear strut and riding in the compartment meant for the wheels. There are extreme dangers to this practice, and numerous deaths have been reported. Dangers include a lack of oxygen at high altitude, temperatures well below freezing, crush injury or death from the gear retracting into its confined space, and falling out of the compartment during takeoff or landing.

Spacecraft

Launch vehicles

Landing gear has traditionally not been used on the vast majority of launch vehicles, which take off vertically and are destroyed on falling back to Earth. With some exceptions for suborbital vertical-landing vehicles (e.g., the Masten Xoie or Armadillo Aerospace's Lunar Lander Challenge vehicle), or for spaceplanes that use the vertical takeoff, horizontal landing (VTHL) approach (e.g., the Space Shuttle orbiter, or the USAF X-37), landing gear was largely absent from orbital vehicles during the early decades of spaceflight technology, when orbital space transport was the exclusive preserve of national-monopoly governmental space programs. Each spaceflight system through 2015 had relied on expendable boosters to begin each ascent to orbital velocity.

Advances during the 2010s in private space transport, where new competition to governmental space initiatives has emerged, have included the explicit design of landing gear into orbital booster rockets. SpaceX has initiated and funded a multimillion-dollar reusable launch system development program to pursue this objective.
As part of this program, SpaceX built, and flew eight times in 2012–2013, a first-generation test vehicle called Grasshopper with a large fixed landing gear in order to test low-altitude vehicle dynamics and control for vertical landings of a near-empty orbital first stage. A second-generation test vehicle called F9R Dev1 was built with extensible landing gear. The prototype was flown four times in 2014, with all landing attempts successful, for low-altitude tests before being self-destructed for safety reasons on a fifth test flight due to a blocked engine sensor port.

The orbital-flight versions of the test vehicles, Falcon 9 and Falcon Heavy, include a lightweight, deployable landing gear for the booster stage: a nested, telescoping piston on an A-frame. The total span of the four carbon fiber/aluminum extensible landing legs is approximately , and they weigh less than ; the deployment system uses high-pressure helium as the working fluid. The first test of the extensible landing gear was successfully accomplished in April 2014 on a Falcon 9 returning from an orbital launch, and was the first successful controlled ocean soft touchdown of a liquid-rocket-engine orbital booster. After a single successful booster recovery in 2015, and several in 2016, the recovery of SpaceX booster stages became routine by 2017. Landing legs had become an ordinary operational part of orbital spaceflight launch vehicles.

The newest launch vehicle under development at SpaceX, Starship, is expected to have landing legs on its first stage, called Super Heavy, like Falcon 9, but also has landing legs on its reusable second stage, a first for launch vehicle second stages. The first prototype of Starship, Starhopper, built in early 2019, had three fixed landing legs with replaceable shock absorbers. In order to reduce the mass of the flight vehicle and the payload penalty for a reusable design, the long-term plan is for Super Heavy to land directly back at the launch site on special ground equipment that is part of the launch mount.

Landers

Spacecraft designed to land safely on extraterrestrial bodies such as the Moon or Mars are known as either legged landers (for example the Apollo Lunar Module) or pod landers (for example Mars Pathfinder), depending on their landing gear. Pod landers are designed to land in any orientation, after which they may bounce and roll before coming to rest, at which time they have to be given the correct orientation to function. The whole vehicle is enclosed in crushable material or airbags for the impacts and may have opening petals to right it.

Features for landing and movement on the surface were combined in the landing gear for the Mars Science Laboratory.

For landing on low-gravity bodies, landing gear may include hold-down thrusters, harpoon anchors and foot-pad screws, all of which were incorporated in the design of the comet lander Philae for redundancy. In the case of Philae, however, both the harpoons and the hold-down thruster failed, resulting in the craft bouncing before landing for good in a non-optimal orientation.
Technology
Aircraft components
382737
https://en.wikipedia.org/wiki/Rowan
Rowan
The rowans ( or ) or mountain-ashes are shrubs or trees in the genus Sorbus of the rose family, Rosaceae. They are native throughout the cool temperate regions of the Northern Hemisphere, with the highest species diversity in the Himalaya, southern Tibet and parts of western China, where numerous apomictic microspecies occur. The name rowan was originally applied to the species Sorbus aucuparia and is also used for other species in the genus Sorbus. Natural hybrids, often including S. aucuparia and the whitebeam, Aria edulis (syn. Sorbus aria), give rise to many endemic variants in the UK.

Names

The Latin name sorbus, borrowed into Old English as syrfe, is from a root for "red, reddish-brown" (PIE *sor-/*ser-); English sorb is attested from the 1520s in the sense "fruit of the service tree", adopted via French sorbe from Latin sorbum "service-berry". Sorbus domestica is also known as "whitty pear", the adjective whitty meaning "pinnate". The name "mountain-ash" for Sorbus domestica is due to a superficial similarity of the rowan leaves to those of the ash, not to be confused with Fraxinus ornus, a true ash that is also known as "mountain ash". Sorbus torminalis is also known as "chequer tree"; its fruits, formerly used to flavour beer, are called "chequers", perhaps from the spotted pattern of the fruit.

The traditional name rowan was applied to the species Sorbus aucuparia. The name "rowan" is recorded from 1804, detached from an earlier rowan-tree, rountree, attested from the 1540s in northern dialects of English and Scots. It is often thought to be from a North Germanic source, perhaps related to Old Norse reynir (cf. Norwegian rogn, Danish røn, Swedish rönn), ultimately from the Germanic verb *raud-inan "to redden", in reference to the berries (as is the Latin name sorbus). Various dialectal variants of rowan are found in English, including ran, roan, rodan, royan, royne, round, and rune.

The Old English name of the rowan is cwic-beám, which survives in the name quickbeam (also quicken, quicken-tree, and variants). By the 19th century this name was reinterpreted as connected to the word witch, from a dialectal variant wick for quick, and names such as wicken-tree, wich-tree, wicky, and wiggan-tree, giving rise to names such as witch-hazel and witch-tree.

The tree has two names in Welsh, cerdinen and criafol. Criafol may be translated as "The Lamenting Fruit", likely derived from the Welsh tradition that the Cross of Christ was carved from the wood of this tree, and the subsequent association of the rowan's red fruit with the blood of Christ.

The Old Irish name is cairtheand, reflected in Modern Irish caorthann. The "arboreal" Bríatharogam in the Book of Ballymote associates the rowan with the letter luis, with the gloss "delightful to the eye (li sula) is luis, i.e. rowan (caertheand), owing to the beauty of its berries". Due to this, "delight of the eye" (vel sim.) has been reported as a "name of the rowan" by some commentators.

The most common Scots Gaelic name is caorann (), which appears in numerous Highland place names such as Beinn a' Chaorainn and Loch a' Chaorainn. Rowan was also the clan badge of the Malcolms and McLachlans. There were strong taboos in the Highlands against the use of any parts of the tree save the berries, except for ritual purposes. For example, a Gaelic threshing tool made of rowan, called a buaitean, was used on grain meant for rituals and celebrations.
In the Canadian provinces of Newfoundland and Labrador and Nova Scotia, this species is commonly referred to as a "dogberry" tree.

In German, Sorbus aucuparia is known as the Vogelbeerbaum ("bird-berry tree") or as Eberesche. The latter is a compound of the name of the ash tree (Esche) with what is contemporarily the name of the boar (Eber), but in fact the continuation of a Gaulish name, eburo- (also the name for a dark reddish-brown colour, cognate with Greek orphnos, Old Norse iarpr "brown"); like sorbus, eburo- seems to have referred to the colour of the berries. It is also recorded as a Gaulish name for the yew (which also has red berries); see also Eburodunum.

Botany

Rowans are mostly small deciduous trees tall, though a few are shrubs. Rowans are unrelated to the true ash trees of the genus Fraxinus, family Oleaceae. Though their leaves are superficially similar, those of Sorbus are alternate, while those of Fraxinus are opposite. Rowan leaves are arranged alternately, and are pinnate, with (7–)11–35 leaflets. A terminal leaflet is always present. The flowers are borne in dense corymbs; each flower is creamy white, and across with five petals. The fruit is a small pome diameter, bright orange or red in most species, but pink, yellow or white in some Asian species. The fruit are soft and juicy, which makes them a very good food for birds, particularly waxwings and thrushes, which then distribute the rowan seeds in their droppings. Due to their small size the fruits are often referred to as berries, but a true berry is a simple fruit produced from a single ovary, whereas a pome is an accessory fruit.

Rowan is used as a food plant by the larvae of some Lepidoptera species; see Lepidoptera that feed on Sorbus.

The best-known species is the European rowan Sorbus aucuparia, a small tree typically tall growing in a variety of habitats throughout northern Europe and in mountains in southern Europe and southwest Asia. Its berries are a favourite food for many birds and are a traditional wild-collected food in Britain and Scandinavia. It is one of the hardiest European trees, occurring to 71° north in Vardø Municipality in the far northern part of Arctic Norway, and has also become widely naturalised in northern North America.

The greatest diversity of form as well as the largest number of rowan species is in Asia, with very distinctive species such as Sargent's rowan Sorbus sargentiana with large leaves long and broad and very large corymbs with 200–500 flowers, and at the other extreme, small-leaf rowan Sorbus microphylla with leaves long and broad. While most are trees, the dwarf rowan Sorbus reducta is a low shrub to tall. Several of the Asian species are widely cultivated as ornamental trees.

North American native species in the genus Sorbus include the American mountain-ash Sorbus americana and showy mountain-ash Sorbus decora in the east and Sitka mountain-ash Sorbus sitchensis in the west.

Numerous hybrids, mostly behaving as true species reproducing by apomixis, occur between rowans and whitebeams; these are variably intermediate between their parents but generally resemble whitebeams more closely and are usually grouped with them (q.v.).

Uses

Rowans are excellent small ornamental trees for parks, gardens and wildlife areas. Several of the Asian species, such as white-fruited rowan (Sorbus oligodonta), are popular for their unusual fruit colour, and Sargent's rowan (Sorbus sargentiana) for its exceptionally large clusters of fruit.
Numerous cultivars have also been selected for garden use, several of them, such as the yellow-fruited Sorbus 'Joseph Rock', of hybrid origin. They are very attractive to fruit-eating birds, which is reflected in the old name "bird catcher". The wood is dense and used for carving and turning and for tool handles and walking sticks. Rowan fruit are a traditional source of tannins for mordanting vegetable dyes. In Finland, it has been a traditional wood of choice for horse sled shafts and rake spikes.

The fruit of European rowan (Sorbus aucuparia) can be made into a slightly bitter jelly, which in Britain is traditionally eaten as an accompaniment to game, and into jams and other preserves, either on their own or with other fruit. The fruit can also be a substitute for coffee beans, and has many uses in alcoholic beverages: to flavour liqueurs and cordials, to produce country wine, and to flavour ale. In Austria a clear rowan schnapps called Vogelbeerschnaps is distilled; Czechs make a rowan liquor called jeřabinka; the Polish Jarzębiak is a rowan-flavoured vodka; and the Welsh used to make a rowan wine called diodgriafel. Rowan cultivars with superior fruit for human food use are available but not common; mostly the fruits are gathered from wild trees growing on public lands.

Rowan fruit contains sorbic acid, and when raw also contains parasorbic acid (about 0.4%–0.7% in the European rowan), which causes indigestion and can lead to kidney damage; but heat treatment (cooking, heat-drying, etc.) and, to a lesser extent, freezing render it nontoxic by changing it to the benign sorbic acid. The fruit are also usually too astringent to be palatable when raw. Collecting them after the first frost (or putting them in the freezer) cuts down on the bitter taste as well.

Mythology and folklore

Mythology

In Sami mythology, the goddess Ravdna is the consort of the thunder-god Horagalles. Red berries of rowan were holy to Ravdna, and the name Ravdna resembles North Germanic words for the tree (for example, Old Norse reynir). In Norse mythology, the goddess Sif, wife of the thunder god Thor, has been linked with Ravdna. According to Skáldskaparmál the rowan is called "the salvation of Thor" because Thor once saved himself by clinging to it. It has been hypothesized that Sif was once conceived in the form of a rowan to which Thor clung.

In the Fianna Cycle of Irish mythology, The Pursuit of Diarmuid and Grainne sees the couple eloping, trying to escape the vengeance of the legendary leader Fionn Mac Cumhaill, whom Grainne had spurned. The pair came to a forest guarded by the giant Searbhán, who allowed them to rest and hunt in his forest as long as they did not eat the berries of his magical rowan tree. The pregnant Grainne desired the berries, and Diarmuid was compelled to kill Searbhán to obtain them. His mortal weapons being powerless against Searbhán, he used the giant's own iron club to kill him. The pair climbed high into the rowan tree to eat the sweetest berries, then rested in the tree afterwards. This was in violation of the advice of Aengus, the god of love, who had warned the couple that they should "not sleep in a cave with one opening, or a house with one door, or a tree with one branch, and that they would never be able to eat where they cooked, or sleep where they ate." Fionn Mac Cumhaill tracked the couple to the rowan tree and tricked Diarmuid into revealing himself through a game of chess.
Aengus spirited Grainne away, Diarmuid leapt to safety, and the pursuit continued.

Folk magic

The European rowan (Sorbus aucuparia) has a long tradition in European mythology and folklore. It was thought to be a magical tree that gave protection against malevolent beings. It was also called the "wayfarer's tree" or "traveller's tree" because it supposedly prevents those on a journey from getting lost. It was said in England that this was the tree on which the Devil hanged his mother.

British folklorists of the Victorian era reported the folk belief in the apotropaic powers of the rowan-tree, in particular in the warding off of witches. Such a report is given by Edwin Lees (1856) for the Wyre Forest in the English West Midlands. Sir James Frazer (1890) reported such a tradition in Scotland, where the tree was often planted near a gate or front door. According to Frazer, birds' droppings often contain rowan seeds, and if such droppings land in a fork or hole where old leaves have accumulated on a larger tree, such as an oak or a maple, they may result in a rowan growing as an epiphyte on the larger tree. Such a rowan is called a "flying rowan" and was thought of as especially potent against witches and black magic, and as a counter-charm against sorcery. In 1891, Charles Godfrey Leland also reported traditions of rowan's apotropaic powers against witches in English folklore, citing the Denham Tracts (collected between 1846 and 1859).

Rowan also serves as protection against fairies. For example, according to Thomas Keightley, mortals could safely witness fairy rades (mounted processions held by the fairies each year at the onset of summer) by placing a rowan branch over their doors.

Pagan revivalism

In Neo-Druidism, the rowan is known as the "portal tree". It is considered the threshold between this world and the otherworld, or between here and wherever one may be going; for example, it was placed at the gate to a property, signifying the crossing of the threshold between the path or street and the property of someone. According to Elen Sentier, "Threshold is a place of both ingress (the way in) and egress (the way out). Rowan is a portal, threshold tree offering you the chance of 'going somewhere ... and leaving somewhere."

Weather-lore

In Newfoundland, popular folklore maintains that a heavy crop of fruit means a hard or difficult winter. Similarly, in Finland and Sweden, the number of fruit on the trees was used as a predictor of the snow cover during winter, but here the belief was that the rowan "will not bear a heavy load of fruit and a heavy load of snow in the same year"; that is, a heavy fruit crop predicted a winter with little snow. However, as fruit production for a given summer is related to weather conditions the previous summer, with warm, dry summers increasing the amount of stored sugars available for subsequent flower and fruit production, it has no predictive relationship to the weather of the next winter. In Malax, Finland, a different belief was held: if the rowan flowers were plentiful, then the rye harvest would also be plentiful, and if the rowan flowered twice in a year, there would be many potatoes and many weddings that autumn. In Sipoo, people are noted as having said that winter had begun when the waxwings (Bombycilla garrulus) had eaten the last of the rowan fruit. In Sweden, it was also thought that if the rowan trees grew pale and lost colour, the autumn and winter would bring much illness.
Biology and health sciences
Rosales
Plants
382945
https://en.wikipedia.org/wiki/Soursop
Soursop
Soursop (also called graviola, guyabano, and in Latin America guanábana) is the fruit of Annona muricata, a broadleaf, flowering, evergreen tree. It is native to the tropical regions of the Americas and the Caribbean and is widely propagated. It is in the same genus, Annona, as cherimoya and is in the Annonaceae family.

The soursop is adapted to areas of high humidity and relatively warm winters; temperatures below will cause damage to leaves and small branches, and temperatures below can be fatal. The fruit becomes dry and is no longer good for concentrate.

With an aroma similar to pineapple, the flavor of the fruit has been described as a combination of strawberries and apple with sour citrus flavor notes, contrasting with an underlying thick creamy texture reminiscent of banana.

Soursop is widely promoted (sometimes as graviola) as an alternative cancer treatment, but there is no reliable medical evidence that it is effective for treating cancer or any disease.

Annona muricata

Annona muricata is a species of the genus Annona of the custard apple tree family, Annonaceae, which has edible fruit. The fruit is usually called soursop due to its slightly acidic taste when ripe. Annona muricata is native to the Caribbean and Central America but is now widely cultivated – and in some areas, becoming invasive – in tropical and subtropical climates throughout the world, such as India. The A. muricata fruit is generally called guanábana in Hispanic America, and the tree is a guanábano. Annona muricata is also the main host plant for tailed jay (Graphium agamemnon) caterpillars, which eat the leaves voraciously and usually attach themselves under the leaves to pupate.

Botanical description

Annona muricata is a small, upright, evergreen tree that can grow to about tall. Its young branches are hairy. Leaves are oblong to oval, to long and to wide. They are a glossy dark green with no hairs above, and paler and minutely hairy to hairless below. The leaf stalks are to long and without hairs.

Flower stalks (peduncles) are to long and woody. They appear opposite from the leaves or as an extra from near the leaf stalk, each with one or two flowers, occasionally a third. Stalks for the individual flowers (pedicels) are stout and woody, minutely hairy to hairless, and to with small bractlets nearer to the base which are densely hairy.

The petals are thick and yellowish. Outer petals meet at the edges without overlapping and are broadly ovate, to by to , tapering to a point with a heart-shaped base. They are evenly thick, and are covered with long, slender, soft hairs externally and matted finely with soft hairs within. Inner petals are oval-shaped and overlap. They measure roughly to by , and are sharply angled and tapering at the base. Margins are comparatively thin, with fine matted soft hairs on both sides. The receptacle is conical and hairy. The stamens are long and narrowly wedge-shaped. The connective tips terminate abruptly and the anther hollows are unequal. Sepals are quite thick and do not overlap. Carpels are linear and grow from one base. The ovaries are covered with dense reddish-brown hairs, are 1-ovuled, with a short style and a truncate stigma. Its pollen is shed as permanent tetrads.

The fruits are oval, dark green when immature, with a leathery, inedible skin that turns yellow-green during maturity. They can be up to long (individuals up to fifteen inches (35 centimeters) have been reported), with a moderately firm texture, and may weigh .
Their flesh is juicy, acidic, whitish, and aromatic somewhat like pineapple, although with a unique earthy aroma. Most of the immature segments are seedless, whereas mature fruit may contain as many as 200 seeds.

Distribution

Annona muricata is tolerant of poor soil and prefers lowland areas between the altitudes of 0 to . It cannot stand frost. The exact origin is unknown; it is native to the tropical regions of the Americas and is widely propagated. It is an introduced species on all temperate continents, especially in subtropical regions.

Cultivation

The plant is grown for its long, prickly, green fruit, which can have a mass of up to , making it probably the second biggest annona after the junglesop. Away from its native area, some limited production occurs as far north as southern Florida within USDA Zone 10; however, these are mostly garden plantings for local consumption. It is also grown in parts of China and Southeast Asia and is abundant on the island of Mauritius. The main suppliers of the fruit are Mexico, followed by Peru, Brazil, Ecuador, Guatemala, and Haiti. To aid soursop breeders and stimulate further development of genomic resources for this globally important plant family, the complete genome of Annona muricata was sequenced in 2021.

Uses

Culinary

The flesh of the fruit consists of an edible, white pulp, some fiber, and a core of indigestible black seeds. The pulp is used to make fruit nectar, smoothies, fruit juice drinks, as well as candies, sorbets, and ice cream flavorings. Due to the fruit's widespread cultivation, its derivative products are consumed in many countries, such as Jamaica, Mexico, Brazil, Venezuela, Colombia, and Fiji. The seeds are normally left in the preparation and removed while consuming, unless a blender is used for processing. Soursop is also a common ingredient in fresh fruit juices sold by street food vendors.

In Indonesia, the fruit is commonly called sirsak and sometimes made into dodol sirsak, a sweet which is made by boiling the soursop pulp in water and adding sugar until the mixture caramelizes and hardens. In the Philippines, it is called guyabano, derived from the Spanish guanábana, and is eaten ripe, or used to make juices, smoothies, or ice cream. Sometimes, the leaf is used in tenderizing meat. In Vietnam, this fruit is called mãng cầu Xiêm (Siamese soursop) in the south, or mãng cầu (soursop) in the north, and is used to make smoothies, or eaten as is. In Cambodia, this fruit is called tearb barung, literally "western custard-apple fruit". In Malaysia, it is known in Malay as durian belanda ("Dutch durian") and in East Malaysia, specifically among the Dusun people of Sabah, it is locally known as lampun. Popularly, it is eaten raw when it ripens, or used as one of the ingredients in ais kacang or ais batu campur. Usually the fruits are taken from the tree when they mature and left to ripen in a dark corner, after which they are eaten when fully ripe. The tree has a white flower with a very pleasing scent, especially in the morning. For people in Brunei Darussalam, this fruit is popularly known as "durian salat"; it is widely available and easily planted.

Soursop leaves are sold and consumed in Indonesia as herbal medicine. The leaves are usually boiled to make tea.

Subspecies as synonyms

Annona muricata var. borinquensis

Nutrition

Raw soursop is 81% water, 17% carbohydrates, 1% protein, and has negligible fat (see table).
In a reference amount of , the raw fruit supplies of food energy, and contains only vitamin C as a significant amount (23%) of the Daily Value, with no other micronutrients in appreciable amounts (table).

Phytochemicals

The compound annonacin is contained in the fruit, seeds, and leaves of soursop. The leaves of Annona muricata contain annonamine, which is an aporphine-class alkaloid containing a quaternary ammonium group. The plant also contains lichexanthone, a compound in the xanthone class.

Potential neurotoxicity

The Memorial Sloan-Kettering Cancer Center cautions, "alkaloids extracted from graviola may cause neuronal dysfunction". Annonacin has been shown in laboratory research to be neurotoxic. In 2010, the French food safety agency, Agence française de sécurité sanitaire des produits de santé, concluded that "it is not possible to confirm that the observed cases of atypical Parkinson syndrome ... are linked to the consumption of Annona muricata".

False cancer treatment claims

In 2008, the Federal Trade Commission in the United States stated that use of soursop to treat cancer was "bogus", and there was "no credible scientific evidence" that the extract of soursop sold by Bioque Technologies "can prevent, cure, or treat cancer of any kind." Also in 2008, a UK court case relating to the sale of Triamazon, a soursop product, resulted in the criminal conviction of a man under the terms of the UK Cancer Act for offering to treat people for cancer. A spokesman for the council that instigated the action stated, "it is as important now as it ever was that people are protected from those peddling unproven products with spurious claims as to their effects." The Memorial Sloan-Kettering Cancer Center and Cancer Research UK state that cancer treatment using soursop is not supported by reliable clinical evidence. According to Cancer Research UK, "Many sites on the internet advertise and promote graviola capsules as a cancer cure, but none of them are supported by any reputable scientific cancer organizations" and "there is no evidence to show that graviola works as a cure for cancer".
Biology and health sciences
Tropical and tropical-like fruit
Plants
383038
https://en.wikipedia.org/wiki/Carpool
Carpool
Carpooling is the sharing of car journeys so that more than one person travels in a car, preventing the need for others to drive to a location themselves. Carpooling is considered a Demand-Responsive Transport (DRT) service.

By having more people use one vehicle, carpooling reduces each person's travel costs, such as fuel costs and tolls, and the stress of driving. Carpooling is also a more environmentally friendly and sustainable way to travel, as sharing journeys reduces air pollution, carbon emissions, traffic congestion on the roads, and the need for parking spaces. Authorities often encourage carpooling, especially during periods of high pollution or high fuel prices. Car sharing is a good way to use the full seating capacity of a car, which would otherwise remain unused if it were just the driver using the car.

In 2009, carpooling represented 43.5% of all trips in the United States and 10% of commute trips. The majority of carpool commutes (over 60%) are "fam-pools" with family members. Carpool commuting is more popular for people who work in places with more jobs nearby, and who live in places with higher residential densities. Carpooling is significantly correlated with transport operating costs, including fuel prices and commute length, and with measures of social capital, such as time spent with others, time spent eating and drinking, and being unmarried. However, carpooling is significantly less common among people who spend more time at work, elderly people, and homeowners.

Operation

Drivers and passengers offer and search for journeys through one of the several media available. After finding a match, they contact each other to arrange any details for the journey(s). Costs, meeting points and other details, such as space for luggage, are agreed on. They then meet and carry out their shared car journey(s) as planned.

Carpooling is commonly implemented for commuting but is increasingly popular for longer one-off journeys, with the formality and regularity of arrangements varying between schemes and journeys. Carpooling is not always arranged for the whole length of a journey. Especially on long journeys, it is common for passengers to join only for parts of the journey and give a contribution based on the distance that they travel. This gives carpooling extra flexibility and enables more people to share journeys and save money.

Some carpooling is now organized through online marketplaces or ride-matching websites that allow drivers and passengers to find a travel match and/or make a secured transaction to share the planned travel cost. Like other online marketplaces, they use community-based trust mechanisms, such as user ratings, to create an optimal experience for users. Arrangements for carpooling can be made through many different media, including public websites, social media acting as marketplaces, employer websites, smartphone applications, carpooling agencies and pick-up points.

Initiatives

Many companies and local authorities have introduced programs to promote carpooling. In an effort to reduce traffic and encourage carpooling, some governments have introduced high-occupancy vehicle (HOV) lanes in which only vehicles with two or more occupants are allowed to drive. HOV lanes can create strong practical incentives for carpooling by reducing travel time and expense. In some countries, it is common to find parking spaces reserved for carpoolers.
In 2011, an organization called Greenxc created a campaign to encourage others to use this form of transportation in order to reduce their own carbon footprint. Carpooling, or car sharing as it is called in British English, is promoted by a national UK charity, Carplus, whose mission is to promote responsible car use in order to alleviate the financial, environmental and social costs of motoring today, and to encourage new approaches to car dependency in the UK. Carplus is supported by Transport for London as part of the British government initiative to reduce congestion and parking pressure, relieve the burden on the environment, and reduce traffic-related air pollution in London.

However, not all countries are helping carpooling to spread: in Hungary it is a tax crime to carry someone in a car for a cost share (or any payment) unless the driver has a taxi license, an invoice is issued, and taxes are paid. Several people were fined during a 2011 crackdown by undercover tax officers posing as passengers looking for a ride on carpooling websites. On 19 March 2012, Endre Spaller, a member of the Hungarian Parliament, interpellated Zoltán Cséfalvay, Secretary of State for the National Economy, about this practice; Cséfalvay replied that carpooling should be endorsed rather than punished, though care must be taken with people trying to turn it into a way to gain untaxed profit.

Cost sharing

Carpooling usually means dividing the travel expenses equally between all the occupants of the vehicle (driver and passengers). The driver does not try to earn money, but to share with several people the cost of a trip they would make anyway. The expenses to be divided basically include fuel and any tolls, but if vehicle depreciation, maintenance, insurance and the taxes paid by the driver are included in the calculation, the cost comes to around $1/mile.

There are platforms that facilitate carpooling by connecting people seeking passengers with those seeking drivers. Usually there is a fare set by the car driver and accepted by the passengers, agreed before the trip starts. The second generation of these platforms is designed to manage urban trips in real time, using travelers' smartphones. They make it possible to occupy a vehicle's empty seats on the fly, collecting and delivering passengers along its entire route (and not only at common points of origin and destination). This system automatically performs an equitable sharing of travel costs, allowing each passenger to reimburse the driver a fair share according to the benefit actually gained from the vehicle's use, proportional to the distance traveled by the passenger and the number of people sharing the car.
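As a concrete sketch of this kind of distance-proportional split, here is a minimal Python model. It is illustrative only, not the algorithm of any particular platform; the function name and its inputs are invented. Each unit of the route is priced at the trip cost divided by the trip length and shared equally among whoever is on board, driver included.

    # Minimal sketch of distance-proportional cost sharing.
    # Illustrative only: real platforms add service fees, rounding
    # rules and real-time matching on top of a split like this.

    def split_costs(total_cost, trip_km, segments):
        """segments: list of (passenger, board_km, alight_km) tuples.
        Each km costs total_cost / trip_km, shared equally among the
        driver and the passengers on board over that km."""
        per_km = total_cost / trip_km
        owed = {p: 0.0 for p, _, _ in segments}
        for km in range(trip_km):
            aboard = [p for p, b, a in segments if b <= km < a]
            for p in aboard:
                owed[p] += per_km / (len(aboard) + 1)  # +1 for the driver
        return owed

    # A 100 km trip costing 20 (fuel plus tolls); two passengers on
    # partial, overlapping legs of the journey.
    print(split_costs(20.0, 100, [("anna", 0, 60), ("ben", 40, 100)]))
    # Each passenger owes about 5.33; the driver bears the remainder,
    # including the full cost of any kilometres driven alone.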
History

Carpooling first became prominent in the United States as a rationing tactic during World War II, when ridesharing began through "car clubs" or "car-sharing clubs". The US Office of Civilian Defense asked neighborhood councils to encourage four workers to share a ride in one car to conserve rubber for the war effort. It also created a ride-sharing program called the Car Sharing Club Exchange and Self-Dispatching System.

Carpooling returned in the mid-1970s due to the 1973 oil crisis and the 1979 energy crisis. At that time the first employee vanpools were organized at Chrysler and 3M. Carpooling declined precipitously between the 1970s and the 2000s, peaking in the US in 1970 with a commute mode share of 20.4%; by 2011 it was down to 9.7%. In large part this has been attributed to the dramatic fall in gas prices (45%) during the 1980s. In the 1990s carpooling was popular among college students, whose campuses have limited parking space.

Prof. James Davidson from Harvard, together with Dace Campbell, Ivan Lin and Habib Rached from Washington, and others, began to investigate the feasibility of further development, although the comprehensive technologies were not yet commercially available at the time. Their work is considered by many to be a forerunner of the carpooling and ridesharing systems technology used by Garrett Camp, Travis Kalanick, Oscar Salazar and Conrad Whelan at Uber.

The character of carpool travel has been shifting from the "Dagwood Bumstead" variety, in which each rider is picked up in sequence, to a "park and ride" variety, where all the travelers meet at a common location. More recently, the Internet has facilitated growth for carpooling, and the commute share mode grew to 10.7% in 2005. In 2007, with the advent of commercially available smartphones and GPS, John Zimmer and Logan Green, from Cornell University and the University of California, Santa Barbara respectively, rediscovered the idea and created a carpooling system called Zimride, a precursor to Lyft. The popularity of the Internet and smartphones has greatly helped carpooling to expand, enabling people to offer and find rides through easy-to-use and reliable online transport marketplaces. These websites are commonly used for one-off long-distance journeys with high fuel costs.

In Europe, long-distance carpooling has become increasingly popular over the past years, thanks to BlaBlaCar. According to its website, BlaBlaCar counted more than 80 million users, across Europe and beyond. Uber and Lyft have suspended carpooling services in the U.S. and Canada in efforts to control the COVID-19 pandemic via social distancing.

Other forms

Carpooling exists in other forms:

Slugging is a form of ad hoc, informal carpooling between strangers. No money changes hands, but a mutual benefit still exists between the driver and passenger(s), making the practice worthwhile.

Flexible carpooling expands the idea of ad hoc carpooling by designating formal locations for travelers to join carpools.

Ridesharing companies allow people to arrange ad hoc rides on very short notice, through the use of smartphone applications or the internet. Passengers are simply picked up at their current location.

Challenges

Flexibility - Carpooling can struggle to be flexible enough to accommodate en route stops or changes to working times/patterns. One survey identified this as the most common reason for not carpooling. To counter this, some schemes offer "sweeper services" with later running options, or a "guaranteed ride home" arrangement with a local taxi company.

Reliability - If a carpooling network lacks a "critical mass" of participants, it may be difficult to find a match for certain trips. The parties may also not follow through on the agreed-upon ride. Several internet carpooling marketplaces address this concern by implementing online paid passenger reservation, billed even if passengers do not turn up.

Riding with strangers - Concerns over security have been an obstacle to sharing a vehicle with strangers, though in reality the risk of crime is small.
One remedy used by internet carpooling schemes is reputation systems that flag problematic users and allow responsible users to build up trust capital; such systems greatly increase the value of the website for the user community.

Overall efficacy - Though carpooling is officially sanctioned by most governments, including through the construction of lanes specifically allocated for carpooling, some doubts remain as to its overall efficacy. As an example, many carpool lanes, or lanes restricted to carpools during peak traffic hours, are seldom occupied by carpools in the traditional sense. Instead, these lanes are often empty, leading to an overall net increase in fuel consumption: freeway capacity is effectively contracted, forcing the solo-occupied cars to travel slower, with reduced fuel efficiency. In 2012, the Queensland government announced it would end carpool lanes (known as Transit Lanes), claiming they were creating congestion and delays. The move was supported by the RACQ motoring group.

No carpooling service provides the ability for drivers to declare in advance the time range during which they provide services. Although the majority of carpooling services use a mobile application, this is not the case for interurban carpooling services (e.g., Ride joy and Autostrade carpooling). In addition, no carpooling service was found to guarantee a minimum delay for drivers or a single dropoff/pickup point. Some carpooling platforms (e.g., TwoGo and BlaBlaLines, operated by BlaBlaCar) use intelligent technology to analyze rides from all users to find the best fit for each user. This technology even factors in real-time traffic data to calculate precise routes and arrival times.

In popular culture

In the 1970s, the US Department of Transportation released a humorous, animated public service announcement to promote carpooling entitled "Kalaka". In the commercial, an interviewer is shown talking to Noah, "the original share-the-ride-with-a-friend man". Noah explains that carpooling is an economical way to get where you're going, but that back in his time it was known as "kalaka".

Cabbing All the Way is a book written by author Jatin Kuberkar that narrates the success story of a carpool with twelve people on board. Based in the city of Hyderabad, India, the book is a real-life narration and highlights the potential benefits of having a carpool.

The 2017 smartphone game Crazy Taxi Tycoon (formerly titled Crazy Taxi Gazillionaire) antagonizes ride-sharing as a threat to the taxi business, as it becomes a powerful megacorporation that rips off those whom it serves. The player is tasked with hiring taxi drivers to establish a taxi service that offers a more legitimate, friendly and reliable transport experience.

Carpool Karaoke is best known as a recurring segment from The Late Late Show with James Corden.
Technology
Motorized road transport
383186
https://en.wikipedia.org/wiki/Tile
Tile
Tiles are usually thin, square or rectangular coverings manufactured from hard-wearing material such as ceramic, stone, metal, baked clay, or even glass. They are generally fixed in place in an array to cover roofs, floors, walls, edges, or other objects such as tabletops. Alternatively, tile can sometimes refer to similar units made from lightweight materials such as perlite, wood, and mineral wool, typically used for wall and ceiling applications. In another sense, a tile is a construction tile or similar object, such as the rectangular counters used in playing games (see tile-based game). The word is derived from the French word tuile, which is, in turn, from the Latin word tegula, meaning a roof tile composed of fired clay.

Tiles are often used to form wall and floor coverings, and can range from simple square tiles to complex mosaics. Tiles are most often made of ceramic, typically glazed for internal uses and unglazed for roofing, but other materials are also commonly used, such as glass, cork, concrete and other composite materials, and stone. Tiling stone is typically marble, onyx, granite or slate. Thinner tiles can be used on walls than on floors, which require more durable surfaces that will resist impacts. Global production of ceramic tiles, excluding roof tiles, was estimated to be 12.7 billion m2 in 2019.

Decorative tile work and colored brick

Decorative tilework or tile art should be distinguished from mosaic, where forms are made of great numbers of tiny irregularly positioned tesserae, each of a single color, usually of glass or sometimes ceramic or stone. There are various tile patterns, such as herringbone, staggered, offset, grid, stacked, pinwheel, parquet de Versailles, basket weave, tile art, diagonal, chevron, and encaustic, which can range in size, shape, thickness, and color.

History

Several types of traditional tiles remain in manufacture, for example the small, almost mosaic, brightly colored zellij tiles of Morocco and the surrounding countries.

Ancient Middle East

The earliest evidence of glazed brick is the discovery of glazed bricks in the Elamite Temple at Chogha Zanbil, dated to the 13th century BC. Glazed and colored bricks were used to make low reliefs in Ancient Mesopotamia, most famously the Ishtar Gate of Babylon (), now partly reconstructed in Berlin, with sections elsewhere. Mesopotamian craftsmen were imported for the palaces of the Persian Empire such as Persepolis.

The use of sun-dried bricks, or adobe, was the main method of building in Mesopotamia, where river mud was found in abundance along the Tigris and Euphrates. Here the scarcity of stone may have been an incentive to develop the technology of making kiln-fired bricks to use as an alternative. To strengthen walls made from sun-dried bricks, fired bricks began to be used as an outer protective skin for more important buildings like temples, palaces, city walls, and gates. Making fired bricks is an advanced pottery technique. Fired bricks are solid masses of clay heated in kilns to temperatures of between 950 and 1,150 °C, and a well-made fired brick is an extremely durable object. Like sun-dried bricks, they were made in wooden molds, but for bricks with relief decorations special molds had to be made.
Ancient Indian subcontinent

Rooms with tiled floors made of clay decorated with geometric circular patterns have been discovered in the ancient remains of Kalibangan, Balakot and Ahladino.

Tiling was used in the second century by the Sinhalese kings of ancient Sri Lanka, using smoothed and polished stone laid on floors and in swimming pools. The techniques and tools for tiling were advanced, as evidenced by the fine workmanship and close fit of the tiles. Such tiling can be seen in Ruwanwelisaya and Kuttam Pokuna in the city of Anuradhapura. The nine-storied Lovamahapaya (3rd century BCE) had copper roof tiles. The roofs were tiled, with red, white, yellow, turquoise and brown tiles; there were also tiles made of bronze. Sigiriya also had an elaborate gatehouse made of timber and brick masonry with multiple tiled roofs, as the massive timber doorposts remaining today indicate.

Ancient Iran

The Achaemenid Empire decorated buildings with glazed brick tiles, including Darius the Great's palace at Susa and buildings at Persepolis. The succeeding Sassanid Empire used tiles patterned with geometric designs, flowers, plants, birds and human beings, glazed up to a centimeter thick.

Islamic

Early Islamic mosaics in Iran consist mainly of geometric decorations in mosques and mausoleums, made of glazed brick. Typical turquoise tiling became popular in the 10th–11th centuries and was used mostly for Kufic inscriptions on mosque walls. The Seyyed Mosque in Isfahan (AD 1122), the Dome of Maraqeh (AD 1147) and the Jame Mosque of Gonabad (AD 1212) are among the finest examples. The dome of the Jame' Atiq Mosque of Qazvin is also dated to this period.

The golden age of Persian tilework began during the Timurid Empire. In the moraq technique, single-color tiles were cut into small geometric pieces and assembled by pouring liquid plaster between them. After hardening, these panels were assembled on the walls of buildings. But the mosaic was not limited to flat areas: tiles were used to cover both the interior and exterior surfaces of domes. Prominent Timurid examples of this technique include the Jame Mosque of Yazd (AD 1324–1365), the Goharshad Mosque (AD 1418), the Madrassa of Khan in Shiraz (AD 1615), and the Molana Mosque (AD 1444). Other important tile techniques of this time include girih tiles, with their characteristic white girih, or straps.

Mihrabs, being the focal points of mosques, were usually the places where the most sophisticated tilework was placed. The 14th-century mihrab at the Madrasa Imami in Isfahan is an outstanding example of aesthetic union between the Islamic calligrapher's art and abstract ornament. The pointed arch framing the mihrab's niche bears an inscription in the Kufic script used in 9th-century Qur'ans.

One of the best-known architectural masterpieces of Iran is the Shah Mosque in Isfahan, from the 17th century. Its dome is a prime example of tile mosaic, and its winter praying hall houses one of the finest ensembles of cuerda seca tiles in the world. A wide variety of tiles had to be manufactured in order to cover complex forms of the hall with consistent mosaic patterns. The result was a technological triumph as well as a dazzling display of abstract ornament.

During the Safavid period, mosaic ornaments were often replaced by a haft rang (seven colors) technique: pictures were painted on plain rectangular tiles, which were glazed and fired afterwards. Besides economic reasons, the seven-color method gave more freedom to artists and was less time-consuming.
It was popular until the Qajar period, when the palette of colors was extended to include yellow and orange. The seven colors of haft rang tiles were usually black, white, ultramarine, turquoise, red, yellow and fawn.

The Persianate tradition continued and spread to much of the Islamic world, notably the İznik pottery of Turkey under the Ottoman Empire in the 16th and 17th centuries. Palaces, public buildings, mosques and türbe mausoleums were heavily decorated with large brightly colored patterns, typically with floral motifs, and friezes of astonishing complexity, including floral motifs and calligraphy as well as geometric patterns.

Islamic buildings in Bukhara in central Asia (16th–17th century) also exhibit very sophisticated floral ornaments. In South Asia, monuments and shrines adorned with Kashi tile work from Persia became a distinct feature of the shrines of Multan and Sindh. The Wazir Khan Mosque in Lahore stands out as one of the masterpieces of Kashi tile work from the Mughal period.

The zellige tradition of Arabic North Africa uses small colored tiles of various shapes to make very complex geometric patterns. It is halfway to mosaic, but as the different shapes must be fitted precisely together, it falls under tiling. The use of small coloured glass fields also makes it rather like enamelling, but with ceramic rather than metal as the support.

Europe

Medieval Europe made considerable use of painted tiles, sometimes producing very elaborate schemes, of which few have survived. Religious and secular stories were depicted. The imaginary tiles with Old Testament scenes shown on the floor in Jan van Eyck's 1434 Annunciation in Washington are an example. The 14th-century "Tring tiles" in the British Museum show childhood scenes from the Life of Christ, possibly for a wall rather than a floor, while its 13th-century "Chertsey Tiles", though from an abbey, show scenes of Richard the Lionheart battling with Saladin in very high-quality work. Medieval letter tiles were used to create Christian inscriptions on church floors.

Medieval influences between Middle Eastern tilework and tilework in Europe came mainly through Islamic Iberia and the Byzantine and Ottoman Empires. The Alhambra zellige are said to have inspired the tessellations of M. C. Escher.

Medieval encaustic tiles were made of multiple colours of clay, shaped and baked together to form a pattern that, rather than sitting on the surface, ran right through the thickness of the tile and thus would not wear away.

Azulejos are derived from zellij, and the name is likewise derived. The term is both a simple Portuguese and Spanish term for zellige, and a term for later tilework following the tradition. Some azulejos are small-scale geometric patterns or vegetative motifs, some are blue monochrome and highly pictorial, and some are neither. The Baroque period produced extremely large painted scenes on tiles, usually in blue and white, for walls. Azulejos were also used in Latin American architecture.

Delftware wall tiles, typically with a painted design covering only one (rather small) blue and white tile, were ubiquitous in Holland and widely exported over Northern Europe from the 16th century on, replacing many local industries. Several 18th-century royal palaces had porcelain rooms with the walls entirely covered in porcelain tiles or panels. Surviving examples include those at Capodimonte, Naples, the Royal Palace of Madrid and the nearby Royal Palace of Aranjuez.
The Victorian period saw a great revival in tilework, largely as part of the Gothic Revival, but also the Arts and Crafts Movement. Patterned tiles, or tiles making up patterns, were now mass-produced by machine, reliably level for floors and cheap to produce, especially for churches, schools and public buildings, but also for domestic hallways and bathrooms. For many uses the tougher encaustic tile was used. Wall tiles in various styles also revived; the rise of the bathroom contributed greatly to this, as did greater appreciation of the benefit of hygiene in kitchens. William De Morgan was the leading English designer working in tiles, strongly influenced by Islamic designs.

Since the Victorian period tiles have remained standard for kitchens and bathrooms, and many types of public area.

Panot is a type of outdoor cement tile and the associated paving style, both found in Barcelona. In 2010, around of Barcelona streets were panot-tiled.

Portugal and São Luís continue their tradition of azulejo tilework today, with tiles used to decorate buildings, ships, and even rocks.

Far East

With exceptions, notably the Porcelain Tower of Nanjing, decorated tiles or glazed bricks do not feature largely in East Asian ceramics.

Philippines

In the 17th century, during the Spanish colonization of the Philippines, the term Baldozas Mosaicos was introduced to describe Mediterranean cement tiles. Since the 19th century they have more commonly been referred to as Machuca tiles, named after Don Pepe, the son of Don Jose Machuca y Romero, a renowned producer of Baldozas Mosaicos in the Philippines.

Roof tiles

Roof tiles are overlapping tiles designed mainly to keep out precipitation such as rain or snow, and are traditionally made from locally available materials such as clay or slate. Later tiles have been made from materials such as concrete and plastic. Roof tiles can be affixed by screws or nails, but in some cases historic designs such as Marseilles tiles utilize interlocking systems that can be self-supporting. Tiles typically cover an underlayment system, which seals the roof against water intrusion.

Clay roof tiles historically gained their color purely from the clay that they were composed of, resulting in largely red, orange, and tan colored roofs. Over time some cultures, notably in Asia, began to apply glazes to clay tiles, achieving a wide variety of colors and combinations. Modern clay roof tiles typically source their color from kiln firing conditions, the application of glaze, or the use of a ceramic engobe. Contrary to popular belief, a glaze does not weatherproof a tile; the porosity of the clay body is what determines how well a tile will survive harsh weather conditions.

Floor tiles

These are commonly made of ceramic or stone, although recent technological advances have resulted in rubber or glass tiles for floors as well. Ceramic tiles may be painted and glazed. Small mosaic tiles may be laid in various patterns. Floor tiles are typically set into mortar consisting of sand, Portland cement and often a latex additive. The spaces between the tiles are commonly filled with sanded or unsanded floor grout, but traditionally mortar was used.

Natural stone tiles can be beautiful, but as a natural product they are less uniform in color and pattern, and require more planning for use and installation. Mass-produced stone tiles are uniform in width and length. Granite or marble tiles are sawn on both sides and then polished or finished on the top surface so that they have a uniform thickness.
Other natural stone tiles such as slate are typically "riven" (split) on the top surface so that the thickness of the tile varies slightly from one spot on the tile to another and from one tile to another. Variations in tile thickness can be handled by adjusting the amount of mortar under each part of the tile, by using wide grout lines that "ramp" between different thicknesses, or by using a cold chisel to knock off high spots.

Some stone tiles such as polished granite, marble, and travertine are very slippery when wet. Stone tiles with a riven surface such as slate, or with a sawn and then sandblasted or honed surface, are more slip-resistant. Ceramic tiles for use in wet areas can be made more slip-resistant by using very small tiles so that the grout lines act as grooves, by imprinting a contour pattern onto the face of the tile, or by adding a non-slip material, such as sand, to the glazed surface.

The hardness of natural stone tiles varies, such that some of the softer stone tiles (e.g. limestone) are not suitable for very heavy-traffic floor areas. On the other hand, ceramic tiles typically have a glazed upper surface, and when that becomes scratched or pitted the floor looks worn, whereas the same amount of wear on natural stone tiles will not show, or will be less noticeable.

Natural stone tiles can be stained by spilled liquids; they must be sealed and periodically resealed with a sealant, in contrast to ceramic tiles, which only need their grout lines sealed. However, because of the complex, nonrepeating patterns in natural stone, small amounts of dirt on many natural stone floor tiles do not show. The tendency of floor tiles to stain depends not only on a sealant being applied, and periodically reapplied, but also on the porosity of the stone. Slate is an example of a less porous stone, while limestone is an example of a more porous stone. Different granites and marbles have different porosities, with the less porous ones being more valued and more expensive.

Most vendors of stone tiles emphasize that there will be variation in color and pattern from one batch of tiles to another of the same description, and variation within the same batch. Stone floor tiles tend to be heavier than ceramic tiles and somewhat more prone to breakage during shipment.

Rubber floor tiles have a variety of uses, both in residential and commercial settings. They are especially useful in situations where it is desired to have high-traction floors or protection for an easily breakable floor. Some common uses include the flooring of garages, workshops, patios, swimming pool decks, sport courts, gyms, and dance floors.

Plastic floor tiles, including interlocking floor tiles that can be installed without adhesive or glue, are a recent innovation and are suitable for areas subject to heavy traffic, wet areas and floors that are subject to movement, damp or contamination from oil, grease or other substances that may prevent adhesion to the substrate. Common uses include old factory floors, garages, gyms and sports complexes, schools and shops.

Ceiling tiles

Ceiling tiles are lightweight tiles used inside buildings. They are placed in an aluminium grid; they provide little thermal insulation but are generally designed either to improve the acoustics of a room or to reduce the volume of air being heated or cooled.
Mineral fiber tiles are fabricated from a range of products: wet felt tiles can be manufactured from perlite, mineral wool, and fibers from recycled paper; stone wool tiles are created by combining molten stone and binders, which are then spun to create the tile; gypsum tiles are based on the soft mineral and then finished with vinyl, paper or a decorative face. Ceiling tiles very often have patterns on the front face; in most cases these are there to improve the tiles' acoustic performance. Ceiling tiles also provide a barrier to the spread of smoke and fire. Breaking, displacing, or removing ceiling tiles enables hot gases and smoke from a fire to rise and accumulate above detectors and sprinklers, delaying their activation and enabling fires to grow more rapidly. Ceiling tiles, especially in old Mediterranean houses, were made of terracotta and were placed on top of the wooden ceiling beams, with the roof tiles laid on top of them. They were then plastered or painted, but nowadays are usually left bare for decorative purposes. Modern-day tile ceilings may be flush mounted (nail up or glue up) or installed as dropped ceilings. Materials and processes Ceramic Ceramic materials for tiles include earthenware, stoneware and porcelain. Terracotta is a traditional material used for roof tiles. Porcelain tiles This is a US term, defined in ASTM standard C242 as a ceramic mosaic tile or paver that is generally made by dust-pressing and of a composition yielding a tile that is dense, fine-grained, and smooth, with a sharply formed face, usually impervious. The colours of such tiles are generally clear and bright. ISO 13006 defines a "porcelain tile" as a fully vitrified tile with water absorption less than or equal to 0.5%, belonging to groups AIa and BIa of that standard. ANSI defines it as a ceramic tile with a water absorption of 0.5% or less, generally made by the pressed or extruded method. Pebble Similar to mosaics or other patterned tiles, pebble tiles are tiles made up of small pebbles attached to a backing. The tile is generally designed in an interlocking pattern so that final installations of multiple tiles fit together to give a seamless appearance. A relatively new tile design, pebble tiles were originally developed in Indonesia using pebbles found in various locations in the country. Today, pebble tiles feature all types of stones and pebbles from around the world. Digital printed Printing techniques and digital manipulation of art and photography are used in what is known as "custom tile printing". Dye sublimation printers, inkjet printers, and ceramic inks and toners permit printing on a variety of tile types, yielding photographic-quality reproduction. Using digital image capture via scanning or digital cameras, bitmap/raster images can be prepared in photo editing software programs. Specialized custom-tile printing techniques permit transfer under heat and pressure, or the use of high-temperature kilns, to fuse the picture to the tile substrate. This has become a method of producing custom tile murals for kitchens, showers, and commercial decoration in restaurants, hotels, and corporate lobbies. Recent technology applied to digital ceramic and porcelain printers allows images to be printed with a wider color gamut and greater color stability, even when fired in a kiln at up to 2200 °F (about 1200 °C). Diamond etched A method for custom tile printing involving a diamond-tipped drill controlled by a computer.
Compared with laser engraving, diamond etching is more permanent in almost every circumstance. Mathematics of tiling Certain shapes of tiles, most obviously rectangles, can be replicated to cover a surface with no gaps. These shapes are said to tessellate (from the Latin tessella, 'tile') and such a tiling is called a tessellation. Geometric patterns in some Islamic polychrome decorative tilings are rather complicated (see Islamic geometric patterns and, in particular, Girih tiles), even up to supposedly quasiperiodic ones, similar to Penrose tilings.
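As a small illustration of the tessellation condition (an addition for context, not part of the source): copies of a regular n-gon can meet edge-to-edge around a vertex with no gap only if the polygon's interior angle divides 360 degrees, which a few lines of Python can check exactly.

from fractions import Fraction

def tiles_plane(n: int) -> bool:
    # Interior angle of a regular n-gon, in degrees, kept exact as a fraction.
    interior = Fraction(180 * (n - 2), n)
    # The n-gon tiles the plane edge-to-edge iff a whole number of copies
    # fits around a vertex, i.e. iff the interior angle divides 360 evenly.
    return (Fraction(360) / interior).denominator == 1

print([n for n in range(3, 13) if tiles_plane(n)])  # prints [3, 4, 6]

Only the triangle, square and hexagon pass, which is one reason richer designs such as the Girih and Penrose patterns combine more than one tile shape.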
D-subminiature
The D-subminiature or D-sub is a common type of electrical connector. They are named for their characteristic D-shaped metal shield. When they were introduced, D-subs were among the smallest connectors used on computer systems. Description, nomenclature, and variants A D-sub contains two or more parallel rows of pins or sockets usually surrounded by a D-shaped metal shield, or shell, that provides mechanical support, ensures correct orientation, and may screen against electromagnetic interference. Calling that shield a shell (or D-shell) can be ambiguous, as the term shell is also short for the cable shell, or backshell. D-sub connectors have gender: parts with pin contacts are called male connectors or plugs, while those with socket contacts are called female connectors or sockets. The socket's shield fits tightly inside the plug's shield. Panel-mounted connectors usually have #4-40 UNC (as designated under the Unified Thread Standard) jackscrews that accept screws on the cable-end connector cover, used for locking the connectors together and providing mechanical strain relief; they can be tightened with a 3/16" (or 5 mm) hex socket. The hexagonal standoffs (4-40 bolts) at both sides of each connector have a threaded stud fastening the connectors to the metal panel. They also have threaded sockets to receive jackscrews on the cable shell, holding the plug and socket together. Occasionally the nuts may instead be found on a cable-end connector if it is expected to connect to another cable end. When screened cables are used, the shields are connected to the overall screens of the cables. This creates an electrically continuous screen covering the whole cable and connector system. The D-sub series of connectors was introduced by Cannon in 1952. Cannon's part-numbering system uses D as the prefix for the whole series, followed by one of A, B, C, D, or E denoting the shell size, followed by the number of pins or sockets, followed by either P (plug or pins) or S (socket) denoting the gender of the part. Each shell size usually (see below for exceptions) corresponds to a certain number of pins or sockets: A with 15, B with 25, C with 37, D with 50, and E with 9. For example, DB-25 denotes a D-sub with a B-size (25-position) shell and a 25-position contact configuration. The contacts in each row of these connectors are spaced 326/3000 of an inch apart, or approximately 2.76 mm, and the rows are spaced a similar distance apart; the pins in the two rows are offset by half the distance between adjacent contacts in a row. This spacing is called normal density. The suffixes M and F (for male and female) are sometimes used instead of the original P and S for plug and socket. Variants Later D-sub connectors added extra pins to the original shell sizes, and their names follow the same pattern. For example, the DE-15, usually found in VGA cables, has 15 pins in three rows, all surrounded by an E-size shell. The pins are spaced more closely, both horizontally and vertically, in what is called high density. The other connectors with the same pin spacing are the DA-26, DB-44, DC-62, DD-78 and DF-104. They all have three rows of pins, except the DD-78, which has four, and the DF-104, which has five rows in a new, larger shell. The double density series of D-sub connectors features even denser arrangements and consists of the DE-19, DA-31, DB-52, DC-79, and DD-100. These each have three rows of pins, except the DD-100, which has four.
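The shell-letter and pin-count correspondences above tabulate neatly. The sketch below is illustrative only (the table and function names are mine, and the DF-104's larger F shell is left out since it falls outside the A–E series); it composes Cannon-style part numbers from a shell size, a density, and a gender letter.

# Pins per shell size for the three densities described above.
# Normal-density contact pitch: 326/3000 in x 25.4 mm/in = approx. 2.76 mm.
PINS = {
    "normal": {"A": 15, "B": 25, "C": 37, "D": 50, "E": 9},
    "high":   {"A": 26, "B": 44, "C": 62, "D": 78, "E": 15},
    "double": {"A": 31, "B": 52, "C": 79, "D": 100, "E": 19},
}

def part_number(shell: str, density: str = "normal", gender: str = "P") -> str:
    # Cannon scheme: 'D' prefix, shell letter, pin count, then P (plug) or S (socket).
    return f"D{shell}-{PINS[density][shell]}{gender}"

print(part_number("B", gender="S"))      # DB-25S: the classic 25-pin socket
print(part_number("E", density="high"))  # DE-15P: the familiar VGA plug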
Common misnomers The above naming pattern was not always followed. Because personal computers first used DB-25 connectors for their serial and parallel ports, when the PC serial port began to use 9-pin connectors, these were often labeled DB-9 instead of DE-9 connectors, because it was not widely understood that the B denoted a shell size. It is now common to see DE-9 connectors sold as DB-9 connectors; DB-9 nearly always refers to a 9-pin connector with an E-size shell. The non-standard 23-pin D-sub connectors for external floppy drives and video output on most of the Amiga computers are usually labeled DB-23, even though their shell size is two pins smaller than ordinary DB sockets. Several computers also used a non-standard 19-pin D-sub connector, sometimes called DB-19, including the Macintosh (external floppy drive), Atari ST (external hard drive), and NeXT (Megapixel Display monitor and laser printer). Reflecting the same confusion of the letters DB with just D as mentioned above, high-density connectors are also often called DB-15HD (or even DB-15 or HD-15), DB-26HD (HD-26), DB-44HD, DB-62HD, and DB-78HD connectors, respectively, where HD stands for high density. Cannon also produced combo D-subs with larger contacts in place of some of the normal contacts, for use with high-current, high-voltage, or coaxial inserts. The DB13W3 variant was commonly used for high-performance video connections; this variant provided 10 regular (#20) pins plus three coaxial contacts for the red, green, and blue video signals. Combo D-subs are currently manufactured in a broad range of configurations by other companies. Some variants have current ratings of up to 40 A; others are waterproof and meet IP67 standards. A further family of connectors of similar appearance to the D-sub family uses names such as HD-50 and HD-68, and has a D-shaped shell about half the width of a DB-25. They are common in SCSI attachments. The original D-sub connectors are now defined by an international standard, IEC 60807-3 / DIN 41652. The United States military also maintains another specification for D-subminiature connectors, the MIL-DTL-24308 standard. Micro-D and Nano-D Smaller connectors have been derived from the D-sub, including the microminiature D (Micro-D) and nanominiature D (Nano-D), which are trademarks of ITT Cannon. Micro-D is about half the length of a D-sub and Nano-D is about half the length of Micro-D. Their primary applications are in military and space-grade technology, such as SpaceWire networks. The MIL-SPEC for Micro-D is MIL-DTL-83513 and for Nano-D is MIL-DTL-32139. Typical applications Communications ports The widest application of D-subs is for RS-232 serial communications, though the standard did not make this connector mandatory. RS-232 devices originally used the DB-25, but for many applications the less common signals were omitted, allowing a DE-9 to be used. The standard specifies a male connector for terminal equipment and a female connector for modems, but many variations exist. IBM PC-compatible computers tend to have male connectors at the device and female connectors at the modems. Early Apple Macintosh models used DE-9 connectors for RS-422 multi-drop serial interfaces (which can operate as RS-232). Later Macintosh models use 8-pin miniature DIN connectors instead. On PCs, 25-pin and (beginning with the IBM PC/AT) 9-pin plugs were used for the RS-232 serial ports; 25-pin sockets were used for parallel ports (instead of the Centronics port found on the printer itself, which was inconveniently large for direct placement on the expansion cards).
Many uninterruptible power supply units have a DE-9F connector in order to signal to the attached computer via an RS-232 interface. Often these do not send data serially to the computer but instead use the handshaking control lines to indicate low battery, power failure, or other conditions. Such usage is not standardized between manufacturers and may require special cables. Network ports DE-9 connectors were used for some Token Ring networks as well as other computer networks. Originally, in the 1980s, Ethernet network interface cards or devices were connected using Attachment Unit Interface (AUI) cables to Medium Attachment Units, which then connected to 10BASE5 and later 10BASE2 or 10BASE-T network cabling. The AUI cables used DA-15 connectors, albeit with a sliding latch to lock the connectors together instead of the usual hex studs with threaded holes. The sliding latch was intended to be quicker to engage and disengage and to work in places where jackscrews could not be used for reasons of component shape. In vehicles, DE-9 connectors are commonly used in Controller Area Networks (CAN): female connectors are on the bus, while male connectors are on devices. Computer video output DE-9 connectors A female 9-pin connector on an IBM compatible personal computer may be a digital RGBI video display output such as MDA, Hercules, CGA, or EGA (rarely VGA or others). Even though these all use the same DE-9 connector, the displays cannot all be interchanged, and monitors or video interfaces may be damaged if connected to an incompatible device using the same connector. DE-15 connectors Later analog video (VGA and later) adapters generally replaced DE-9 connectors with DE-15 high-density sockets (though some early VGA devices still used DE-9 connectors). DE-15 connectors have the same shell size as DE-9 connectors (see above). The additional pins of the DE-15 VGA connector were used to add increasingly sophisticated monitor-sensing plug-and-play functionality. DA-15 connectors Many Apple Macintosh models, beginning with the Macintosh II, used DA-15 sockets for analog RGB video out. These connectors had the same number of pins as the above DE-15 connectors, but used the more traditional pin size, pin spacing, and shell size of the standard DA-15 connector. "VGA adapters" (i.e. DA-15 to DE-15 dongles) were available but were sometimes monitor-specific or needed DIP-switch configuration, as the Macintosh's monitor-sense pins in particular were not identical to a VGA connector's DDC. The earlier Apple IIGS used the same physical DA-15 connector for the same purpose but with an incompatible pinout. A digital (and thus also incompatible) RGB adapter for the Apple IIe also used a DA-15F. The Apple IIc used a DA-15F for an auxiliary video port which was not RGB but provided the necessary signals to derive RGB. Game controller ports DE-9 connectors The 1977 Atari Video Computer System game console uses modified DE-9 connectors (male on the system, female on the cable) for its game controller connections. The Atari joystick ports have bodies entirely of molded plastic without the metal shield, and they omit the pair of fastening screws. In the years following, various video game consoles and home computers adopted the same connector for their own game ports, though they were not all interoperable.
The most common wiring supported five connections for discrete signals (five switches, for up, down, left, and right movement, and a fire button), plus one pair of 100 kΩ potentiometers, or paddles, for analog input. Some computers supported additional buttons, and on some computers additional devices, such as a computer mouse, a light pen, or a graphics tablet, were also supported via the game port. Unlike the basic one-button digital joysticks and the basic paddles, such devices were not typically interchangeable between different systems. Systems using the DE-9 connector for their game port include the TI-99/4A, Atari 8-bit computers, Atari ST, Atari 7800, VIC-20, Commodore 64, Commodore 128, Amiga, Amstrad CPC (which employs daisy-chaining when connecting two Amstrad-specific joysticks), MSX, X68000, FM Towns, ColecoVision, SG-1000, Master System, Mega Drive/Genesis, and the 3DO Interactive Multiplayer. The ZX Spectrum lacks a built-in joystick connector of any kind, but aftermarket interfaces provided the ability to connect DE-9 joysticks. NEC's home computers (e.g. PC-88, PC-98) also used DE-9 connectors for game controllers, depending on the sound card used. The Fairchild Channel F System II and Bally Astrocade use DE-9 connectors for their detachable joysticks as well. Both are incompatible with the Atari connector. Many Apple II computers also use DE-9 connectors for joysticks, but they have a female port on the computer and a male on the controller, use analog rather than digital sticks, and the pinout is completely unlike that used on the aforementioned systems. DE-9 connectors were not used for game ports on the Macintosh, Apple III, IBM PC compatibles, or most game consoles outside the aforementioned examples. Sega switched to proprietary controller ports for the Saturn and Dreamcast. DA-15 connectors DA-15S connectors are used for PC joystick ports, where each DA-15 connector supports two joysticks, each with two analog axes and two buttons. In other words, one DA-15S game adapter connector has 4 analog potentiometer inputs and 4 digital switch inputs; the conventional software view of these inputs is sketched below. This interface is strictly input-only, though it does provide +5 V DC power. Some joysticks with more than two axes or more than two buttons use the signals designated for both joysticks. Conversely, Y-adapter cables are available that allow two separate joysticks to be connected to a single DA-15 game adapter port; if a joystick connected to one of these Y-adapters has more than two axes or buttons, only the first two of each will work. The IBM DA-15 PC game connector has been modified to add a (usually MPU-401 compatible) MIDI interface, and this is often implemented in the game connectors on third-party sound cards, for example the Sound Blaster line from Creative Labs. The standard straight game adapter connector (introduced by IBM) has three ground pins and four +5 V power pins, and the MIDI adaptation, introduced by Creative Labs, replaces one of the grounds and one of the +5 V pins, both on the bottom row of pins, with MIDI In and MIDI Out signal pins. (There is no MIDI Thru provided.)
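As a rough sketch of how software conventionally saw this interface on IBM compatibles: a read of I/O port 0x201 (the standard game adapter address) returns the four button states directly in the high bits, while each potentiometer position is recovered by timing how long its low bit stays set after a write to the port. The bit layout below is the standard game-adapter convention; the decoding helper and its names are mine, and real code would need low-level port I/O rather than a plain integer.

GAME_PORT = 0x201  # conventional I/O address of the IBM PC game adapter

def decode_status(status_byte: int) -> dict:
    # Bits 0-3: one-shot timer outputs for the four analog axes
    # (joystick A X/Y, then joystick B X/Y). A write to the port starts
    # the timers; each bit stays 1 for a time proportional to its
    # potentiometer's resistance, which software measures in a loop.
    # Bits 4-7: the four buttons, active low (0 = pressed).
    return {
        "axis_still_timing": [bool(status_byte >> i & 1) for i in range(4)],
        "button_pressed": [not (status_byte >> i & 1) for i in range(4, 8)],
    }

print(decode_status(0b01110001))
# {'axis_still_timing': [True, False, False, False],
#  'button_pressed': [False, False, False, True]}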
The Neo Geo AES game console also used the DA-15 connector; however, the pins are wired differently, so it is not compatible with regular DA-15 PC game controllers. The Nintendo Famicom's controllers were hardwired, but the console also included a DA-15 expansion port for additional controllers. Many clones of the hardware used a DA-15 port that implemented a subset of the Famicom expansion port and were therefore compatible with some Famicom accessories. Later clones switched to the cheaper DE-9 port. The Atari 5200 also used a DA-15 instead of the DE-9 of its predecessor, to accommodate the matrix for its keypad. The Atari Falcon, Atari STe and Atari Jaguar used a DE-15. Other uses The 25-pin sockets on Macintosh computers are typically single-ended SCSI connectors, combining all signal returns into one contact (again in contrast to the Centronics C50 connector typically found on the peripheral, which supplies a separate return contact for each signal), while older Sun hardware uses DD-50 connectors for Fast-SCSI equipment. As SCSI variants from Ultra2 onwards used differential signaling, the Macintosh DB-25 SCSI interface became obsolete. D-subminiature connectors are often used in industrial products, the DA-15 version being commonly used on rotary and linear encoders. The early Macintosh and late Apple II computers used a non-standard 19-pin D-sub for connecting external floppy disk drives. Atari also used this connector on their 16-bit computer range for attaching hard disk drives and the Atari laser printer, where it was known as both the ACSI (Atari Computer System Interface) port and the DMA bus port. The Commodore Amiga used an equally non-standard 23-pin version for both its video output (male) and its port for daisy-chaining up to three extra external floppy disk drives (female). In professional audio, several connections use DB-25 connectors: TASCAM and many others use a connection over DB-25 connectors which has been standardized as AES59. This connection transports AES3 digital audio or analog audio using the same pinout. TASCAM initially used its TDIF connection over DB-25 connectors for its multitrack recording audio equipment; the transported signals are not AES3 compatible. Roland used DB-25 connectors for its multitrack recording audio equipment (R-BUS). A few patch panels have been made with DB-25 connectors on the back and phone jacks (or even TRS phone connectors) on the front; however, these are normally wired for TASCAM, which is more common outside of broadcasting. In broadcast and professional video, parallel digital is a digital video interface that uses DB-25 connectors, per the SMPTE 274M specification adopted in the late 1990s. The more common SMPTE 259M serial digital interface (SDI) uses BNC connectors for digital video signal transfer. DC-37 connectors are commonly used in hospital facilities as an interface between hospital beds and nurse call systems, allowing for the connection and signaling of Nurse Call, Bed Exit, and Cord Out functions, as well as TV entertainment and lighting controls. The comparatively rare DC-37 connector was also found as the so-called "GeekPort" electronics experimentation breakout connector on the even rarer BeBox computer. DB-25 connectors are commonly used to carry analog signals for beam displacement and color control to laser projectors, as specified in the ISP-DB25 protocol published by the International Laser Display Association. Wire-contact attachment types There are many different methods used to attach wires to the contacts in D-sub connectors. Solder-bucket (or solder-cup) contacts have a cavity into which the stripped wire is inserted and hand-soldered.
Insulation displacement contacts (IDCs) allow a ribbon cable to be forced onto sharp tines on the back of the contacts; this action pierces the insulation of all the wires simultaneously. This is a very quick means of assembly, whether done by hand or machine. Crimp contacts are assembled by inserting a stripped wire end into a cavity in the rear of the contact, then crushing the cavity using a crimp tool, causing the cavity to grip the wire tightly at many points. The crimped contact is then inserted into the connector, where it locks into place. Individual crimped pins can be removed later by inserting a special tool into the rear of the connector. PCB pins are soldered directly to a printed circuit board rather than to a wire. Traditionally, plated through-hole pins were used, but increasingly gull-wing surface-mount (SMD) connections are used, although the latter frequently exhibit solder-pad contact problems when exposed to mechanical stress. These connectors are frequently mounted at a right angle to the PCB, allowing a cable to be plugged into the edge of the PCB assembly. Wire-wrap connections are made by wrapping solid wire around a square post with a wire-wrap tool. This type of connection is often used in developing prototypes. The wire-wrap and IDC connection styles had to contend with pin spacing that did not match the ribbon-cable pitch or the prototyping-board grid, especially for larger pin counts.
Thermoproteota
The Thermoproteota are prokaryotes that have been classified as a phylum of the domain Archaea. Initially, the Thermoproteota were thought to be sulfur-dependent extremophiles, but recent studies have identified characteristic Thermoproteota environmental rRNA indicating that the organisms may be the most abundant archaea in the marine environment. Originally, they were separated from the other archaea based on rRNA sequences; other physiological features, such as lack of histones, have supported this division, although some crenarchaea were found to have histones. Until 2005 all cultured Thermoproteota had been thermophilic or hyperthermophilic organisms, some of which have the ability to grow at up to 113 °C. These organisms stain Gram negative and are morphologically diverse, having rod, cocci, filamentous and oddly-shaped cells. Recent evidence shows that some members of the Thermoproteota are methanogens. Thermoproteota were initially classified as part of Regnum Eocyta in 1984, but this classification has been discarded. The term "eocyte" now applies either to TACK (formerly Crenarchaeota) or to Thermoproteota. Sulfolobus One of the best characterized members of the Crenarchaeota is Sulfolobus solfataricus. This organism was originally isolated from geothermally heated sulfuric springs in Italy, and grows at 80 °C and a pH of 2–4. Since its initial characterization by Wolfram Zillig, a pioneer in thermophile and archaean research, similar species in the same genus have been found around the world. Unlike the vast majority of cultured thermophiles, Sulfolobus grows aerobically and chemoorganotrophically (gaining its energy from organic sources such as sugars). These factors allow much easier growth under laboratory conditions than for anaerobic organisms, and have led to Sulfolobus becoming a model organism for the study of hyperthermophiles and of the large group of diverse viruses that replicate within them. Recombinational repair of DNA damage Irradiation of S. solfataricus cells with ultraviolet light strongly induces the formation of type IV pili, which can then promote cellular aggregation. Ultraviolet light-induced cellular aggregation was shown by Ajon et al. to mediate high-frequency intercellular chromosome marker exchange. Cultures that were ultraviolet light-induced had recombination rates exceeding those of uninduced cultures by as much as three orders of magnitude. S. solfataricus cells are only able to aggregate with other members of their own species. Frols et al. and Ajon et al. considered that the ultraviolet light-inducible DNA transfer process, followed by homologous recombinational repair of damaged DNA, is an important mechanism for promoting chromosome integrity. This DNA transfer process can be regarded as a primitive form of sexual interaction. Marine species Beginning in 1992, data were published reporting sequences of genes belonging to the Thermoproteota in marine environments. Since then, analysis of the abundant lipids from the membranes of Thermoproteota taken from the open ocean has been used to determine the concentration of these "low temperature Crenarchaea" (see TEX86). Based on these measurements of their signature lipids, Thermoproteota are thought to be very abundant and one of the main contributors to the fixation of carbon.
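For context, the TEX86 proxy mentioned above is a ratio of isoprenoid glycerol dialkyl glycerol tetraether (GDGT) membrane lipids. As commonly defined in the paleothermometry literature (the formula itself is supplied here for reference and is not part of the source text):

\mathrm{TEX}_{86} = \frac{[\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren}']}{[\mathrm{GDGT\text{-}1}] + [\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren}']}

where Cren' denotes the crenarchaeol regioisomer; higher values of the ratio correspond to warmer sea-surface temperatures.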
DNA sequences from Thermoproteota have also been found in soil and freshwater environments, suggesting that this phylum is ubiquitous in most environments. In 2005, evidence of the first cultured "low temperature Crenarchaea" was published. Named Nitrosopumilus maritimus, it is an ammonia-oxidizing organism isolated from a marine aquarium tank and grown at 28 °C. Possible connections with eukaryotes Research on the two-domain system of classification has raised the possibility of an evolutionary connection between crenarchaea and eukaryotes. DNA analyses from 2008 and, later, 2017 have indicated that eukaryotes possibly evolved from thermoproteota-like organisms. Other candidates for the ancestor of eukaryotes include the closely related asgards. This suggests that eukaryotic organisms possibly evolved from prokaryotes. These results are similar to the eocyte hypothesis proposed by James A. Lake in 1984, according to which both crenarchaea and asgards belong to Kingdom Eocyta. Though this classification has been discarded by scientists, the main concept remains, and the term "Eocyta" now refers either to the TACK group or to the phylum Thermoproteota itself. However, the topic is highly debated and research is still ongoing.
Mil Mi-24
The Mil Mi-24 (NATO reporting name: Hind) is a large helicopter gunship, attack helicopter and low-capacity troop transport with room for eight passengers. It is produced by Mil Moscow Helicopter Plant and was introduced by the Soviet Air Force in 1972. The helicopter is currently in use by 58 countries. In NATO circles, the export versions, Mi-25 and Mi-35, are denoted with a letter suffix as "Hind D" and "Hind E". Soviet pilots called the Mi-24 the "flying tank", a term used historically with the famous World War II Soviet Il-2 Shturmovik armored ground-attack aircraft. Other common unofficial nicknames were "Galina" (or "Galya"), "Crocodile", due to the helicopter's camouflage scheme, and "Drinking Glass", because of the flat glass plates that surround earlier Mi-24 variants' cockpits. Development During the early 1960s, it became apparent to Soviet designer Mikhail Mil that the trend towards ever-increasing battlefield mobility would result in the creation of flying infantry fighting vehicles, which could be used to perform both fire support and infantry transport missions. The first expression of this concept was a mock-up unveiled in 1966 in the experimental department of the Ministry of Aircraft's factory number 329, where Mil was head designer. The mock-up, designated V-24, was based on another project, the V-22 utility helicopter, which never flew. The V-24 had a central infantry compartment that could hold eight troops sitting back to back, and a set of small wings positioned to the top rear of the passenger cabin, capable of holding up to six missiles or rockets and a twin-barreled GSh-23L cannon fixed to the landing skid. Mil proposed the design to the heads of the Soviet armed forces. While he had the support of a number of strategists, he was opposed by several more senior members of the armed forces, who believed that conventional weapons were a better use of resources. Despite the opposition, Mil managed to persuade the defence minister's first deputy, Marshal Andrey A. Grechko, to convene an expert panel to look into the matter. While the panel's opinions were mixed, supporters of the project eventually held sway and a request for design proposals for a battlefield support helicopter was issued. The development and use of gunships and attack helicopters by the US Army during the Vietnam War convinced the Soviets of the advantages of armed helicopter ground support, and fostered support for the development of the Mi-24. Mil engineers prepared two basic designs: a 7-ton single-engine design and a 10.5-ton twin-engine design, both based on the 1,700 hp Izotov TV3-177A turboshaft. Later, three complete mock-ups were produced, along with five cockpit mock-ups to allow the pilot and weapon station operator positions to be fine-tuned. The Kamov design bureau suggested an army version of their Ka-25 ASW helicopter as a low-cost option. This was considered but later dropped in favor of the new Mil twin-engine design. A number of changes were made at the insistence of the military, including the replacement of the 23 mm cannon with a rapid-fire heavy machine gun mounted in a chin turret, and the use of the 9K114 Shturm (AT-6 Spiral) anti-tank missile. A directive was issued on 6 May 1968 to proceed with the development of the twin-engine design. Work proceeded under Mil until his death in 1970. Detailed design work began in August 1968 under the codename Yellow 24. A full-scale mock-up of the design was reviewed and approved in February 1969.
Flight tests with a prototype began on 15 September 1969 with a tethered hover, and four days later the first free flight was conducted. A second prototype was built, followed by a test batch of ten helicopters. Acceptance testing for the design began in June 1970, continuing for 18 months. Changes made in the design addressed structural strength, fatigue problems and vibration levels. Also, a 12-degree anhedral was introduced to the wings to address the aircraft's tendency to Dutch roll at speeds in excess of 200 km/h (124 mph), and the Falanga missile pylons were moved from the fuselage to the wingtips. The tail rotor was moved from the right to the left side of the tail, and the rotation direction reversed. The tail rotor now rotated up on the side towards the front of the aircraft, into the downwash of the rotor, which increased its efficiency. A number of other design changes were made until the production version, the Mi-24A (izdeliye 245), entered production in 1970; it obtained its initial operating capability in 1971 and was officially accepted into the state arsenal in 1972. In 1972, following completion of the Mi-24, development began on a unique attack helicopter with transport capability. The new design, with a reduced transport capability (three troops instead of eight), became the Mi-28; the Ka-50 attack helicopter, which is smaller and more maneuverable, dispensed with the large cabin for carrying troops altogether. In October 2007, the Russian Air Force announced it would replace its Mi-24 fleet with Mi-28Ns and Ka-52s by 2015. However, after the successful operation of the type in Syria, it was decided to keep it in service and upgrade it with new electronics, sights, arms and night-vision goggles. Design Overview The core of the aircraft was derived from the Mil Mi-8 (NATO reporting name "Hip"), with two top-mounted turboshaft engines driving a mid-mounted five-blade main rotor and a three-blade tail rotor. The engine configuration gave the aircraft its distinctive double air intake. Original versions have an angular greenhouse-style cockpit; Model D and later have a characteristic tandem cockpit with a "double bubble" canopy. Other airframe components came from the Mi-14 "Haze". Two mid-mounted stub wings provide weapon hardpoints, each offering three stations, in addition to providing lift. The loadout mix is mission-dependent; Mi-24s can be tasked with close air support, anti-tank operations, or aerial combat. The Mi-24's titanium rotor blades are resistant to 12.7 mm (.50 caliber) rounds. The cockpit is protected by ballistic-resistant windscreens and a titanium-armored tub. The cockpit and crew compartment are overpressurized to protect the crew in NBC conditions. Flight characteristics Considerable attention was given to making the Mi-24 fast. The airframe was streamlined and fitted with retractable tricycle landing gear to reduce drag. At high speed, the wings provide considerable lift (up to a quarter of total lift). The main rotor was tilted 2.5° to the right from the fuselage to compensate for translating tendency at a hover. The landing gear was also tilted to the left so that the rotor would still be level when the aircraft was on the ground, making the rest of the airframe tilt to the left. The tail was also asymmetrical, to give a side force at speed and thus unload the tail rotor. A modified Mi-24B, named A-10, was used in several speed and time-to-climb world record attempts.
The helicopter had been modified to reduce weight as much as possible; one measure was the removal of the stub wings. The previous official speed record had been set on 13 August 1975 over a closed course; many of the female-specific records were set by the all-female crew of Galina Rastorguyeva and Lyudmila Polyanskaya. On 21 September 1978, the A-10 set the absolute speed record for helicopters over a 15/25 km course. The record stood until 1986, when it was broken by the current official record holder, a modified British Westland Lynx. Comparison to Western helicopters As a combination of armoured gunship and troop transport, the Mi-24 has no direct NATO counterpart. While UH-1 ("Huey") helicopters were used by the US in the Vietnam War either to ferry troops or as gunships, they were not able to do both at the same time. Converting a UH-1 into a gunship meant stripping the entire passenger area to accommodate extra fuel and ammunition, removing its troop transport capability. The Mi-24 was designed to do both, and this was greatly exploited by airborne units of the Soviet Army during the 1980–89 Soviet–Afghan War. The closest Western equivalent was the American Sikorsky S-67 Blackhawk, which used many of the same design principles and was also built as a high-speed, high-agility attack helicopter with limited troop transport capability, using many components from the existing Sikorsky S-61. The S-67, however, was never adopted for service. Other Western equivalents are the Romanian Army's IAR 330, which is a licence-built armed version of the Aérospatiale SA 330 Puma, and the MH-60 Direct Action Penetrator, a special-purpose armed variant of the Sikorsky UH-60 Black Hawk. Operational history Ogaden War (1977–1978) The first combat use of the Mi-24 was with the Ethiopian forces during the Ogaden War against Somalia. The helicopters formed part of a massive airlift of military equipment from the Soviet Union, after the Soviets switched sides towards the end of 1977. The helicopters were instrumental in the combined air and ground assault that allowed the Ethiopians to retake the Ogaden by the beginning of 1978. Chadian–Libyan conflict (1978–1987) The Libyan air force used Mi-24A and Mi-25 units during their numerous interventions in Chad's civil war. The Mi-24s were first used in October 1980 in the battle of N'Djamena, where they helped the People's Armed Forces seize the capital. In March 1987, the Armed Forces of the North, which were backed by the US and France, captured a Libyan air force base at Ouadi-Doum in northern Chad. Among the aircraft captured during this raid were three Mi-25s. These were supplied to France, which in turn sent one to the United Kingdom and one to the US. Soviet war in Afghanistan (1979–1989) The aircraft was operated extensively during the Soviet–Afghan War, mainly for bombing Mujahideen fighters. When the U.S. supplied heat-seeking Stinger missiles to the Mujahideen, the Soviet Mi-8 and Mi-24 helicopters proved to be favorite targets of the rebels. The total number of Mi-24s used in Afghanistan is difficult to establish. At the end of 1990, the whole Soviet Army had 1,420 Mi-24s. During the Afghan war, sources estimated the in-theatre helicopter strength to be as much as 600 units, with up to 250 being Mi-24s, whereas a (formerly secret) 1987 Central Intelligence Agency (CIA) report says that the number of Mi-24s in theatre increased from 85 in 1980 to 120 in 1985.
First deployment and combat In April 1979, Mi-24s were supplied to the Afghan government to deal with Mujahideen guerrillas. The Afghan pilots were well-trained and made effective use of their machines, but the Mujahideen were not easy targets. The first Mi-24 to be lost in action was shot down by guerrillas on 18 July 1979. Despite facing strong resistance from Afghan rebels, the Mi-24 proved to be very destructive. The rebels called the Mi-24 "Shaitan-Arba" ("Satan's Chariot"). In one case, an Mi-24 pilot who was out of ammunition managed to rescue a company of infantry by maneuvering aggressively towards Mujahideen guerrillas and scaring them off. The Mi-24 was popular with ground troops, since it could stay on the battlefield and provide fire as needed, while "fast mover" strike jets could only stay for a short time before heading back to base to refuel. The Mi-24's favoured munition was the S-8 rocket, the S-5 having proven too light to be effective. The gun pod was also popular. Extra rounds of rocket ammunition were often carried internally so that the crew could land and self-reload in the field. The Mi-24 could carry ten iron bombs for attacks on camps or strongpoints, while harder targets could be dealt with using a load of four or two heavier iron bombs. Some Mi-24 crews became experts at dropping bombs precisely on targets. Fuel-air explosive bombs were also used in a few instances, though crews initially underestimated the sheer blast force of such weapons and were caught by the shock waves. The 9K114 Shturm was used infrequently, largely because early in the war there was a lack of targets requiring the precision and range the missile offered, and because of a need to conserve stocks of anti-tank missiles for Europe. After the Mujahideen gained access to more advanced anti-aircraft weapons later in the war, the Shturm was used more often by Mi-24 units. Combat experience quickly demonstrated the disadvantages of having an Mi-24 carry troops. Gunship crews found the soldiers a concern and a distraction while being shot at, and preferred to fly lightly loaded anyway, especially given the high altitudes at which they operated in Afghanistan. Mi-24 troop compartment armour was often removed to reduce weight. Troops would be carried in Mi-8 helicopters while the Mi-24s provided fire support. It proved useful to carry a technician in the Mi-24's crew compartment to handle a light machine gun in a window port. This gave the Mi-24 some ability to "watch its back" while leaving a target area. In some cases, a light machine gun was fitted on both sides to allow the technician to move from one side to the other without having to take the machine gun with him. This weapon configuration still left the gunship blind to the direct rear, and Mil experimented with fitting a machine gun in the back of the fuselage, accessible to the gunner through a narrow crawl-way. The experiment was highly unsuccessful, as the space was cramped, full of engine exhaust fumes, and otherwise unbearable. During a demonstration, an overweight Soviet Air Force general got stuck in the crawl-way. Operational Mi-24s were retrofitted with rear-view mirrors to help the pilot spot threats and take evasive action. Besides protecting helicopter troop assaults and supporting ground actions, the Mi-24 also protected convoys, using rockets with flechette warheads to drive off ambushes; performed strikes on predesignated targets; and engaged in "hunter-killer" sweeps.
Hunter-killer Mi-24s operated in pairs at a minimum, but more often in groups of four or eight, to provide mutual fire support. The Mujahideen learned to move mostly at night to avoid the gunships, and in response the Soviets trained their Mi-24 crews in night-fighting, dropping parachute flares to illuminate potential targets for attack. The Mujahideen quickly caught on and scattered as quickly as possible when Soviet target designation flares were lit nearby. Attrition in Afghanistan The war in Afghanistan brought with it losses by attrition. The environment itself, dusty and often hot, was rough on the machines; the dusty conditions led to the development of the twin PZU ("PyleZashchitnoe Ustroystvo") air intake filters. The rebels' primary air-defence weapons early in the war were heavy machine guns and anti-aircraft cannons, though anything smaller than a 23-millimetre shell generally did not do much damage to an Mi-24. The cockpit glass panels were resistant to 12.7 mm (.50 in calibre) rounds. The rebels also quickly began to use Soviet-made and US shoulder-launched, man-portable air-defense system (MANPADS) missiles such as the Strela and Redeye, which had either been captured from the Soviets or their Afghan allies or been supplied from Western sources. Many of them came from stocks that the Israelis had captured during wars with Soviet-backed states in the Middle East. Owing to a combination of the limited capabilities of these early missiles, poor training, and the poor material condition of the missiles, they were not particularly effective. Instead, the RPG-7, originally developed as an anti-tank weapon, was the first effective countermeasure to the Hind. The RPG-7, not designed for air defence, had inherent shortcomings in this role. When fired at the angles needed to hit aerial targets, the back-blast could easily wound the shooter, and the inevitable cloud of smoke and dust made it easy for gunners to spot the shooter's position. From 1986, the CIA began supplying the Afghan rebels with newer Stinger shoulder-launched, heat-seeking SAMs. These were a marked improvement over earlier weapons. Unlike the Redeye and SA-7, which locked onto infrared emissions only, the Stinger could lock onto both infrared and ultraviolet emissions. This enabled the operator to engage an aircraft from all angles rather than just the tail, and made the missile significantly more resistant to countermeasures like flares. In addition, the Mil helicopters, particularly the Mi-24, suffered from a design flaw in the configuration of their engines that made them highly vulnerable to the Stinger. The Mi-24, along with the related Mi-8 and Mi-17 helicopters, had its engines placed in an inline configuration in an attempt to streamline the helicopter, increasing speed and minimizing the aircraft's frontal profile against incoming fire in a head-on attack. However, this had the side effect of venting all the exhaust gases from the Mi-24's engines directly out the sides of the aircraft, away from the helicopter's rotor wash, creating two massive sources of heat and ultraviolet radiation for the Stinger to lock onto. The inline placement of the engines was seen as so problematic in this regard that Mil designers abandoned the configuration on the planned successor to the Mi-24, the Mil Mi-28, in favour of an engine placement more akin to that of Western attack helicopters, which vents the exhaust gases into the helicopter's main rotor wash to dissipate heat.
Initially, the attack doctrine of the Mi-24 was to approach its target from high altitude and dive downwards. After the introduction of the Stinger, doctrine changed to "nap-of-the-earth" flying: crews approached very low to the ground and engaged more laterally, popping up only briefly to aim rockets or cannons. Countermeasure flares and missile warning systems were installed in all Soviet Mil Mi-2, Mi-8, and Mi-24 helicopters, giving pilots a chance to evade missiles fired at them. Heat dissipation devices were also fitted to the exhausts to decrease the Mi-24's heat signature. Tactical and doctrinal changes were introduced to make it harder for the enemy to deploy these weapons effectively. These measures reduced the Stinger threat, but did not eliminate it. Mi-24s were also used to shield jet transports flying in and out of Kabul from Stingers. The gunships carried flares to blind the heat-seeking missiles. The crews called themselves "Mandatory Matrosovs", after a Soviet hero of World War II who threw himself across a German machine gun to let his comrades break through. According to Russian sources, 74 helicopters were lost, including 27 shot down by Stingers and two by Redeyes. In many cases, the helicopters, with their armour and durable construction, could withstand significant damage and were able to return to base. Mi-24 crews and end of Soviet involvement Mi-24 crews carried AK-74 assault rifles and other hand-held weapons to give them a better chance of survival if forced down. Early in the war, Marat Tischenko, head of the Mil design bureau, visited Afghanistan to see what the troops thought of his helicopters, and gunship crews put on several displays for him. They even demonstrated manoeuvres, such as barrel rolls, which design engineers considered impossible. An astounded Tischenko commented, "I thought I knew what my helicopters could do, now I'm not so sure!" The last Soviet Mi-24 was shot down during the night of 2 February 1989, with both crewmen killed. It was also the last Soviet helicopter lost during nearly 10 years of warfare. Mi-24s in Afghanistan after Soviet withdrawal Mi-24s passed on to Soviet-backed Afghan forces during the war remained in dwindling service in the grinding civil war that continued after the Soviet withdrawal. Afghan Air Force Mi-24s in the hands of the ascendant Taliban gradually became inoperable, but a few flown by the Northern Alliance, which had Russian assistance and access to spares, remained operational up to the US invasion of Afghanistan in late 2001. In 2008, the Afghan Air Force took delivery of six refurbished Mi-35 helicopters, purchased from the Czech Republic. The Afghan pilots were trained by India and began live-firing exercises in May 2009 in order to escort Mi-17 transport helicopters on operations in restive parts of the country. Iran–Iraq War (1980–1988) The Mi-25 saw considerable use by the Iraqi Army during the long war against Iran. Its heavy armament caused severe losses to Iranian ground forces during the war. However, the Mi-25 lacked an effective anti-tank capability, as it was only armed with obsolete 9M17 Skorpion missiles. This led the Iraqis to develop new gunship tactics, with help from East German advisors. The Mi-25s would form "hunter-killer" teams with French-built Aérospatiale Gazelles, the Mi-25s leading the attack and using their massive firepower to suppress Iranian air defences, and the Gazelles using their HOT missiles to engage armoured fighting vehicles.
These tactics proved effective in halting Iranian offensives, such as Operation Ramadan in July 1982. This war also saw the only confirmed air-to-air helicopter battles in history, with the Iraqi Mi-25s flying against Iranian AH-1J SeaCobras (supplied by the United States before the Iranian Revolution) on several separate occasions. In November 1980, not long after Iraq's initial invasion of Iran, two Iranian SeaCobras engaged two Mi-25s with TOW wire-guided anti-tank missiles. One Mi-25 went down immediately; the other was badly damaged and crashed before reaching base. The Iranians repeated this accomplishment on 24 April 1981, destroying two Mi-25s without incurring losses to themselves. One Mi-25 was also downed by an IRIAF F-14A. The Iraqis hit back, claiming the destruction of a SeaCobra on 14 September 1983 (with the YaKB machine gun), then three SeaCobras on 5 February 1984 and three more on 25 February 1984 (two with Falanga missiles, one with S-5 rockets). A 1982 news article published in the Iraqi Observer claimed that an Iraqi Mi-24D shot down an Iranian F-4 Phantom II using its armament, either anti-tank missiles, guns or S-5 unguided rockets. After a lull in helicopter losses, each side lost a gunship on 13 February 1986. Later, an Mi-25 claimed a SeaCobra shot down with the YaKB gun on 16 February, and a SeaCobra claimed an Mi-25 shot down with rockets on 18 February. The last engagement between the two types was on 22 May 1986, when Mi-25s shot down a SeaCobra. The final claim tally was 10 SeaCobras and 6 Mi-25s destroyed. The relatively small numbers involved, and the inevitable disputes over actual kill numbers, make it unclear whether one gunship had a real technical superiority over the other. Iraqi Mi-25s also claimed 43 kills against other Iranian helicopters, such as Agusta-Bell UH-1 Hueys. In general, the Iraqi pilots liked the Mi-25, in particular for its high speed, long range, high versatility and large weapon load, but disliked its relatively ineffectual anti-tank guided weapons and lack of agility. Nicaraguan civil war (1980–1988) Mi-25s were also used by the Nicaraguan Army during the civil war of the 1980s. Nicaragua received 12 Mi-25s (some sources claim 18) in the mid-1980s to deal with "Contra" insurgents. The Mi-25s performed ground attacks on the Contras and were also fast enough to intercept light aircraft being used by the insurgents. The U.S. Reagan Administration regarded the introduction of the Mi-25s as a major escalation of tensions in Central America. Two Mi-25s were shot down by Stingers fired by the Contras. A third Mi-25 was damaged while pursuing Contras near the Honduran border, when it was intercepted by Honduran F-86 Sabres and A-37 Dragonflies. A fourth was flown to Honduras by a defecting Sandinista pilot in December 1988. Sri Lankan Civil War (1987–2009) The Indian Peace Keeping Force (1987–90) in Sri Lanka used Mi-24s when an Indian Air Force detachment was deployed there in support of the Indian and Sri Lankan armed forces in their fight against various Tamil militant groups, such as the Liberation Tigers of Tamil Eelam (LTTE). It is believed that Indian losses were considerably reduced by the heavy fire support from their Mi-24s. The Indians lost no Mi-24s in the operation, as the Tigers had no weapons capable of downing the gunship at the time. Since 14 November 1995, the Mi-24 has been used by the Sri Lanka Air Force in the war against the LTTE and has proved highly effective at providing close air support for ground forces.
The Sri Lanka Air Force operates a mix of Mi-24/-35P and Mi-24V/-35 versions attached to its No. 9 Attack Helicopter Squadron. They have recently been upgraded with modern Israeli FLIR and electronic warfare systems. Five were upgraded to intercept aircraft by adding radar, fully functional helmet-mounted target tracking systems, and AAMs. More than five Mi-24s have been lost to LTTE MANPADS, and another two were lost in attacks on air bases, with one heavily damaged but later returned to service. Peruvian operations (1989–present) The Peruvian Air Force received 12 Mi-25Ds and 2 Mi-25DUs from the Soviets in 1983, 1984, and 1985, after ordering them in the aftermath of the 1981 Paquisha conflict with Ecuador. Seven more second-hand units (4 Mi-24D and 3 Mi-25D) were obtained from Nicaragua in 1992. These have been permanently based at the Vitor airbase near La Joya ever since, operated by the 2nd Air Group of the 211th Air Squadron. Their first deployment occurred in June 1989, during the war against Communist guerrillas in the Peruvian highlands, mainly the Shining Path. The conflict continues but has decreased in scale, and is now limited to the jungle areas of the Valley of the Rivers Apurímac, Ene and Mantaro (VRAEM). Persian Gulf War (1991) The Mi-24 was also heavily employed by the Iraqi Army during their invasion of Kuwait, although most were withdrawn by Saddam Hussein when it became apparent that they would be needed to help retain his grip on power in the aftermath of the war. In the ensuing 1991 uprisings in Iraq, these helicopters were used against dissidents as well as fleeing civilian refugees. Sierra Leone Civil War (1991–2002) Three Mi-24Vs owned by Sierra Leone and flown by South African military contractors, including Neall Ellis, were used against RUF rebels. In 1995, they helped drive the RUF from the capital, Freetown. Neall Ellis also piloted an Mi-24 during the British-led Operation Barras against the West Side Boys. Guinea also used its Mi-24s against the RUF on both sides of the border and was alleged to have provided air support to the LURD insurgency in northern Liberia in 2001–03. Croatian War of Independence (1990s) Twelve Mi-24s were delivered to Croatia in 1993, and were used effectively in 1995 by the Croatian Army in Operation Storm against the Army of Krajina. The Mi-24 was used to strike deep into enemy territory and disrupt Krajina army communications. One Croatian Mi-24 crashed near the city of Drvar, Bosnia and Herzegovina, due to strong winds. Both the pilot and the operator survived. The Mi-24s used by Croatia were obtained from Ukraine. One Mi-24 was modified to carry Mark 46 torpedoes. The helicopters were withdrawn from service in 2004. First and Second Wars in Chechnya (1990s–2000s) During the First and Second Chechen Wars, beginning in 1994 and 1999 respectively, Mi-24s were employed by the Russian armed forces. In the first year of the Second Chechen War, 11 Mi-24s were lost by Russian forces, about half of them as a result of enemy action. Cenepa War (1995) Peru employed Mi-25s against Ecuadorian forces during the short Cenepa conflict in early 1995. The only loss occurred on 7 February, when a FAP Mi-25 was downed after being hit in quick succession by at least two, probably three, 9K38 Igla shoulder-fired missiles during a low-altitude mission over the Cenepa valley. The three crewmen were killed. By 2011, two Mi-35Ps had been purchased from Russia to reinforce the 211th Air Squadron.
Sudanese Civil War (1995–2005) In 1995, the Sudanese Air Force acquired six Mi-24s for use in Southern Sudan and the Nuba mountains to engage the SPLA. At least two aircraft were lost in non-combat situations within the first year of operation. A further twelve were bought in 2001 and used extensively in the oil fields of Southern Sudan. Mi-24s were also deployed to Darfur in 2004–05. First and Second Congo Wars (1996–2003) Three Mi-24s were used by Mobutu's army and were later acquired by the new Air Force of the Democratic Republic of the Congo. These were supplied to Zaire in 1997 as part of a French-Serbian contract. At least one was flown by Serbian mercenaries. One hit a power line and crashed on 27 March 1997, killing the three crew members and four passengers. Zimbabwean Mi-24s were also operated in coordination with the Congolese Army. The United Nations peacekeeping mission employed Indian Air Force Mi-24/-35 helicopters to provide support during the Second Congo War; the IAF has been operating in the region since 2003. Kosovo War (1998–1999) Two second-hand Mi-24Vs procured from Ukraine earlier in the 1990s were used by the Yugoslav Special Operation Unit (JSO) against Kosovo Albanian rebels during the Kosovo War. Insurgency in Macedonia (2001) The Macedonian military acquired used Ukrainian Mi-24Vs, which were then used frequently against Albanian insurgents during the 2001 insurgency in Macedonia (now North Macedonia). The main areas of action were in Tetovo, Radusha and Aracinovo. Ivorian Civil War (2002–2004) During the Ivorian Civil War, five Mil Mi-24s piloted by mercenaries were used in support of government forces. They were later destroyed by the French Army in retaliation for an air attack on a French base that killed nine soldiers. War in Afghanistan (2001–2021) In 2008 and 2009, the Czech Republic donated six Mi-24s under the ANA Equipment Donation Programme. As a result, the Afghan National Army Air Corps (ANAAC) gained the ability to escort its own helicopters with heavily armed attack helicopters. ANAAC operates nine Mi-35s. Major Caleb Nimmo, a United States Air Force pilot, was the first American to fly the Mi-35 Hind, or any Russian helicopter, in combat. On 13 September 2011, an Mi-35 of the Afghan Air Force was used to hold back an attack on ISAF and police buildings. The Polish Helicopter Detachment contributed Mi-24s to the International Security Assistance Force (ISAF). The Polish pilots trained in Germany before deploying to Afghanistan and trained with U.S. service personnel. On 26 January 2011, one Mi-24 caught fire during take-off from its base in Ghazni. One American and four Polish soldiers evacuated unharmed. India has also donated Mi-35s to Afghanistan. Four helicopters were to be supplied, with three already transferred by January 2016. The three Mi-35s made a big difference in the offensive against militants, according to General John Campbell, commander of US forces in Afghanistan. Iraq War (2003–2011) The Polish contingent in Iraq used six Mi-24Ds after December 2004. One of them crashed on 18 July 2006 at an air base in Al Diwaniyah. The Polish Mi-24Ds used in Iraq were not returned to Poland, owing to their age, condition, the low combat value of the Mi-24D variant, and high shipping costs; depending on their condition, they were transferred to the new Iraqi Army or scrapped. War in Somalia (2006–2009) The Ethiopian Air Force operated about three Mil Mi-35 and ten Mil Mi-24D helicopter gunships in the Somali theatre.
One was shot down near Mogadishu International Airport on 30 March 2007 by Somali insurgents.

2008 Russo-Georgian War

Mil Mi-24s were used by both sides during the fighting in South Ossetia. Georgian Air Force Mi-24s attacked their first targets in the early morning hours of 8 August, striking the Ossetian presidential palace; the second target was a cement factory near Tskhinvali, where major enemy forces and ammunition were concentrated. The last combat mission of the Georgian Mi-24s came on 11 August, when a large Russian convoy of light trucks and BMP infantry fighting vehicles heading for the Georgian village of Avnevi was attacked and completely destroyed. The Georgian Air Force lost two Mi-24s at Senaki air base, destroyed on the ground by Russian troops; both helicopters were non-operational at the time.

The Russian Army made heavy use of Mi-24s in the conflict. Upgraded Russian Mi-24PNs were credited with destroying two Georgian T-72SIM1 tanks with guided missiles at night, though some sources attribute those kills to the Mil Mi-28. The Russian Army lost no Mi-24s during the conflict, mainly because the helicopters were deployed to areas where Georgian air defence was not active, though some were damaged by small-arms fire and at least one Mi-24 was lost for technical reasons.

War in Chad (2008)

On returning to Abeche, one of the Chadian Mi-35s made a forced landing at the airport. It was claimed that it had been shot down by rebels.

Libyan Civil War (2011)

Libyan Air Force Mi-24s were used by both sides to attack enemy positions during the 2011 Libyan Civil War. A number were captured by the rebels, who formed the Free Libyan Air Force together with other captured air assets. During the battle for Benina airport, one Mi-35 (serial number 853) was destroyed on the ground on 23 February 2011; in the same action, serial number 854 was captured by the rebels together with an Mi-14 (serial number 1406). Two Mi-35s operating for the pro-Gaddafi Libyan Air Force were destroyed on the ground on 26 March 2011 by French aircraft enforcing the no-fly zone. One Free Libyan Air Force Mi-25D (serial number 854, captured at the beginning of the revolt) violated the no-fly zone on 9 April 2011 to strike loyalist positions in Ajdabiya, and was shot down by Libyan ground forces during the action. The pilot, Captain Hussein Al-Warfali, died in the crash. The rebels claimed that a number of other Mi-25s were shot down.

2010–2011 Ivorian crisis

Ukrainian Army Mi-24P helicopters serving with the United Nations peacekeeping force fired four missiles at a pro-Gbagbo military camp in Abidjan, Ivory Coast's main city.

Syrian Civil War (2011–present)

The Syrian Air Force has used Mi-24s during the ongoing Syrian Civil War, including in many of the country's major cities. Controversy has surrounded an alleged delivery of Mi-25s to the Syrian military, as Turkey and other NATO members have disallowed such arms shipments through their territory. On 3 November 2016, a Russian Mi-35 made an emergency landing near the city of Palmyra and was hit and destroyed, most likely by an unguided recoilless weapon, after it touched down. The crew returned safely to Khmeimim air base.

Second Kachin conflict (2011–present)

The Myanmar Air Force has used the Mi-24 in the Kachin conflict against the Kachin Independence Army. Two Mi-35 helicopters were shot down by the Kachin Independence Army during heavy fighting in the mountains of northern Burma in 2012 and early 2013.
On the morning of 3 May 2021, a Myanmar Air Force Mi-35 was shot down by the Kachin Independence Army, hit by a MANPADS during air raids involving attack helicopters and fighter jets. A video emerged showing the helicopter being hit while flying over a village.

Post-U.S. Iraqi insurgency

Iraq ordered a total of 34 Mi-35Ms in 2013 as part of an arms deal with Russia that also included Mi-28 attack helicopters. The delivery of the first four was announced by then-Prime Minister Nuri al-Maliki in November 2013. Their first deployment began in late December, against camps of the al-Qaeda-linked Islamic State of Iraq and the Levant (ISIL) and other Islamist militants in al-Anbar province who had taken control of several areas of Fallujah and Ramadi. FLIR footage of the strikes was released by the military. On 3 October 2014, ISIL militants reportedly used an FN-6 shoulder-launched missile at Baiji to shoot down an Iraqi Army Mi-35M attack helicopter. Video footage released by ISIL shows at least two more Iraqi Mi-35s brought down by light anti-aircraft artillery.

Balochistan insurgency (2012–present)

In 2018, Pakistan received four Mi-35M gunships from Russia under a $153 million deal. They are stationed at the Army Aviation Corps base at Quetta Cantonment, and have since been used in several counter-insurgency operations against militant groups in Pakistan's Balochistan province. In early 2022, a base in Nushki and a checkpoint in Panjgur belonging to the Frontier Corps Balochistan paramilitary were attacked by BLA militants. The attack in Nushki was swiftly repulsed, but the situation in Panjgur deteriorated, and Mi-35 Hind and AH-1F Cobra gunships were called in; they provided much-needed ground support and reconnaissance in the successful counter-offensive.

Russian annexation of Crimea (2014)

During the annexation of Crimea by the Russian Federation, Russia deployed 13 Mi-24s to support its infantry as they advanced through the region. These aircraft saw no combat during their deployment, however.

War in Donbas (2014–2022)

During the Siege of Sloviansk, on 2 May 2014, two Ukrainian Mi-24s were shot down by pro-Russian insurgents. The Ukrainian armed forces stated that they were downed by MANPADS while on patrol close to Sloviansk. The Ukrainian government confirmed that both aircraft had been shot down, along with an Mi-8 damaged by small-arms fire. Initial reports mentioned two dead and others wounded; later, five crew members were confirmed dead, and one was taken prisoner until his release on 5 May. On 5 May 2014, another Ukrainian Mi-24 was forced to make an emergency landing after being hit by machine-gun fire while on patrol close to Sloviansk. Ukrainian forces recovered the two pilots and destroyed the helicopter with a rocket strike from an Su-25 to prevent its capture by pro-Russian insurgents. Ukrainian Su-25s, with MiG-29 fighters providing top cover, supported Mi-24s during the battle for Donetsk Airport. On 13 October 2018, a Ukrainian Mi-24 shot down an Orlan-10 UAV with cannon fire near Lysychansk.

Chadian offensive against Boko Haram (2015)

Chadian Mi-24s were used during the 2015 West African offensive against Boko Haram.
Azerbaijan–Karabakh (2014–2016, 2020)

On 12 November 2014, Azerbaijani forces shot down an Armenian Mi-24, one of a pair flying along the disputed border close to the frontline between Azerbaijani and Armenian troops in the disputed Karabakh territory. The helicopter was hit by an Igla-S shoulder-launched missile fired by Azerbaijani soldiers while flying at low altitude, and crashed, killing all three on board. On 2 April 2016, during a clash between Azerbaijani and Armenian forces, an Azerbaijani Mi-24 helicopter was shot down by Nagorno-Karabakh forces; the downing was confirmed by the Azerbaijani defence ministry. On 9 November 2020, during the Nagorno-Karabakh war, a Russian Mi-24 was shot down by Azerbaijani forces with a MANPADS. The Azerbaijani Foreign Ministry stated that the downing was an accident. Two crew members were killed and one sustained moderate injuries. The Russian defence ministry confirmed the downing in a press release the same day.

Russian invasion of Ukraine (2022–present)

During the 2022 Russian invasion of Ukraine, both Ukraine and Russia have used Mi-24 helicopters. On 1 March 2022, Ukrainian forces shot down a Russian Mi-35M helicopter with a MANPADS over the Kyiv Reservoir (see also Battle of Kyiv); on 5 May 2022, the wreck was retrieved by Ukrainian engineers at Vyshgorod. Two Russian Mi-35s were shot down by MANPADS on 5 March 2022. On 6 March, an Mi-24P with registration number RF-94966 was shot down by a Ukrainian MANPADS in Kyiv Oblast. On 8 March 2022, one Ukrainian Mil Mi-24 was lost over Brovary, Kyiv Oblast; pilots Col. Oleksandr Maryniak and Capt. Ivan Bezzub were killed. On 17 March, a Russian Mi-35M was reported destroyed by the Ukrainian Ministry of Defence at an unspecified location. On 1 April 2022, two Ukrainian Mi-24s reportedly entered Russia and attacked an oil storage facility in Belgorod.

In May 2022, the Czech Republic donated Mi-24 helicopters to Ukraine. In July 2023, it was reported that Poland had secretly donated at least a dozen Mi-24s to Ukraine. As of 29 August 2024, visually confirmed losses compiled by the Oryx blog stood at 4 Mi-24P, 4 Mi-24V/P/35M and 10 Mi-35M on the Russian side, and 2 Mi-24P and 5 Mi-24 of unknown variant on the Ukrainian side.
Variants

Operators

Afghan Air Force - 8 Mi-25s as of 2021
Algerian Air Force - 30 Mi-24MKIIIs as of 2024
Angolan Air Force - 15 Mi-35s as of 2024
Armenian Air Force - 20 Mi-35s as of 2024
Azerbaijani Air Forces - 23 Mi-24Vs and 25 Mi-35s as of 2024
Belarus Air Force - 25 Mi-35s as of 2024
Bulgarian Air Force - 6 Mi-24Vs (6 Mi-24D Hind Ds in store)
Burkina Faso Air Force - 25 Mi-35s as of 2023
National Defence Force (Burundi) - 2 Mi-35s as of 2012
Chadian Air Force - 3 Mi-35s as of 2024
Congolese Air Force - 1 Mi-35 as of 2024
Congolese Democratic Air Force - 8 Mi-35s as of 2024
Cuban Air Force - 4 Mi-35s as of 2024
Djibouti Air Force - 2 Mi-35s as of 2024
Egyptian Air Force - 13 Mi-24Vs as of 2024
Eritrean Air Force - 6 Mi-35s as of 2024
Ethiopian Air Force - 6 Mi-35s as of 2024
Georgian Air Force - 9 Mi-24s as of 2024
Guinean Air Force - 3 Mi-25s as of 2024
Hungarian Air Force - 6 Mi-24Vs and 2 Mi-24Ps
Indian Air Force - 15 Mi-25/35s as of 2023
Indonesian Army - 7 Mi-35Ps
Iraqi Army Aviation - 15 Mi-35s
Military of Kazakhstan - 12 Mi-35Ms as of 2024
Military of Kyrgyzstan - 2 Mi-24Vs as of 2023
Libyan Air Force as of 2019
Air Force of Mali - 7 Mi-35Ms as of 2024
Mozambique Air Force - 2 Mi-25s as of 2023
Myanmar Air Force - 24 Mi-35Ps
Namibian Air Force - 2 Mi-35s as of 2023
Air Force of Niger - 1 Mi-35 as of 2024
Nigerian Air Force - 15 Mi-35s as of 2024
Pakistan Army - 4 Mi-35M3s as of 2022
Peruvian Air Force - 16 Mi-35s as of 2024
Polish Land Forces - 16 Mi-24D/Vs
Russian Aerospace Forces - 96 Mi-24D/V/Ps, 56 Mi-35Ps
Russian Navy - 8 Mi-24Ps
Border Service of Russia
Rwandan Air Force - 5 Mi-35s as of 2024
Serbian Air Force - 2 Mi-24s, 4 Mi-35Ms
Senegalese Air Force - 3 Mi-35s as of 2023
Sierra Leone Air Wing - 2 Mi-35s as of 2023
Sri Lanka Air Force - 9 Mi-35Vs
Sudanese Air Force - 35 Mi-35s as of 2023
Syrian Air Force - 27 Mi-25s as of 2023
Tajik Air Force - 6 Mi-25s as of 2022
Military of Turkmenistan as of 2019
Ugandan Air Force - 6 Mi-35s as of 2024
Ukrainian Ground Forces - 45 Mi-24s
United States Air Force - (used for aggressor training)
Uzbekistan Air and Air Defence Forces - 33 Mi-35s
Army of Venezuela - 9 Mi-35s
Yemen Air Force - 14 Mi-35s as of 2024
Air Force of Zimbabwe - 6 Mi-35s as of 2024

Former operators

Artsakh Defence Army
Brazilian Air Force
Croatian Air Force
Cypriot National Guard – sold to Serbia in November 2023
Czech Air Force – retired and transferred to Ukraine in August 2023
Czechoslovakian Air Force
Equatorial Guinean Air Force
East German Air Force – transferred to Germany on reunification
German Army – inherited from East Germany in 1990, retired in 1993
Kampuchean People's Revolutionary Air Force
Fuerza Aérea Sandinista
Air Force of North Macedonia
Slovakian Air Force
People's Democratic Republic of Yemen Air Force
Soviet Air Force – transferred to successor states
Special Operations Unit (JSO)
Transnistria Air Force
Vietnam People's Air Force

Possible operators

Korean People's Army Air and Anti-Air Force - possibly 20 Mi-35s as of 2024, though it may operate none, with the claims traceable to an error by the Congressional Research Service.

Aircraft on display

Mi-24 helicopters can be seen in the following museums:

Specifications (Mi-24)

Popular culture

The Mi-24 has appeared in several films and has been a common feature in many video games.