59,217
https://en.wikipedia.org/wiki/Quadratic%20formula
In elementary algebra, the quadratic formula is a closed-form expression describing the solutions of a quadratic equation. Other ways of solving quadratic equations, such as completing the square, yield the same solutions. Given a general quadratic equation of the form , with representing an unknown, and coefficients , , and representing known real or complex numbers with , the values of satisfying the equation, called the roots or zeros, can be found using the quadratic formula, where the plus–minus symbol "" indicates that the equation has two roots. Written separately, these are: The quantity is known as the discriminant of the quadratic equation. If the coefficients , , and are real numbers then when , the equation has two distinct real roots; when , the equation has one repeated real root; and when , the equation has no real roots but has two distinct complex roots, which are complex conjugates of each other. Geometrically, the roots represent the values at which the graph of the quadratic function , a parabola, crosses the -axis: the graph's -intercepts. The quadratic formula can also be used to identify the parabola's axis of symmetry. Derivation by completing the square The standard way to derive the quadratic formula is to apply the method of completing the square to the generic quadratic equation . The idea is to manipulate the equation into the form for some expressions and written in terms of the coefficients; take the square root of both sides; and then isolate . We start by dividing the equation by the quadratic coefficient , which is allowed because is non-zero. Afterwards, we subtract the constant term to isolate it on the right-hand side: The left-hand side is now of the form , and we can "complete the square" by adding a constant to obtain a squared binomial . In this example we add to both sides so that the left-hand side can be factored (see the figure): Because the left-hand side is now a perfect square, we can easily take the square root of both sides: Finally, subtracting from both sides to isolate produces the quadratic formula: Equivalent formulations The quadratic formula can equivalently be written using various alternative expressions, for instance which can be derived by first dividing a quadratic equation by , resulting in , then substituting the new coefficients into the standard quadratic formula. Because this variant allows re-use of the intermediately calculated quantity , it can slightly reduce the arithmetic involved. Square root in the denominator A lesser known quadratic formula, first mentioned by Giulio Fagnano, describes the same roots via an equation with the square root in the denominator (assuming ): Here the minus–plus symbol "" indicates that the two roots of the quadratic equation, in the same order as the standard quadratic formula, are This variant has been jokingly called the "citardauq" formula ("quadratic" spelled backwards). When has the opposite sign as either or , subtraction can cause catastrophic cancellation, resulting in poor accuracy in numerical calculations; choosing between the version of the quadratic formula with the square root in the numerator or denominator depending on the sign of can avoid this problem. See below. This version of the quadratic formula is used in Muller's method for finding the roots of general functions. It can be derived from the standard formula from the identity , one of Vieta's formulas. 
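Written out explicitly in conventional notation (a standard rendering rather than a quotation of this article, taking the general equation to be \(ax^2 + bx + c = 0\) with \(a \neq 0\)), the expressions referred to above are
\[
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},
\qquad
\Delta = b^2 - 4ac \ \text{(the discriminant)},
\qquad
x = \frac{2c}{-b \mp \sqrt{b^2 - 4ac}} \ \text{(square root in the denominator)},
\]
and the Vieta identity connecting the two forms is \(x_1 x_2 = c/a\).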
Alternately, it can be derived by dividing each side of the equation by to get , applying the standard formula to find the two roots , and then taking the reciprocal to find the roots of the original equation. Other derivations Any generic method or algorithm for solving quadratic equations can be applied to an equation with symbolic coefficients and used to derive some closed-form expression equivalent to the quadratic formula. Alternative methods are sometimes simpler than completing the square, and may offer interesting insight into other areas of mathematics. Completing the square by Śrīdhara's method Instead of dividing by to isolate , it can be slightly simpler to multiply by instead to produce , which allows us to complete the square without need for fractions. Then the steps of the derivation are: Multiply each side by . Add to both sides to complete the square. Take the square root of both sides. Isolate . Applying this method to a generic quadratic equation with symbolic coefficients yields the quadratic formula: This method for completing the square is ancient and was known to the 8th–9th century Indian mathematician Śrīdhara. Compared with the modern standard method for completing the square, this alternate method avoids fractions until the last step and hence does not require a rearrangement after step 3 to obtain a common denominator in the right side. By substitution Another derivation uses a change of variables to eliminate the linear term. Then the equation takes the form in terms of a new variable and some constant expression , whose roots are then . By substituting into , expanding the products and combining like terms, and then solving for , we have: Finally, after taking a square root of both sides and substituting the resulting expression for back into the familiar quadratic formula emerges: By using algebraic identities The following method was used by many historical mathematicians: Let the roots of the quadratic equation be and . The derivation starts from an identity for the square of a difference (valid for any two complex numbers), of which we can take the square root on both sides: Since the coefficient , we can divide the quadratic equation by to obtain a monic polynomial with the same roots. Namely, This implies that the sum and the product . Thus the identity can be rewritten: Therefore, The two possibilities for each of and are the same two roots in opposite order, so we can combine them into the standard quadratic equation: By Lagrange resolvents An alternative way of deriving the quadratic formula is via the method of Lagrange resolvents, which is an early part of Galois theory. This method can be generalized to give the roots of cubic polynomials and quartic polynomials, and leads to Galois theory, which allows one to understand the solution of algebraic equations of any degree in terms of the symmetry group of their roots, the Galois group. This approach focuses on the roots themselves rather than algebraically rearranging the original equation. Given a monic quadratic polynomial assume that and are the two roots. So the polynomial factors as which implies and . Since multiplication and addition are both commutative, exchanging the roots and will not change the coefficients and : one can say that and are symmetric polynomials in and . Specifically, they are the elementary symmetric polynomials – any symmetric polynomial in and can be expressed in terms of and instead. 
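Concretely, writing the monic quadratic as \(x^2 + px + q = (x - \alpha)(x - \beta)\) (the symbols \(p, q, \alpha, \beta\) are chosen here only for illustration), expanding the product gives the two elementary symmetric polynomials explicitly:
\[
\alpha + \beta = -p, \qquad \alpha\beta = q,
\]
which are the relations the resolvent argument below relies on.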
The Galois theory approach to analyzing and solving polynomials is to ask whether, given coefficients of a polynomial each of which is a symmetric function in the roots, one can "break" the symmetry and thereby recover the roots. Using this approach, solving a polynomial of degree is related to the ways of rearranging ("permuting") terms, called the symmetric group on letters and denoted . For the quadratic polynomial, the only ways to rearrange two roots are to either leave them be or to transpose them, so solving a quadratic polynomial is simple. To find the roots and , consider their sum and difference: These are called the Lagrange resolvents of the polynomial, from which the roots can be recovered as Because is a symmetric function in and , it can be expressed in terms of and specifically as described above. However, is not symmetric, since exchanging and yields the additive inverse . So cannot be expressed in terms of the symmetric polynomials. However, its square is symmetric in the roots, expressible in terms of and . Specifically , which implies . Taking the positive root "breaks" the symmetry, resulting in from which the roots and are recovered as which is the quadratic formula for a monic polynomial. Substituting , yields the usual expression for an arbitrary quadratic polynomial. The resolvents can be recognized as respectively the vertex and the discriminant of the monic polynomial. A similar but more complicated method works for cubic equations, which have three resolvents and a quadratic equation (the "resolving polynomial") relating and , which one can solve by the quadratic formula, and similarly for a quartic equation (degree 4), whose resolving polynomial is a cubic, which can in turn be solved. The same method for a quintic equation yields a polynomial of degree 24, which does not simplify the problem, and, in fact, solutions to quintic equations in general cannot be expressed using only radicals (roots). Numerical calculation The quadratic formula is exactly correct when performed using the idealized arithmetic of real numbers, but when approximate arithmetic is used instead, for example pen-and-paper arithmetic carried out to a fixed number of decimal places or the floating-point binary arithmetic available on computers, the limitations of the number representation can lead to substantially inaccurate results unless great care is taken in the implementation. Specific difficulties include catastrophic cancellation in computing the sum if ; catastrophic cancellation in computing the discriminant itself in cases where ; degeneration of the formula when , , or is represented as zero or infinite; and possible overflow or underflow when multiplying or dividing extremely large or small numbers, even in cases where the roots can be accurately represented. Catastrophic cancellation occurs when two numbers which are approximately equal are subtracted. While each of the numbers may independently be representable to a certain number of digits of precision, the identical leading digits of each number cancel, resulting in a difference of lower relative precision. When , evaluation of causes catastrophic cancellation, as does the evaluation of when . When using the standard quadratic formula, calculating one of the two roots always involves addition, which preserves the working precision of the intermediate calculations, while calculating the other root involves subtraction, which compromises it.
Therefore, naïvely following the standard quadratic formula often yields one result with less relative precision than expected. Unfortunately, introductory algebra textbooks typically do not address this problem, even though it causes students to obtain inaccurate results in other school subjects such as introductory chemistry. For example, if trying to solve the equation using a pocket calculator, the result of the quadratic formula might be approximately calculated as: Even though the calculator used ten decimal digits of precision for each step, calculating the difference between two approximately equal numbers has yielded a result for with only four correct digits. One way to recover an accurate result is to use the identity . In this example can be calculated as , which is correct to the full ten digits. Another more or less equivalent approach is to use the version of the quadratic formula with the square root in the denominator to calculate one of the roots (see above). Practical computer implementations of the solution of quadratic equations commonly choose which formula to use for each root depending on the sign of . These methods do not prevent possible overflow or underflow of the floating-point exponent in computing or , which can lead to numerically representable roots not being computed accurately. A more robust but computationally expensive strategy is to start with the substitution , turning the quadratic equation into where is the sign function. Letting , this equation has the form , for which one solution is and the other solution is . The roots of the original equation are then and . With additional complication the expense and extra rounding of the square roots can be avoided by approximating them as powers of two, while still avoiding exponent overflow for representable roots. Historical development The earliest methods for solving quadratic equations were geometric. Babylonian cuneiform tablets contain problems reducible to solving quadratic equations. The Egyptian Berlin Papyrus, dating back to the Middle Kingdom (2050 BC to 1650 BC), contains the solution to a two-term quadratic equation. The Greek mathematician Euclid (circa 300 BC) used geometric methods to solve quadratic equations in Book 2 of his Elements, an influential mathematical treatise Rules for quadratic equations appear in the Chinese The Nine Chapters on the Mathematical Art circa 200 BC. In his work Arithmetica, the Greek mathematician Diophantus (circa 250 AD) solved quadratic equations with a method more recognizably algebraic than the geometric algebra of Euclid. His solution gives only one root, even when both roots are positive. The Indian mathematician Brahmagupta included a generic method for finding one root of a quadratic equation in his treatise Brāhmasphuṭasiddhānta (circa 628 AD), written out in words in the style of the time but more or less equivalent to the modern symbolic formula. His solution of the quadratic equation was as follows: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value." In modern notation, this can be written . The Indian mathematician Śrīdhara (8th–9th century) came up with a similar algorithm for solving quadratic equations in a now-lost work on algebra quoted by Bhāskara II. 
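As a concrete illustration of the numerically robust evaluation described in the numerical-calculation discussion above, the following is a minimal Python sketch (a hypothetical helper, not taken from the article; it assumes real coefficients with a ≠ 0 and a non-negative discriminant): it computes the root whose formula involves no cancellation directly, then recovers the other root from the product identity x1*x2 = c/a.

import math

def solve_quadratic(a, b, c):
    # Roots of a*x**2 + b*x + c = 0, avoiding catastrophic cancellation
    # when b*b is much larger than 4*a*c (real roots assumed).
    d = b * b - 4 * a * c
    if d < 0:
        raise ValueError("complex roots are not handled in this sketch")
    # q has the same sign as -b, so b and the square root never cancel.
    q = -0.5 * (b + math.copysign(math.sqrt(d), b))
    x1 = q / a
    x2 = c / q if q != 0 else -b / (2 * a)   # second root via x1 * x2 = c / a
    return x1, x2

# A poorly conditioned case such as x**2 - 1634*x + 2 = 0 keeps full
# precision in the small root, where the naive formula loses digits.
print(solve_quadratic(1.0, -1634.0, 2.0))

Practical library implementations also guard against overflow in computing b*b and 4*a*c, which this sketch does not attempt.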
The modern quadratic formula is sometimes called Sridharacharya's formula in India and Bhaskara's formula in Brazil. The 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī solved quadratic equations algebraically. The quadratic formula covering all cases was first obtained by Simon Stevin in 1594. In 1637 René Descartes published La Géométrie containing special cases of the quadratic formula in the form we know today. Geometric significance In terms of coordinate geometry, an axis-aligned parabola is a curve whose -coordinates are the graph of a second-degree polynomial, of the form , where , , and are real-valued constant coefficients with . Geometrically, the quadratic formula defines the points on the graph, where the parabola crosses the -axis. Furthermore, it can be separated into two terms, The first term describes the axis of symmetry, the line . The second term, , gives the distance the roots are away from the axis of symmetry. If the parabola's vertex is on the -axis, then the corresponding equation has a single repeated root on the line of symmetry, and this distance term is zero; algebraically, the discriminant . If the discriminant is positive, then the vertex is not on the -axis but the parabola opens in the direction of the -axis, crossing it twice, so the corresponding equation has two real roots. If the discriminant is negative, then the parabola opens in the opposite direction, never crossing the -axis, and the equation has no real roots; in this case the two complex-valued roots will be complex conjugates whose real part is the value of the axis of symmetry. Dimensional analysis If the constants , , and/or are not unitless then the quantities and must have the same units, because the terms and agree on their units. By the same logic, the coefficient must have the same units as , irrespective of the units of . This can be a powerful tool for verifying that a quadratic expression of physical quantities has been set up correctly. See also Fundamental theorem of algebra Vieta's formulas Notes References Elementary algebra Equations
Quadratic formula
[ "Mathematics" ]
3,160
[ "Mathematical objects", "Equations", "Elementary algebra", "Elementary mathematics", "Algebra" ]
59,220
https://en.wikipedia.org/wiki/Base%20%28topology%29
In mathematics, a base (or basis; : bases) for the topology of a topological space is a family of open subsets of such that every open set of the topology is equal to the union of some sub-family of . For example, the set of all open intervals in the real number line is a basis for the Euclidean topology on because every open interval is an open set, and also every open subset of can be written as a union of some family of open intervals. Bases are ubiquitous throughout topology. The sets in a base for a topology, which are called , are often easier to describe and use than arbitrary open sets. Many important topological definitions such as continuity and convergence can be checked using only basic open sets instead of arbitrary open sets. Some topologies have a base of open sets with specific useful properties that may make checking such topological definitions easier. Not all families of subsets of a set form a base for a topology on . Under some conditions detailed below, a family of subsets will form a base for a (unique) topology on , obtained by taking all possible unions of subfamilies. Such families of sets are very frequently used to define topologies. A weaker notion related to bases is that of a subbase for a topology. Bases for topologies are also closely related to neighborhood bases. Definition and basic properties Given a topological space , a base (or basis) for the topology (also called a base for if the topology is understood) is a family of open sets such that every open set of the topology can be represented as the union of some subfamily of . The elements of are called basic open sets. Equivalently, a family of subsets of is a base for the topology if and only if and for every open set in and point there is some basic open set such that . For example, the collection of all open intervals in the real line forms a base for the standard topology on the real numbers. More generally, in a metric space the collection of all open balls about points of forms a base for the topology. In general, a topological space can have many bases. The whole topology is always a base for itself (that is, is a base for ). For the real line, the collection of all open intervals is a base for the topology. So is the collection of all open intervals with rational endpoints, or the collection of all open intervals with irrational endpoints, for example. Note that two different bases need not have any basic open set in common. One of the topological properties of a space is the minimum cardinality of a base for its topology, called the weight of and denoted . From the examples above, the real line has countable weight. If is a base for the topology of a space , it satisfies the following properties: (B1) The elements of cover , i.e., every point belongs to some element of . (B2) For every and every point , there exists some such that . Property (B1) corresponds to the fact that is an open set; property (B2) corresponds to the fact that is an open set. Conversely, suppose is just a set without any topology and is a family of subsets of satisfying properties (B1) and (B2). Then is a base for the topology that it generates. More precisely, let be the family of all subsets of that are unions of subfamilies of Then is a topology on and is a base for . (Sketch: defines a topology because it is stable under arbitrary unions by construction, it is stable under finite intersections by (B2), it contains by (B1), and it contains the empty set as the union of the empty subfamily of . 
The family is then a base for by construction.) Such families of sets are a very common way of defining a topology. In general, if is a set and is an arbitrary collection of subsets of , there is a (unique) smallest topology on containing . (This topology is the intersection of all topologies on containing .) The topology is called the topology generated by , and is called a subbase for . The topology can also be characterized as the set of all arbitrary unions of finite intersections of elements of . (See the article about subbase.) Now, if also satisfies properties (B1) and (B2), the topology generated by can be described in a simpler way without having to take intersections: is the set of all unions of elements of (and is a base for in that case). There is often an easy way to check condition (B2). If the intersection of any two elements of is itself an element of or is empty, then condition (B2) is automatically satisfied (by taking ). For example, the Euclidean topology on the plane admits as a base the set of all open rectangles with horizontal and vertical sides, and a nonempty intersection of two such basic open sets is also a basic open set. But another base for the same topology is the collection of all open disks; and here the full (B2) condition is necessary. An example of a collection of open sets that is not a base is the set of all semi-infinite intervals of the forms and with . The topology generated by contains all open intervals , hence generates the standard topology on the real line. But is only a subbase for the topology, not a base: a finite open interval does not contain any element of (equivalently, property (B2) does not hold). Examples The set of all open intervals in forms a basis for the Euclidean topology on . A non-empty family of subsets of a set that is closed under finite intersections of two or more sets, which is called a -system on , is necessarily a base for a topology on if and only if it covers . By definition, every σ-algebra, every filter (and so in particular, every neighborhood filter), and every topology is a covering -system and so also a base for a topology. In fact, if is a filter on then is a topology on and is a basis for it. A base for a topology does not have to be closed under finite intersections and many are not. But nevertheless, many topologies are defined by bases that are also closed under finite intersections. For example, each of the following families of subset of is closed under finite intersections and so each forms a basis for some topology on : The set of all bounded open intervals in generates the usual Euclidean topology on . The set of all bounded closed intervals in generates the discrete topology on and so the Euclidean topology is a subset of this topology. This is despite the fact that is not a subset of . Consequently, the topology generated by , which is the Euclidean topology on , is coarser than the topology generated by . In fact, it is strictly coarser because contains non-empty compact sets which are never open in the Euclidean topology. The set of all intervals in such that both endpoints of the interval are rational numbers generates the same topology as . This remains true if each instance of the symbol is replaced by . generates a topology that is strictly coarser than the topology generated by . No element of is open in the Euclidean topology on . generates a topology that is strictly coarser than both the Euclidean topology and the topology generated by . 
The sets and are disjoint, but nevertheless is a subset of the topology generated by . Objects defined in terms of bases The order topology on a totally ordered set admits a collection of open-interval-like sets as a base. In a metric space the collection of all open balls forms a base for the topology. The discrete topology has the collection of all singletons as a base. A second-countable space is one that has a countable base. The Zariski topology on the spectrum of a ring has a base consisting of open sets that have specific useful properties. For the usual base for this topology, every finite intersection of basic open sets is a basic open set. The Zariski topology of is the topology that has the algebraic sets as closed sets. It has a base formed by the set complements of algebraic hypersurfaces. The Zariski topology of the spectrum of a ring (the set of the prime ideals) has a base such that each element consists of all prime ideals that do not contain a given element of the ring. Theorems A topology is finer than a topology if and only if for each and each basic open set of containing , there is a basic open set of containing and contained in . If are bases for the topologies then the collection of all set products with each is a base for the product topology In the case of an infinite product, this still applies, except that all but finitely many of the base elements must be the entire space. Let be a base for and let be a subspace of . Then if we intersect each element of with , the resulting collection of sets is a base for the subspace . If a function maps every basic open set of into an open set of , it is an open map. Similarly, if every preimage of a basic open set of is open in , then is continuous. is a base for a topological space if and only if the subcollection of elements of which contain form a local base at , for any point . Base for the closed sets Closed sets are equally adept at describing the topology of a space. There is, therefore, a dual notion of a base for the closed sets of a topological space. Given a topological space a family of closed sets forms a base for the closed sets if and only if for each closed set and each point not in there exists an element of containing but not containing A family is a base for the closed sets of if and only if its in that is the family of complements of members of , is a base for the open sets of Let be a base for the closed sets of Then For each the union is the intersection of some subfamily of (that is, for any not in there is some containing and not containing ). Any collection of subsets of a set satisfying these properties forms a base for the closed sets of a topology on The closed sets of this topology are precisely the intersections of members of In some cases it is more convenient to use a base for the closed sets rather than the open ones. For example, a space is completely regular if and only if the zero sets form a base for the closed sets. Given any topological space the zero sets form the base for the closed sets of some topology on This topology will be the finest completely regular topology on coarser than the original one. In a similar vein, the Zariski topology on An is defined by taking the zero sets of polynomial functions as a base for the closed sets. Weight and character We shall work with notions established in . Fix X a topological space. 
Here, a network is a family of sets, for which, for all points x and open neighbourhoods U containing x, there exists B in for which Note that, unlike a basis, the sets in a network need not be open. We define the weight, w(X), as the minimum cardinality of a basis; we define the network weight, nw(X), as the minimum cardinality of a network; the character of a point, as the minimum cardinality of a neighbourhood basis for x in X; and the character of X to be The point of computing the character and weight is to be able to tell what sort of bases and local bases can exist. We have the following facts: nw(X) ≤ w(X). if X is discrete, then w(X) = nw(X) = |X|. if X is Hausdorff, then nw(X) is finite if and only if X is finite discrete. if B is a basis of X then there is a basis of size if N a neighbourhood basis for x in X then there is a neighbourhood basis of size if is a continuous surjection, then nw(Y) ≤ w(X). (Simply consider the Y-network for each basis B of X.) if is Hausdorff, then there exists a weaker Hausdorff topology so that So a fortiori, if X is also compact, then such topologies coincide and hence we have, combined with the first fact, nw(X) = w(X). if a continuous surjective map from a compact metrizable space to an Hausdorff space, then Y is compact metrizable. The last fact follows from f(X) being compact Hausdorff, and hence (since compact metrizable spaces are necessarily second countable); as well as the fact that compact Hausdorff spaces are metrizable exactly in case they are second countable. (An application of this, for instance, is that every path in a Hausdorff space is compact metrizable.) Increasing chains of open sets Using the above notation, suppose that w(X) ≤ κ some infinite cardinal. Then there does not exist a strictly increasing sequence of open sets (equivalently strictly decreasing sequence of closed sets) of length ≥ κ+. To see this (without the axiom of choice), fix as a basis of open sets. And suppose per contra, that were a strictly increasing sequence of open sets. This means For we may use the basis to find some Uγ with x in Uγ ⊆ Vα. In this way we may well-define a map, f : κ+ → κ mapping each α to the least γ for which Uγ ⊆ Vα and meets This map is injective, otherwise there would be α < β with f(α) = f(β) = γ, which would further imply Uγ ⊆ Vα but also meets which is a contradiction. But this would go to show that κ+ ≤ κ, a contradiction. See also Esenin-Volpin's theorem Gluing axiom Neighbourhood system Notes References Bibliography General topology
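To make the generation of a topology from a base, described in the definition section above, concrete on a finite set, here is a small Python sketch (the function name and the toy base are illustrative assumptions, not from the article): it forms every union of subfamilies of the base, which by properties (B1) and (B2) yields a topology.

from itertools import chain, combinations

def topology_from_base(base):
    # All unions of subfamilies of `base`; the empty union contributes
    # the empty set, and (B1)/(B2) guarantee the result is a topology.
    sets = [frozenset(b) for b in base]
    opens = {frozenset()}
    for r in range(1, len(sets) + 1):
        for subfamily in combinations(sets, r):
            opens.add(frozenset(chain.from_iterable(subfamily)))
    return opens

# Toy base on X = {1, 2, 3}: it covers X and is closed under intersection.
base = [{1}, {2, 3}, {1, 2, 3}]
print(sorted(map(set, topology_from_base(base)), key=len))
# [set(), {1}, {2, 3}, {1, 2, 3}]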
Base (topology)
[ "Mathematics" ]
2,856
[ "General topology", "Topology" ]
59,226
https://en.wikipedia.org/wiki/Chevron%20%28insignia%29
A chevron (also spelled cheveron, especially in older documents) is a V-shaped mark or symbol, often inverted. The word is usually used in reference to a kind of fret in architecture, or to a badge or insignia used in military or police uniforms to indicate rank or length of service, or in heraldry and the designs of flags (see flag terminology). Ancient history Appearing on pottery and petrographs throughout the ancient world, the chevron can be considered to be one of the oldest symbols in human history, with V-shaped markings occurring as early as the Neolithic era (6th to 5th millennia BC) as part of the Vinča symbols inventory. The Vinča culture responsible for the symbols appear to have used the chevron as part of a larger proto-writing system rather than any sort of heraldic or decorative use, and are not known to have passed the symbol on to any subsequent cultures. Many comparatively recent examples appear from approximately 1800 BC onward, beginning as part of an archaeological recovery of pottery designs from the palace of Knossos on Crete in the modern day country of Greece. Furthermore the Nubian Kingdom of Kerma produced pottery with decorative repertoire confined to geometric designs such as chevrons. Heraldry A chevron is one of the heraldic ordinaries, the simple geometrical figures which are the foundation of many coats of arms. A chevron is constructed by choosing a visually appealing angle such as the golden angle or any other angle the artist prefers. It can be subject to a number of modifications including inversion. When the ends are cut off in a way that looks like the splintered ends of a broken piece of wood, with an irregular zig-zag pattern, it is called éclaté. When shown as a smaller size than standard, it is a diminutive called a chevronel. Chevrons appeared early in the history of heraldry, especially in Normandy. In Scandinavia the chevron is known as sparre; an early example appears in the arms of Armand Desmondly. Rank insignia In Western European tradition, chevrons are used as an insignia of the ranks variously known, depending on the country, as non-commissioned officer or sub-officer ranks. This usage has become the worldwide norm, but there are many exceptions where other insignia, typically stripes but sometimes stars, are used for such ranks instead. Many countries, such as France and Italy, use chevrons proper, or colloquially, the chevrons "point up". Many others, such as most Commonwealth countries, use inverted chevrons, or colloquially, the chevrons "point down." In the United States, the Army and Marines use chevrons proper (although prior to the 20th century this was not true), while the Air Force, Navy, and Coast Guard use inverted chevrons. Arcs, known as "rockers", are also added to chevrons to indicate higher rank. English-speaking countries tend to use three chevrons for a sergeant and two for a corporal. Canadian and Australian Forces often refer to chevrons as "hooks". In the Dutch armed forces they are nicknamed "banana peels". In the British Army, Royal Marines and Royal Air Force, chevrons are worn point down to denote non-commissioned officer rank, with one for lance corporal, two for corporal, three for sergeant, and three with a crown for staff sergeant (known as colour sergeant in infantry regiments and the Royal Marines) or flight sergeant (RAF). Branch and tradition results in variations in rank titles (corporal of horse being the equivalent of sergeant in the Household Cavalry) and spellings (serjeant in The Rifles). 
Large chevrons are also worn on the sleeves of Royal Navy sailors to denote good conduct rather than rank. Although usually associated with non-commissioned officers, the chevron was originally used as an insignia to denote general officer ranks in the British Army. It was adopted from the insignia worn by cavalry during the 18th century, in particular the Household Cavalry. It was worn on the cuffs, forearms and tails of their coats, embroidered in gold bullion for the guards and silver for dragoon regiments. George III favoured the uniform of the Horse Guards, and his Windsor uniform followed a similar pattern. After 1768, a uniform of a similar pattern to that worn by the King was introduced for general officers, with the number and spacing of the chevrons denoting rank. For example, a major general would wear his chevrons in pairs: two on the sleeves, and two on the tails. A lieutenant general would wear them in groups of three, and a full general's would be equidistant. This practice continued into the early Victorian era. Vexillology In vexillology, a chevron is a triangle on the hoist of a flag. The chevron is used in several national flags, such as the flag of Cuba, the flag of the Czech Republic, the flag of Jordan, the flag of Equatorial Guinea and the flag of the Philippines. Other uses as insignia In some armies, small chevrons are worn on the lower left sleeve to indicate length of service, akin to service stripes in the U.S. military. The Israel Defense Forces use chevrons in various orientations as organizational designators on their vehicles, specifically to indicate which company within a battalion they belong to. NATO armed forces use the "Λ" chevron as insignia to represent the alliance between different armies during peacekeeping missions. The US-led coalition that took part in Operation Desert Storm used a black "Λ" chevron in a similar manner as NATO forces use it on their ground vehicles. The design was created by a soldier in 3AD after the US military sought markings to identify coalition vehicles due to increased fratricide incidents. Its symbolism, according to the artist SGT Grzywa, was meant to be a V for Victory, a tribute to WWII Coalition Forces. "V" chevrons were historically used as the insignia of the Russian Volunteer Army during the Russian Civil War, and in modern times as one of the military insignia used by Russian forces during the 2022 Russian invasion of Ukraine; Russian civilians have also used it in support of their government. French car maker Citroën uses a double chevron as its logo. Chevrons on their side are also used as road signs to denote bends. From the early 1950s until the early 2000s, Simplex, Faraday and many other companies manufactured the chevron series fire alarm manual pull station. The handle was shaped so that it looked like an inverted chevron. See also Arrow (symbol) Circumflex, a chevron-shaped diacritical mark Caron/haček, a diacritical mark known as "inverted chevron" References External links Heraldic ordinaries Ornaments Military heraldry Military insignia Architectural elements
Chevron (insignia)
[ "Technology", "Engineering" ]
1,414
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
59,231
https://en.wikipedia.org/wiki/Exception%20handling
In computing and computer programming, exception handling is the process of responding to the occurrence of exceptions – anomalous or exceptional conditions requiring special processing – during the execution of a program. In general, an exception breaks the normal flow of execution and executes a pre-registered exception handler; the details of how this is done depend on whether it is a hardware or software exception and how the software exception is implemented. Exceptions are defined by different layers of a computer system, and the typical layers are CPU-defined interrupts, operating system (OS)-defined signals, programming language-defined exceptions. Each layer requires different ways of exception handling although they may be interrelated, e.g. a CPU interrupt could be turned into an OS signal. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted. Definition The definition of an exception is based on the observation that each procedure has a precondition, a set of circumstances for which it will terminate "normally". An exception handling mechanism allows the procedure to raise an exception if this precondition is violated, for example if the procedure has been called on an abnormal set of arguments. The exception handling mechanism then handles the exception. The precondition, and the definition of exception, is subjective. The set of "normal" circumstances is defined entirely by the programmer, e.g. the programmer may deem division by zero to be undefined, hence an exception, or devise some behavior such as returning zero or a special "ZERO DIVIDE" value (circumventing the need for exceptions). Common exceptions include an invalid argument (e.g. value is outside of the domain of a function), an unavailable resource (like a missing file, a network drive error, or out-of-memory errors), or that the routine has detected a normal condition that requires special handling, e.g., attention, end of file. Social pressure is a major influence on the scope of exceptions and use of exception-handling mechanisms, i.e. "examples of use, typically found in core libraries, and code examples in technical books, magazine articles, and online discussion forums, and in an organization’s code standards". Exception handling solves the semipredicate problem, in that the mechanism distinguishes normal return values from erroneous ones. In languages without built-in exception handling such as C, routines would need to signal the error in some other way, such as the common return code and errno pattern. Taking a broad view, errors can be considered to be a proper subset of exceptions, and explicit error mechanisms such as errno can be considered (verbose) forms of exception handling. The term "exception" is preferred to "error" because it does not imply that anything is wrong - a condition viewed as an error by one procedure or programmer may not be viewed that way by another. The term "exception" may be misleading because its connotation of "anomaly" indicates that raising an exception is abnormal or unusual, when in fact raising the exception may be a normal and usual situation in the program. For example, suppose a lookup function for an associative array throws an exception if the key has no value associated. Depending on context, this "key absent" exception may occur much more often than a successful lookup. History The first hardware exception handling was found in the UNIVAC I from 1951. 
Arithmetic overflow executed two instructions at address 0 which could transfer control or fix up the result. Software exception handling developed in the 1960s and 1970s. Exception handling was subsequently widely adopted by many programming languages from the 1980s onward. Hardware exceptions There is no clear consensus as to the exact meaning of an exception with respect to hardware. From the implementation point of view, it is handled identically to an interrupt: the processor halts execution of the current program, looks up the interrupt handler in the interrupt vector table for that exception or interrupt condition, saves state, and switches control. IEEE 754 floating-point exceptions Exception handling in the IEEE 754 floating-point standard refers in general to exceptional conditions and defines an exception as "an event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application. That operation might signal one or more exceptions by invoking the default or, if explicitly requested, a language-defined alternate handling." By default, an IEEE 754 exception is resumable and is handled by substituting a predefined value for different exceptions, e.g. infinity for a divide by zero exception, and providing status flags for later checking of whether the exception occurred (see C99 programming language for a typical example of handling of IEEE 754 exceptions). An exception-handling style enabled by the use of status flags involves: first computing an expression using a fast, direct implementation; checking whether it failed by testing status flags; and then, if necessary, calling a slower, more numerically robust implementation. The IEEE 754 standard uses the term "trapping" to refer to the calling of a user-supplied exception-handling routine on exceptional conditions; this trapping is an optional feature of the standard. The standard recommends several usage scenarios for this, including the implementation of non-default pre-substitution of a value followed by resumption, to concisely handle removable singularities. The default IEEE 754 exception handling behaviour of resumption following pre-substitution of a default value avoids the risks inherent in changing flow of program control on numerical exceptions. For example, the 1996 Cluster spacecraft launch ended in a catastrophic explosion due in part to the Ada exception handling policy of aborting computation on arithmetic error. William Kahan claims the default IEEE 754 exception handling behavior would have prevented this. In programming languages In user interfaces Front-end web development frameworks, such as React and Vue, have introduced error handling mechanisms where errors propagate up the user interface (UI) component hierarchy, in a way that is analogous to how errors propagate up the call stack in executing code. Here the error boundary mechanism serves as an analogue to the typical try-catch mechanism. Thus a component can ensure that errors from its child components are caught and handled, and not propagated up to parent components. For example, in Vue, a component would catch errors by implementing an errorCaptured hook:

Vue.component('parent', {
  template: '<div><slot></slot></div>',
  errorCaptured: (err, vm, info) => alert('An error occurred')
})
Vue.component('child', {
  template: '<div>{{ cause_error() }}</div>'
})

When used like this in markup:

<parent>
  <child></child>
</parent>

The error produced by the child component is caught and handled by the parent component.
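The contrast drawn earlier between return-code conventions (such as C's errno pattern) and exception handling can be made concrete with a short, generic sketch in Python (hypothetical functions, not tied to any library discussed above):

# Return-code style: the error is signalled in-band and the caller
# must remember to check the flag.
def lookup_with_code(table, key):
    if key in table:
        return True, table[key]
    return False, None

# Exception style: the failure path raises, so it cannot be silently
# ignored and is handled by a pre-registered handler (the except block).
def lookup_with_exception(table, key):
    if key not in table:
        raise KeyError(key)
    return table[key]

table = {"a": 1}
found, value = lookup_with_code(table, "b")   # caller must test `found`
try:
    value = lookup_with_exception(table, "b")
except KeyError:                               # handler for the exceptional case
    value = 0

Whether a missing key counts as an exceptional condition or as a normal outcome is, as noted above, a design decision left to the programmer.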
See also Triple fault Data validation References External links A Crash Course on the Depths of Win32 Structured Exception Handling by Matt Pietrek - Microsoft Systems Journal (1997) Article "C++ Exception Handling" by Christophe de Dinechin Article "Exceptional practices" by Brian Goetz Article "Object Oriented Exception Handling in Perl" by Arun Udaya Shankar Article "Programming with Exceptions in C++" by Kyle Loudon Article "Unchecked Exceptions - The Controversy" Conference slides Floating-Point Exception-Handling policies (pdf p. 46) by William Kahan Descriptions from Portland Pattern Repository Does Java Need Checked Exceptions? Control flow Software anomalies
Exception handling
[ "Technology" ]
1,556
[ "Computer errors", "Technological failures", "Software anomalies" ]
59,243
https://en.wikipedia.org/wiki/Amphibole
Amphibole ( ) is a group of inosilicate minerals, forming prism or needlelike crystals, composed of double chain tetrahedra, linked at the vertices and generally containing ions of iron and/or magnesium in their structures. Its IMA symbol is Amp. Amphiboles can be green, black, colorless, white, yellow, blue, or brown. The International Mineralogical Association currently classifies amphiboles as a mineral supergroup, within which are two groups and several subgroups. Mineralogy Amphiboles crystallize into two crystal systems, monoclinic and orthorhombic. In chemical composition and general characteristics they are similar to the pyroxenes. The chief differences from pyroxenes are that (i) amphiboles contain essential hydroxyl (OH) or halogen (F, Cl) and (ii) the basic structure is a double chain of tetrahedra (as opposed to the single chain structure of pyroxene). Most apparent, in hand specimens, is that amphiboles form oblique cleavage planes (at around 120 degrees), whereas pyroxenes have cleavage angles of approximately 90 degrees. Amphiboles are also specifically less dense than the corresponding pyroxenes. Amphiboles are the primary constituent of amphibolites. Structure Like pyroxenes, amphiboles are classified as inosilicate (chain silicate) minerals. However, the pyroxene structure is built around single chains of silica tetrahedra while amphiboles are built around double chains of silica tetrahedra. In other words, as with almost all silicate minerals, each silicon ion is surrounded by four oxygen ions. In amphiboles, some of the oxygen ions are shared between silicon ions to form a double chain structure as depicted below. These chains extend along the [001] axis of the crystal. One side of each chain has apical oxygen ions, shared by only one silicon ion, and pairs of double chains are bound to each other by metal ions that connect apical oxygen ions. The pairs of double chains have been likened to I-beams. Each I-beam is bonded to its neighbor by additional metal ions to form the complete crystal structure. Large gaps in the structure may be empty or partially filled by large metal ions, such as sodium, but remain points of weakness that help define the cleavage planes of the crystal. In rocks Amphiboles are minerals of either igneous or metamorphic origin. Amphiboles are more common in intermediate to felsic igneous rocks than in mafic igneous rocks, because the higher silica and dissolved water content of the more evolved magmas favors formation of amphiboles rather than pyroxenes. The highest amphibole content, around 20%, is found in andesites. Hornblende is widespread in igneous and metamorphic rocks and is particularly common in syenites and diorites. Calcium is sometimes a constituent of naturally occurring amphiboles. Amphiboles of metamorphic origin include those developed in limestones by contact metamorphism (tremolite) and those formed by the alteration of other ferromagnesian minerals (such as hornblende as an alteration product of pyroxene). Pseudomorphs of amphibole after pyroxene are known as uralite. History and etymology The name amphibole derives from Greek (, ), implying ambiguity. The name was used by to include tremolite, actinolite and hornblende. The group was so named by Haüy in allusion to the protean variety, in composition and appearance, assumed by its minerals. This term has since been applied to the whole group. Numerous sub-species and varieties are distinguished, the more important of which are tabulated below in two series. 
The formulae of each will be seen to be built on the general double-chain silicate formula RSi4O11. Four of the amphibole minerals are commonly called asbestos. These are: anthophyllite, riebeckite, the cummingtonite/grunerite series, and the actinolite/tremolite series. The cummingtonite/grunerite series is often termed amosite or "brown asbestos", and riebeckite is known as crocidolite or "blue asbestos". These are generally called amphibole asbestos. Mining, manufacture and prolonged use of these minerals can cause serious illnesses. Mineral species The more common amphiboles are classified as shown in the following table: Other species Orthorhombic series Holmquistite, Li2Mg3Al2Si8O22(OH)2 Monoclinic series Pargasite, NaCa2Mg3Fe2+Si6Al3O22(OH)2 Winchite, (CaNa)Mg4(Al,Fe3+)Si8O22(OH)2 Edenite, NaCa2Mg5(Si7Al)O22(OH)2 Series Certain amphibole minerals form solid solution series, at least at elevated temperature. Ferrous iron usually substitutes freely for magnesium in amphiboles to form continuous solid solution series between magnesium-rich and iron-rich endmembers. These include the cummingtonite (magnesium) to grunerite (iron) endmembers, where the dividing line is placed at 30% magnesium. In addition, the orthoamphiboles, anthophyllite and gedrite, which differ in their aluminium content, form a continuous solid solution at elevated temperature. As the amphibole cools, the two end members exsolve to form very thin layers (lamellae). Hornblende is highly variable in composition, and includes at least five solid solution series: magnesiohornblende-ferrohornblende (), tschermakite-ferrotschermakite (), edenite-ferroedenite (), pargasite-ferropargasite () and magnesiohastingsite-hastingsite (). In addition, titanium, manganese, or chromium can substitute for some of the cations and oxygen, fluorine, or chlorine for some of the hydroxide. The different chemical types are almost impossible to distinguish even by optical or X-ray methods, and detailed chemical analysis using an electron microprobe is required. Glaucophane to riebeckite form yet another solid solution series, which also extends towards hornblende and arfvedsonite. There is not a continuous series between calcic clinoamphiboles, such as hornblende, and low-calcium amphiboles, such as orthoamphiboles or the cummingtonite-grunerite series. Compositions intermediate in calcium are almost nonexistent in nature. However, there is a solid solution series between hornblende and tremolite-actinolite at elevated temperature. A miscibility gap exists at lower temperatures, and, as a result, hornblende often contains exsolution lamellae of grunerite. Descriptions On account of the wide variations in chemical composition, the different members vary considerably in properties and general appearance. Anthophyllite occurs as brownish, fibrous or lamellar masses with hornblende in mica-schist at Kongsberg in Norway and some other localities. An aluminous related species is known as gedrite and a deep green Russian variety containing little iron as kupfferite. Hornblende is an important constituent of many igneous rocks. It is also an important constituent of amphibolites formed by metamorphism of basalt. Actinolite is an important and common member of the monoclinic series, forming radiating groups of acicular crystals of a bright green or greyish-green color. It occurs frequently as a constituent of greenschists.
The name (from Greek ἀκτίς, ἀκτῖνος/aktís, aktînos, a 'ray' and λίθος/líthos, a 'stone') is a translation of the old German word Strahlstein (radiated stone). Glaucophane, crocidolite, riebeckite and arfvedsonite form a somewhat special group of alkali-amphiboles. The first two are blue fibrous minerals, with glaucophane occurring in blueschists and crocidolite (blue asbestos) in ironstone formations, both resulting from dynamo-metamorphic processes. The latter two are dark green minerals, which occur as original constituents of igneous rocks rich in sodium, such as nepheline-syenite and phonolite. Pargasite is a rare magnesium-rich variety of hornblende with essential sodium, usually found in ultramafic rocks. For instance, it occurs in uncommon mantle xenoliths, carried up by kimberlite. It is hard, dense, black and usually automorphic, with a red-brown pleochroism in petrographic thin section. See also List of minerals Classification of silicate minerals References Inosilicates Asbestos
Amphibole
[ "Environmental_science" ]
1,946
[ "Toxicology", "Asbestos" ]
59,338
https://en.wikipedia.org/wiki/Bracket
A bracket is either of two tall fore- or back-facing punctuation marks commonly used to isolate a segment of text or data from its surroundings. They come in four main pairs of shapes, with names that vary between British and American English. "Brackets", without further qualification, are in British English the ( ) marks and in American English the [ ] marks. Other, less common bracket shapes exist, such as the slash or diagonal brackets used by linguists to enclose phonemes. Brackets are typically deployed in symmetric pairs, and an individual bracket may be identified as a 'left' or 'right' bracket or, alternatively, an "opening bracket" or "closing bracket", respectively, depending on the directionality of the context. In casual writing and in technical fields such as computing or linguistic analysis of grammar, brackets nest, with segments of bracketed material containing further bracketed sub-segments embedded within them. The number of opening brackets matches the number of closing brackets in such cases. Various forms of brackets are used in mathematics, with specific mathematical meanings, often for denoting specific mathematical functions and subformulas. History Angle brackets or chevrons ⟨ ⟩ were the earliest type of bracket to appear in written English. Erasmus coined the term lunula to refer to the round brackets or parentheses ( ), recalling the shape of the crescent moon (Latin luna). Most typewriters only had the left and right parentheses. Square brackets appeared with some teleprinters. Braces (curly brackets) first became part of a character set with the 8-bit code of the IBM 7030 Stretch. In 1961, ASCII contained parentheses, square, and curly brackets, and also less-than and greater-than signs that could be used as angle brackets. Typography In English, typographers mostly prefer not to set brackets in italics, even when the enclosed text is italic. However, in other languages like German, if brackets enclose text in italics, they are usually also set in italics. Parentheses or round brackets The marks ( and ) are parentheses (singular: parenthesis) in American English, and either round brackets or simply brackets in British English. They are also known as "parens", "circle brackets", or "smooth brackets". In formal writing, "parentheses" is also used in British English. Uses of ( ) Parentheses contain adjunctive material that serves to clarify (in the manner of a gloss) or is aside from the main point. A comma before or after the material can also be used, though if the sentence contains commas for other purposes, visual confusion may result. A dash before and after the material is also sometimes used. Parentheses may be used in formal writing to add supplementary information, such as "Senator John McCain (R - Arizona) spoke at length". They can also indicate shorthand for "either singular or plural" for nouns, e.g. "the claim(s)", and can be used for gender-neutral language, especially in languages with grammatical gender, e.g. "(s)he agreed with his/her physician" (the slash in the second instance, as one alternative is replacing the other, not adding to it). Parenthetical phrases have been used extensively in informal writing and stream of consciousness literature. Examples include the southern American author William Faulkner (see Absalom, Absalom! and the Quentin section of The Sound and the Fury) as well as poet E. E. Cummings.
Parentheses have historically been used where the em dash is currently used in alternatives, such as "parenthesis)(parentheses". Examples of this usage can be seen in editions of Fowler's Dictionary of Modern English Usage. Parentheses may be nested (generally with one set (such as this) inside another set). This is not commonly used in formal writing (though sometimes other brackets [especially square brackets] will be used for one or more inner set of parentheses [in other words, secondary {or even tertiary} phrases can be found within the main parenthetical sentence]). Language A parenthesis in rhetoric and linguistics refers to the entire bracketed text, not just to the enclosing marks used (so all the text in this set of round brackets may be described as "a parenthesis"). Taking as an example the sentence "Mrs. Pennyfarthing (What? Yes, that was her name!) was my landlady.", the explanatory phrase between the parentheses is itself called a parenthesis. Again, the parenthesis implies that the meaning and flow of the bracketed phrase is supplemental to the rest of the text and the whole would be unchanged were the parenthesized sentences removed. The term refers to the syntax rather than the enclosure method: the same clause in the form "Mrs. PennyfarthingWhat? Yes, that was her name!was my landlady" is also a parenthesis. (In non-specialist usage, the term "parenthetical phrase" is more widely understood.) In phonetics, parentheses are used for indistinguishable or unidentified utterances. They are also seen for silent articulation (mouthing), where the expected phonetic transcription is derived from lip-reading, and with periods to indicate silent pauses, for example or . Enumerations An unpaired right parenthesis is often used as part of a label in an ordered list, such as this one: Accounting Traditionally in accounting, contra amounts are placed in parentheses. A debit balance account in a series of credit balances will have parenthesis and vice versa. Parentheses in mathematics Parentheses are used in mathematical notation to indicate grouping, often inducing a different order of operations. For example: in the usual order of algebraic operations, equals 14, since the multiplication is done before the addition. However, equals 20, because the parentheses override normal precedence, causing the addition to be done first. Some authors follow the convention in mathematical equations that, when parentheses have one level of nesting, the inner pair are parentheses and the outer pair are square brackets. Example: Parentheses in programming languages Parentheses are included in the syntaxes of many programming languages. Typically needed to denote an argument; to tell the compiler what data type the Method/Function needs to look for first in order to initialise. In some cases, such as in LISP, parentheses are a fundamental construct of the language. They are also often used for scoping functions and operators and for arrays. In syntax diagrams they are used for grouping, such as in extended Backus–Naur form. In Mathematica and the Wolfram language, parentheses are used to indicate groupingfor example, with pure anonymous functions. Taxonomy If it is desired to include the subgenus when giving the scientific name of an animal species or subspecies, the subgenus's name is provided in parentheses between the genus name and the specific epithet. 
For instance, Polyphylla (Xerasiobia) alba is a way to cite the species Polyphylla alba while also mentioning that it is in the subgenus Xerasiobia. There is also a convention of citing a subgenus by enclosing it in parentheses after its genus, e.g., Polyphylla (Xerasiobia) is a way to refer to the subgenus Xerasiobia within the genus Polyphylla. Parentheses are similarly used to cite a subgenus with the name of a prokaryotic species, although the International Code of Nomenclature of Prokaryotes (ICNP) requires the use of the abbreviation "subgen". as well, e.g., Acetobacter (subgen. Gluconoacetobacter) liquefaciens. Chemistry Parentheses are used in chemistry to denote a repeated substructure within a molecule, e.g. HC(CH3)3 (isobutane) or, similarly, to indicate the stoichiometry of ionic compounds with such substructures: e.g. Ca(NO3)2 (calcium nitrate). This is a notation that was pioneered by Berzelius, who wanted chemical formulae to more resemble algebraic notation, with brackets enclosing groups that could be multiplied (e.g. in 3(AlO2 + 2SO3) the 3 multiplies everything within the parentheses). In chemical nomenclature, parentheses are used to distinguish structural features and multipliers for clarity, for example in the polymer poly(methyl methacrylate). Square brackets and are square brackets in both British and American English, but are also more simply brackets in the latter. An older name for these brackets is "crotchets". Uses of [ ] Square brackets are often used to insert explanatory material or to mark where a [word or] passage was omitted from an original material by someone other than the original author, or to mark modifications in quotations. In transcribed interviews, sounds, responses and reactions that are not words but that can be described are set off in square brackets — "... [laughs] ...". When quoted material is in any way altered, the alterations are enclosed in square brackets within the quotation to show that the quotation is not exactly as given, or to add an annotation. For example: The Plaintiff asserted his cause is just, stating, In the original quoted sentence, the word "my" was capitalized: it has been modified in the quotation given and the change signalled with brackets. Similarly, where the quotation contained a grammatical error (is/are), the quoting author signalled that the error was in the original with "[sic]" (Latin for 'thus'). A bracketed ellipsis, [...], is often used to indicate omitted material: "I'd like to thank [several unimportant people] for their tolerance [...]" Bracketed comments inserted into a quote indicate where the original has been modified for clarity: "I appreciate it [the honor], but I must refuse", and "the future of psionics [see definition] is in doubt". Or one can quote the original statement "I hate to do laundry" with a (sometimes grammatical) modification inserted: He "hate[s] to do laundry". Additionally, a small letter can be replaced by a capital one, when the beginning of the original printed text is being quoted in another piece of text or when the original text has been omitted for succinctness— for example, when referring to a verbose original: "To the extent that policymakers and elite opinion in general have made use of economic analysis at all, they have, as the saying goes, done so the way a drunkard uses a lamppost: for support, not illumination", can be quoted succinctly as: "[P]olicymakers [...] have made use of economic analysis [...] 
the way a drunkard uses a lamppost: for support, not illumination." When nested parentheses are needed, brackets are sometimes used as a substitute for the inner pair of parentheses within the outer pair. When deeper levels of nesting are needed, the convention is to alternate between parentheses and brackets at each level. Alternatively, empty square brackets can also indicate omitted material, usually a single letter only. The original, "Reading is also a process and it also changes you.", can be rewritten in a quote as: It has been suggested that reading can "also change[] you". In translated works, brackets are used to signify the same word or phrase in the original language to avoid ambiguity. For example: He is trained in the way of the open hand [karate]. Style and usage guides originating in the news industry of the twentieth century, such as the AP Stylebook, recommend against the use of square brackets because "They cannot be transmitted over news wires." However, this guidance has little relevance outside of the technological constraints of the industry and era. In linguistics, phonetic transcriptions are generally enclosed within square brackets, whereas phonemic transcriptions typically use paired slashes, according to International Phonetic Alphabet rules. Pipes (| |) are often used to indicate a morphophonemic rather than phonemic representation. Other conventions are double slashes (⫽ ⫽), double pipes (‖ ‖) and curly brackets ({ }). In lexicography, square brackets usually surround the section of a dictionary entry which contains the etymology of the word the entry defines. Proofreading Brackets (called move-left symbols or move-right symbols) are added to the sides of text in proofreading to indicate changes in indentation: Square brackets are used to denote parts of the text that need to be checked when preparing drafts prior to finalizing a document. Law Square brackets are used in some countries in the citation of law reports to identify parallel citations to non-official reporters. For example: In some other countries (such as England and Wales), square brackets are used to indicate that the year is part of the citation and parentheses are used to indicate the year the judgment was given. For example: This case is in the 1954 volume of the Appeal Cases reports, although the decision may have been given in 1953 or earlier. Compare with: This citation reports a decision from 1954, in volume 98 of the Solicitors Journal, which may have been published in 1955 or later. They often denote points that have not yet been agreed to in legal drafts and the year in which a report was made for certain case law decisions. Square brackets in mathematics Brackets are used in mathematics in a variety of notations, including standard notations for commutators, the floor function, the Lie bracket, equivalence classes, the Iverson bracket, and matrices. Square brackets may be used exclusively or in combination with parentheses to represent intervals as interval notation. For example, [0, 5] represents the set of real numbers from 0 to 5 inclusive. Both parentheses and brackets are used to denote a half-open interval; [5, 12) would be the set of all real numbers between 5 and 12, including 5 but not 12. The numbers may come as close as they like to 12, including 11.999 and so forth, but 12.0 is not included. In some European countries, the notation [5, 12[ is also used. The endpoint adjoining the square bracket is known as closed, whereas the endpoint adjoining the parenthesis is known as open.
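Written out in set-builder form, the two intervals just mentioned read as follows (a routine restatement in LaTeX notation, not a quotation from any cited source):

    [0, 5] = \{\, x \in \mathbb{R} : 0 \le x \le 5 \,\}
    [5, 12) = \{\, x \in \mathbb{R} : 5 \le x < 12 \,\}

In each case the square bracket marks a closed (included) endpoint and the parenthesis marks an open (excluded) endpoint, as described above.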
In group theory and ring theory, brackets denote the commutator. In group theory, the commutator [g, h] is commonly defined as g⁻¹h⁻¹gh. In ring theory, the commutator [a, b] is defined as ab − ba. Chemistry Square brackets can also be used in chemistry to represent the concentration of a chemical substance in solution and to denote the charge of a Lewis structure of an ion (particularly distributed charge in a complex ion), repeating chemical units (particularly in polymers) and transition state structures, among other uses. Square brackets in programming languages Brackets are used in many computer programming languages, primarily for array indexing. But they are also used to denote general tuples, sets and other structures, just as in mathematics. There may be several other uses as well, depending on the language at hand. In syntax diagrams they are used for optional portions, such as in extended Backus–Naur form. Double brackets ⟦ ⟧ Double brackets (or white square brackets or Scott brackets), ⟦ ⟧, are used to indicate the semantic evaluation function in formal semantics for natural language and denotational semantics for programming languages. In the Wolfram Language, double brackets, either as iterated single brackets ([[ ]]) or as the ligature characters (〚 〛), are used for list indexing. The brackets stand for a function that maps a linguistic expression to its "denotation" or semantic value. In mathematics, double brackets may also be used to denote intervals of integers or, less often, the floor function. In papyrology, following the Leiden Conventions, they are used to enclose text that has been deleted in antiquity. Lenticular brackets【】 Some East Asian languages use lenticular brackets 【 】, a combination of square brackets and round brackets, called fāngtóu kuòhào in Chinese and sumitsuki kakko in Japanese. They are used in titles and headings in both Chinese and Japanese. On the Internet, they are used to emphasize text. In Japanese, they are most frequently seen in dictionaries for quoting Chinese characters and Sino-Japanese loanwords. Floor ⌊ ⌋ and ceiling ⌈ ⌉ corner brackets The floor corner brackets ⌊ and ⌋, and the ceiling corner brackets ⌈ and ⌉ (U+2308, U+2309), are used to denote the integer floor and ceiling functions. Quine corners ⌜⌝ and half brackets ⸤ ⸥ or ⸢ ⸣ The Quine corners ⌜ and ⌝ have at least two uses in mathematical logic: either as quasi-quotation, a generalization of quotation marks, or to denote the Gödel number of the enclosed expression. Half brackets are used in English to mark added text, such as in translations: "Bill saw ⸤her⸥". In editions of papyrological texts, half brackets, ⸤ and ⸥ or ⸢ and ⸣, enclose text which is lacking in the papyrus due to damage, but can be restored by virtue of another source, such as an ancient quotation of the text transmitted by the papyrus. For example, Callimachus Iambus 1.2 reads: ἐκ τῶν ὅκου βοῦν κολλύ⸤βου π⸥ιπρήσκουσιν. A hole in the papyrus has obliterated βου π, but these letters are supplied by an ancient commentary on the poem. Second intermittent sources can be between ⸢ and ⸣. Quine corners are sometimes used instead of half brackets. Brackets with quills ⁅ ⁆ Known as "spike parentheses", ⁅ and ⁆ are used in Swedish bilingual dictionaries to enclose supplemental constructions. Curly brackets The characters { and } are curly brackets or braces in both American and British English. Uses of { } Curly brackets are used by text editors to mark editorial insertions or interpolations.
Braces were formerly used to connect multiple lines of poetry, such as triplets in a poem of rhyming couplets, although this usage had gone out of fashion by the 19th century. Another older use in prose was to eliminate duplication in lists and tables; two examples appear in Charles Hutton's 19th-century table of weights and measures in his A Course of Mathematics. As an extension to the International Phonetic Alphabet (IPA), braces are used for prosodic notation. Music In music, they are known as "accolades" or "braces", and connect two or more lines (staves) of music that are played simultaneously. Chemistry The use of braces in chemistry is an old notation that has long since been superseded by subscripted numbers. The chemical formula for water, H2O, was represented as . Curly brackets in programming languages In many programming languages, curly brackets enclose groups of statements and create a local scope. Such languages (C, C#, C++ and many others) are therefore called curly bracket languages. They are also used to define structures and enumerated types in these languages. In various Unix shells, they enclose a group of strings that are used in a process known as brace expansion, where each successive string in the group is interpolated at that point in the command line to generate the command line's final form. The mechanism originated in the C shell, and the string generation is a simple interpolation that can occur anywhere in a command line and takes no account of existing filenames. In syntax diagrams they are used for repetition, such as in extended Backus–Naur form. In the Z formal specification language, braces define a set. Curly brackets in mathematics In mathematics they delimit sets, in what is called set notation. Braces enclose either a literal list of set elements, or a rule that defines the set elements. For example, {a, b} defines a set containing a and b, while {x : x > 0} defines a set containing elements (implied to be numbers) that all satisfy the rule that they are greater than zero. They are often also used to denote the Poisson bracket between two quantities. In ring theory, braces denote the anticommutator, where {a, b} is defined as ab + ba. Angle brackets The characters ⟨ and ⟩ are angle brackets in both American and British English. In (largely archaic) computer slang, they are sometimes known as "brokets". Strictly speaking they are distinct from V-shaped chevrons, as they have (where the typography permits it) a broader span than chevrons, although when printed often no visual distinction is made. The ASCII less-than and greater-than characters are often used for angle brackets. In most cases only those characters are accepted by computer programs, and the Unicode angle brackets are not recognized (for instance, in HTML tags). The characters for "single" guillemets are also often used, and sometimes normal guillemets when nested angle brackets are needed. The angle brackets or chevrons at U+27E8 and U+27E9 are for mathematical use and Western languages, whereas U+3008 and U+3009 are for East Asian languages. The chevrons at U+2329 and U+232A are deprecated in favour of the U+3008 and U+3009 East Asian angle brackets. Unicode discourages their use for mathematics and in Western texts, because they are canonically equivalent to the CJK code points U+300x and thus likely to render as double-width symbols. The less-than and greater-than symbols are often used as replacements for chevrons.
Shape Angle brackets are larger than less-than and greater-than signs, which in turn are larger than guillemets. Uses of ⟨ ⟩ Angle brackets are infrequently used to denote words that are thought instead of spoken, such as: In textual criticism, and hence in many editions of pre-modern works, chevrons denote sections of the text which are illegible or otherwise lost; the editor will often insert their own reconstruction where possible within them. In comic books, chevrons are often used to mark dialogue that has been translated notionally from another language; in other words, if a character is speaking another language, instead of writing in the other language and providing a translation, one writes the translated text within chevrons. Since no foreign language is actually written, this is only notionally translated. In linguistics, angle brackets identify graphemes (i.e. letters of an alphabet) or orthography, as in "The English word is spelled ." In epigraphy, they may be used for mechanical transliterations of a text into the Latin script. In East Asian punctuation, angle brackets are used as quotation marks. Chevron-like symbols are part of standard Chinese, Japanese and less frequently Korean punctuation, where they generally enclose the titles of books, as: 〈 ︙ 〉 or 《 ︙ 》 for traditional vertical printing — written in vertical lines — and as 〈 ... 〉 or 《 ... 》 for horizontal printing. Angle brackets in mathematics Angle brackets (or 'chevrons') are used in group theory to write group presentations, and to denote the subgroup generated by a collection of elements. In set theory, chevrons or parentheses are used to denote ordered pairs and other tuples, whereas curly brackets are used for unordered sets. Physics and mechanics In physical sciences and statistical mechanics, angle brackets are used to denote an average (expected value) over time or over another continuous parameter. For example: In mathematical physics, especially quantum mechanics, it is common to write the inner product between elements as ⟨a | b⟩, or as ⟨a | A | b⟩, where A is an operator. This is known as Dirac notation or bra–ket notation, in which the "bra" ⟨a| denotes a vector from the dual space and the "ket" |b⟩ a vector from the original space. But there are other notations used. In continuum mechanics, chevrons may be used as Macaulay brackets. Angle brackets in programming languages In C++ chevrons (actually less-than and greater-than) are used to surround arguments to templates. They are also used to surround the names of header files; this usage was inherited from, and is also found in, C. In the Z formal specification language, chevrons define a sequence. In HTML, chevrons (actually 'greater than' and 'less than' symbols) are used to bracket meta text. For example, <b> denotes that the following text should be displayed as bold. Pairs of meta text tags are required – much as brackets themselves are usually in pairs. The end of the bold text segment would be indicated by </b>. This use is sometimes extended as an informal mechanism for communicating mood or tone in digital formats such as messaging, for example adding "<sighs>" at the end of a sentence. Unicode The various kinds of brackets have distinct representations in Unicode and corresponding HTML character entities beyond those mentioned in the preceding sections.
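As a brief illustration of the programming uses of angle brackets described above (template or generic parameters, and markup tags), here is a minimal TypeScript sketch; the function name and values are invented for the example and are not taken from any particular codebase:

    // Generics use angle brackets much as C++ templates do.
    function firstOrNull<T>(items: Array<T>): T | null {
      return items.length > 0 ? items[0] : null;
    }
    // In markup such as HTML, '<' and '>' delimit tags; here the tag
    // characters simply appear inside an ordinary string.
    const bolded: string = "<b>" + String(firstOrNull(["important"])) + "</b>";
    console.log(bolded); // <b>important</b>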
See also Bracket (mathematics) International variation in quotation marks Emoticon Japanese typographic symbols Order of operations Triple parentheses References Sources States that what are depicted as brackets above are called braces and braces are called brackets. This was the terminology in US printing prior to computers. External links Punctuation Mathematical notation
Bracket
[ "Mathematics" ]
5,226
[ "nan" ]
59,348
https://en.wikipedia.org/wiki/Question%20mark
The question mark (also known as interrogation point, query, or eroteme in journalism) is a punctuation mark that indicates a question or interrogative clause or phrase in many languages. History In the fifth century, Syriac Bible manuscripts used question markers, according to a 2011 theory by manuscript specialist Chip Coakley: he believes the zagwa elaya ("upper pair"), a vertical double dot over a word at the start of a sentence, indicates that the sentence is a question. From around 783, in Godescalc Evangelistary, a mark described as "a lightning flash, striking from right to left" is attested. This mark is later called a . According to some paleographers, it may have indicated intonation, perhaps associated with early musical notation like neumes. Another theory, is that the "lightning flash" was originally a tilde or titlo, as in , one of many wavy or more or less slanted marks used in medieval texts for denoting things such as abbreviations, which would later become various diacritics or ligatures. From the 10th century, the pitch-defining element (if it ever existed) seems to have been gradually forgotten, so that the "lightning flash" sign (with the stroke sometimes slightly curved) is often seen indifferently at the end of clauses, whether they embody a question or not. In the early 13th century, when the growth of communities of scholars (universities) in Paris and other major cities led to an expansion and streamlining of the book-production trade, punctuation was rationalized by assigning the "lightning flash" specifically to interrogatives; by this time the stroke was more sharply curved and can easily be recognized as the modern question mark. (See, for example, (1496) printed by Aldo Manuzio in Venice.) In 1598, the English term point of interrogation is attested in an Italian–English dictionary by John Florio. In the 1850s, the term question mark is attested: Scope In English, the question mark typically occurs at the end of a sentence, where it replaces the full stop (period). However, the question mark may also occur at the end of a clause or phrase, where it replaces the comma : "Is it good in form? style? meaning?" or: "Showing off for him, for all of them, not out of hubris—hubris? him? what did he have to be hubrid about?—but from mood and nervousness." — Stanley Elkin. This is quite common in Spanish, where the use of bracketing question marks explicitly indicates the scope of interrogation. ('In case you cannot go with them, would you like to go with us?') A question mark may also appear immediately after questionable data, such as dates: Genghis Khan (1162?–1227) In other languages and scripts Opening and closing question marks in Spanish In Spanish, since the second edition of the of the in 1754, interrogatives require both opening and closing question marks. An interrogative sentence, clause, or phrase begins with an inverted question mark and ends with the question mark , as in: – 'She asks me, "What time is it? Question marks must always be matched, but to mark uncertainty rather than actual interrogation omitting the opening one is allowed, although discouraged: is preferred in Spanish over The omission of the opening mark is common in informal writing, but is considered an error. The one exception is when the question mark is matched with an exclamation mark, as in: – 'Who do you think you are?!' (The order may also be reversed, opening with a question mark and closing with an exclamation mark.) 
Nonetheless, even here the recommends matching punctuation: The opening question mark in Unicode is . In other languages of Spain Galician also uses the inverted opening question mark, though usually only in long sentences or in cases that would otherwise be ambiguous. Basque and Catalan, however, use only the terminal question mark. Solomon Islands Pidgin In Solomon Islands Pidgin, the question can be between question marks since, in yes/no questions, the intonation can be the only difference. ('Solomon Islands is a great country, isn't it?') Armenian question mark In Armenian, the question mark is a diacritic that takes the form of an open circle and is placed over the stressed vowel of the question word. It is defined in Unicode at . Greek question mark The Greek question mark () looks like . It appeared around the same time as the Latin one, in the 8th century. It was adopted by Church Slavonic and eventually settled on a form essentially similar to the Latin semicolon. In Unicode, it is separately encoded as , but the similarity is so great that the code point is normalised to , making the marks identical in practice. Mirrored question mark in right-to-left scripts In Arabic and other languages that use Arabic script such as Persian, Urdu and Uyghur (Arabic form), which are written from right to left, the question mark is mirrored right-to-left from the Latin question mark. In Unicode, two encodings are available: and . Some browsers may display the character in the previous sentence as a forward question mark due to font or text directionality issues. In addition, the Thaana script in Dhivehi uses the mirrored question mark: މަރުހަބާ؟ The Arabic question mark is also used in some other right-to-left scripts: N'Ko, Syriac and Adlam. Adlam also has : , 'No?'. Hebrew script is also written right-to-left, but it uses a question mark that appears on the page in the same orientation as the left-to-right question mark (e.g. ). Fullwidth question mark in East Asian languages The question mark is also used in modern writing in Chinese and, to a lesser extent, Japanese. Usually, it is written as fullwidth form in Chinese and Japanese, in Unicode: . Fullwidth form is always preferred in official usage. In Korean language, however, halfwidth is used. Japanese has an interrogative particle, か (ka), which functions grammatically like a question mark. Therefore, the question mark is not historically used Japanese, and still not officially sanctioned for use in government publications or school textbooks, but its popularity has been gradually increasing among younger people. Where official usage is , some people would now informally write to express "It may be over"; the question mark here adds a nuance of uncertainty to the sentence rather than turning it into a question. Chinese also has a spoken indicator of questions, which is 吗 (ma). However, the question mark should always be used after when asking questions. In other scripts Some other scripts have a specific question mark: , and Stylistic variants French orthography specifies a narrow non-breaking space before the question mark. (e.g., ""); in English orthography, no space appears in front of the question mark (e.g. "What would you like to drink?"). Typological variants of ? The rhetorical question mark or percontation point (see Irony punctuation) was invented by Henry Denham in the 1580s and was used at the end of a rhetorical question; however, it became obsolete in the 17th century. 
It was the reverse of an ordinary question mark, so that instead of the main opening pointing back into the sentence, it opened away from it. This character can be represented using . Bracketed question marks can be used for rhetorical questions, for example , in informal contexts such as closed captioning. The question mark can also be used as a meta-sign to signal uncertainty regarding what precedes it. It is usually put between brackets: . The uncertainty may concern either a superficial level (such as unsure spelling), or a deeper truth (real meaning). In typography, some other variants and combinations are available: "⁇," "⁈," and "⁉," are usually used for chess annotation symbols; the interrobang, "‽," is used to combine the functions of the question mark and the exclamation mark, superposing these two marks. Unicode makes available these variants: with an emoji variation selector Computing In computing, the question mark character is represented by ASCII code 63 (0x3F hexadecimal), and is located at Unicode code-point . The full-width (double-byte) equivalent (?), is located at code-point . The inverted question mark (¿) corresponds to Unicode code-point , and can be accessed from the keyboard in Microsoft Windows on the default US layout by holding down the Alt and typing either 1 6 8 (ANSI) or 0 1 9 1 (Unicode) on the numeric keypad. In GNOME applications on Linux operating systems, it can be entered by typing the hexadecimal Unicode character (minus leading zeros) while holding down both Ctrl and Shift, i.e.: Ctrl Shift B F. In recent XFree86 and X.Org incarnations of the X Window System, it can be accessed as a compose sequence of two straight question marks, i.e. pressing Compose ? ? yields ¿. In classic Mac OS and Mac OS X (macOS), the key combination Option Shift ? produces an inverted question mark. In shell and scripting languages, the question mark is often utilized as a wildcard character: a symbol that can be used to substitute for any other character or characters in a string. In particular, filename globbing uses "?" as a substitute for any one character, as opposed to the asterisk, "*", which matches zero or more characters in a string. The question mark is used in ASCII renderings of the International Phonetic Alphabet, such as SAMPA, in place of the glottal stop symbol, , (which resembles "?" without the dot), and corresponds to Unicode code point . In computer programming, the symbol "?" has a special meaning in many programming languages. In C-descended languages, ? is part of the ?: operator, which is used to evaluate simple boolean conditions. In C# 2.0, the ? modifier is used to handle nullable data types and ?? is the null coalescing operator. In the POSIX syntax for regular expressions, such as that used in Perl and Python, ? stands for "zero or one instance of the previous subexpression", i.e. an optional element. It can also make a quantifier like {x,y}, + or * match as few characters as possible, making it lazy, e.g. /^.*?px/ will match the substring 165px in 165px 17px instead of matching 165px 17px. In certain implementations of the BASIC programming language, the ? character may be used as a shorthand for the "print" function; in others (notably the BBC BASIC family), ? is used to address a single-byte memory location. In OCaml, the question mark precedes the label for an optional parameter. In Scheme, as a convention, symbol names ending in ? are used for predicates, such as odd?, null?, and eq?. Similarly, in Ruby, method names ending in ? 
are used for predicates. In Swift a type followed by ? denotes an option type; ? is also used in "optional chaining", where if an option value is nil, it ignores the following operations. Similarly, in Kotlin, a type followed by ? is nullable and functions similar to option chaining are supported. In APL, ? generates random numbers or a random subset of indices. In Rust, a ? suffix on a function or method call indicates error handling. In SPARQL, the question mark is used to introduce variable names, such as ?name. In MUMPS, it is the pattern match operator. In many Web browsers and other computer programs, when converting text between encodings, it may not be possible to map some characters into the target character set. In this situation it is common to replace each unmappable character with a question mark ?, inverted question mark ¿, or the Unicode replacement character, usually rendered as a white question mark in a black diamond: . This commonly occurs for apostrophes and quotation marks when they are written with software that uses its own proprietary non-standard code for these characters, such as Microsoft Office's "smart quotes". The generic URL syntax allows for a query string to be appended to a resource location in a Web address so that additional information can be passed to a script; the query mark, ?, is used to indicate the start of a query string. A query string is usually made up of a number of different field/value pairs, each separated by the ampersand symbol, &, as seen in this URL: http://www.example.com/search.php?query=testing&database=English Here, a script on the page search.php on the server www.example.com is to provide a response to the query string containing the pairs query=testing and database=English. Games In algebraic chess notation, some chess punctuation conventions include: "?" denotes a bad move, "??" a blunder, "?!" a dubious move, and "!?" an interesting move. In Scrabble, a question mark indicates a blank tile. Linguistics In most areas of linguistics, but especially in syntax, a question mark in front of a word, phrase or sentence indicates that the form in question is strongly dispreferred, "questionable" or "strange", but not outright ungrammatical. (The asterisk is used to indicate outright ungrammaticality.) Other sources go further and use several symbols (e.g. the question mark and the asterisk plus or the degree symbol ) to indicate gradations or a continuum of acceptability. Yet others use double question marks to indicate a degree of strangeness between those indicated by a single question mark and that indicated by the combination of question mark and asterisk. Mathematics and formal logic In mathematics, "?" commonly denotes Minkowski's question mark function. In linear logic, the question mark denotes one of the exponential modalities that control weakening and contraction. When placed above the relational symbol in an equation or inequality, a question-mark annotation means that the stated relation is "questioned". This can be used to ask whether the relation might be true or to point out the relation's possible invalidity. Medicine A question mark is used in English medical notes to suggest a possible diagnosis. It facilitates the recording of a doctor's impressions regarding a patient's symptoms and signs. For example, for a patient presenting with left lower abdominal pain, a differential diagnosis might include ?diverticulitis (read as "query diverticulitis"). 
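As an illustration of several of the programming uses of "?" described under Computing above, the following short TypeScript sketch (all identifiers invented for the example) shows the conditional operator, an optional parameter, optional chaining, the nullish coalescing operator, and a lazy regular-expression quantifier:

    // A minimal sketch; TypeScript is used here only as a convenient example language.
    function greet(user?: { name?: string }): string {
      // "?." is optional chaining and "??" is the nullish coalescing operator.
      const name = user?.name ?? "anonymous";
      // "?:" is the conditional (ternary) operator inherited from C.
      return name.length > 0 ? "Hello, " + name : "Hello";
    }
    // A lazy quantifier: ".*?" matches as few characters as possible, so the
    // pattern stops at the first "px" rather than running to the last one.
    const m = "165px 17px".match(/^.*?px/);
    console.log(greet({ name: "Ada" })); // Hello, Ada
    console.log(greet());                // Hello, anonymous
    console.log(m?.[0]);                 // 165px

Without the "?", the greedy pattern /^.*px/ would instead match the whole string "165px 17px", as noted in the discussion of regular expressions above.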
See also Cosmic "Question Mark" ('upspeak', 'uptalk') List of typographical symbols and punctuation marks Notes References Bibliography External links – provides an overview of question mark usage, and the differences between direct, indirect, and rhetorical questions. Interrogative words and phrases Punctuation Typographical symbols
Question mark
[ "Mathematics" ]
3,229
[ "Symbols", "Typographical symbols" ]
59,349
https://en.wikipedia.org/wiki/Quotation%20marks%20in%20English
In English writing, quotation marks or inverted commas, also known informally as quotes, talking marks, speech marks, quote marks, quotemarks or speechmarks, are punctuation marks placed on either side of a word or phrase in order to identify it as a quotation, direct speech or a literal title or name. Quotation marks may be used to indicate that the meaning of the word or phrase they surround should be taken to be different from (or, at least, a modification of) that typically associated with it, and are often used in this way to express irony (for example, in the sentence 'The lunch lady plopped a glob of "food" onto my tray.' the quotation marks around the word food show it is being called that ironically). They are also sometimes used to emphasise a word or phrase, although this is usually considered incorrect. Quotation marks are written as a pair of opening and closing marks in either of two styles: or . Opening and closing quotation marks may be identical in form (called neutral, vertical, straight, typewriter, or "dumb" quotation marks), or may be distinctly left-handed and right-handed (typographic or, colloquially, curly quotation marks); . Typographic quotation marks are usually used in manuscript and typeset text. Because typewriter and computer keyboards lack keys to directly enter typographic quotation marks, much of typed writing has neutral quotation marks. Some computer software has the feature often called "smart quotes" which can, sometimes imperfectly, convert neutral quotation marks to typographic ones. The typographic closing double quotation mark and the neutral double quotation mark are similar to and sometimes stand in for the ditto mark and the double prime symbol. Likewise, the typographic opening single quotation mark is sometimes used to represent the ʻokina while either the typographic closing single quotation mark or the neutral single quotation mark may represent the prime symbol. Characters with different meanings are typically given different visual appearance in typefaces that recognize these distinctions, and they each have different Unicode code points. Despite being semantically different, the typographic closing single quotation mark and the typographic apostrophe have the same visual appearance and code point (U+2019), as do the neutral single quote and typewriter apostrophe (U+0027). (Despite the different code points, the curved and straight versions are sometimes considered multiple glyphs of the same character.) History In the first centuries of typesetting, quotations were distinguished merely by indicating the speaker, and this can still be seen in some editions of the Christian Bible. During the Renaissance, quotations were distinguished by setting in a typeface contrasting with the main body text (often italic type with roman, or the other way around). Long quotations were also set this way, at full size and full measure. Quotation marks were first cut in metal type during the middle of the sixteenth century, and were used copiously by some printers by the seventeenth. In some Baroque and Romantic-period books, they would be repeated at the beginning of every line of a long quotation. When this practice was abandoned, the empty margin remained, leaving the modern form of indented block quotation. In Early Modern English, quotation marks were used to denote pithy comments. They were used to quote direct speech as early as the late sixteenth century, and this practice became more common over time. 
Usage Quotations and speech Single or double quotation marks denote either speech or a quotation. Double quotes are preferred in the United States, and also tend to be preferred in Australia (though the Australian Government prefers single quotes) and Canada. Single quotes are more usual in the United Kingdom, Ireland and South Africa, though double quotes are also common there, especially in . In New Zealand, both styles are used. A publisher's or author's style may take precedence over regional general preferences. The important idea is that the style of opening and closing quotation marks must be matched: For speech within speech, the other style is used as inner quotation marks: Sometimes quotations are nested in more levels than inner and outer quotation. Nesting levels up to five can be found in the Christian Bible. In these cases, questions arise about the form (and names) of the quotation marks to be used. The most common way is to simply alternate between the two forms, thus: If such a passage is further quoted in another publication, then all of their forms have to be shifted up by one level. In many cases, quotations that span multiple paragraphs are set as block quotations, and thus do not require quotation marks. However, quotation marks are used for multiple-paragraph quotations in some cases, especially in narratives, where the convention in English is to give opening quotation marks to the first and each subsequent paragraph, using closing quotation marks only for the final paragraph of the quotation, as in the following example from Pride and Prejudice: As noted above, in some older texts, the quotation mark is repeated every line, rather than every paragraph. When quoted text is interrupted, such as with the phrase he said, a closing quotation mark is used before the interruption, and an opening quotation mark after. Commas are also often used before and after the interruption, more often for quotations of speech than for quotations of text: Quotation marks are not used for indirect speech. This is because indirect speech can be a paraphrase; it is not a direct quote, and in the course of any composition, it is important to document when one is using a quotation versus when one is just giving content, which may be paraphrased, and which could be open to interpretation. For example, if Hal says: "All systems are functional", then, in indirect speech: Irony Another common use of quotation marks is to indicate or call attention to ironic, dubious, or non-standard words: Quotes indicating verbal irony, or other special use, are sometimes called scare quotes. They are sometimes gestured in oral speech using air quotes, or indicated in speech with a tone change or by replacement with supposed[ly] or so-called. Signalling unusual usage Quotation marks are also used to indicate that the writer realises that a word is not being used in its current commonly accepted sense: In addition to conveying a neutral attitude and to call attention to a neologism, or slang, or special terminology (also known as jargon), quoting can also indicate words or phrases that are descriptive but unusual, colloquial, folksy, startling, humorous, metaphoric, or contain a pun: Dawkins's concept of a meme could be described as an "evolving idea". 
People also use quotation marks in this way to distance the writer from the terminology in question so as not to be associated with it, for example to indicate that a quoted word is not official terminology, or that a quoted phrase presupposes things that the author does not necessarily agree with; or to indicate special terminology that should be identified for accuracy's sake as someone else's terminology, as when a term (particularly a controversial term) pre-dates the writer or represents the views of someone else, perhaps without judgement (contrast this neutrally distancing quoting to the negative use of scare quotes). The Chicago Manual of Style, 17th edition (2017), acknowledges this type of use but, in section 7.57, cautions against its overuse: "Quotation marks are often used to alert readers that a term is used in a nonstandard (or slang), ironic, or other special sense .... [T]hey imply 'This is not my term,' or 'This is not how the term is usually applied.' Like any such device, scare quotes lose their force and irritate readers if overused." Use–mention distinction Either quotation marks or italic type can emphasise that an instance of a word refers to the word itself rather than its associated concept. In linguistics Precise writing about language often uses italics for the word itself and single quotation marks for a gloss, with the two not separated by a comma or other punctuation, and with strictly logical quotation around the gloss – extraneous terminal punctuation outside the quotation marks – even in North American publications, which might otherwise prefer them inside: Titles of artistic works Quotation marks, rather than italics, are generally used for the titles of shorter works. Whether these are single or double depends on the context; however, many styles, especially for poetry, prefer the use of single quotation marks. Short fiction, poetry, etc.: Arthur C. Clarke's "The Sentinel" Book chapters: The first chapter of 3001: The Final Odyssey is "Comet Cowboy" Articles in books, magazines, journals, etc.: "Extra-Terrestrial Relays", Wireless World, October 1945 Album tracks, singles, etc.: David Bowie's "Space Oddity" As a rule, the title of a whole publication is italicised (or, in typewritten text, underlined), whereas the titles of minor works within or a subset of the larger publication (such as poems, short stories, named chapters, journal papers, newspaper articles, TV show episodes, video game levels, editorial sections of websites, etc.) are written with quotation marks. Nicknames and false titles Quotation marks can also set off a nickname embedded in an actual name, or a false or ironic title embedded in an actual title; for example, Nat "King" Cole, Frank "Chairman of the Board" Sinatra, or Simone Rizzo "Sam the Plumber" DeCavalcante. Nonstandard usage Quotes are sometimes used for emphasis in lieu of underlining or italics, most commonly on signs or placards. This usage can be confused with ironic or altered-usage quotation, sometimes with unintended humor. For example, For sale: "fresh" fish, "fresh" oysters, could be construed to imply that fresh is not used with its everyday meaning, or indeed to indicate that the fish or oysters are anything but fresh. As another example, Cashiers' desks open until noon for your "convenience" could be interpreted to mean that the convenience was for the bank employees, not the customers. 
Order of punctuation With regard to quotation marks adjacent to periods and commas, there are two styles of punctuation in widespread use. These two styles are most commonly referred to as "American" and "British", or sometimes "typesetters' quotation" and "logical quotation". Both systems have the same rules regarding question marks, exclamation points, colons, and semicolons. However, they differ in the treatment of periods and commas. In all major forms of English, question marks, exclamation marks, semicolons, and any other punctuation (with the possible exceptions of periods and commas, as explained in the sections below) are placed inside or outside the closing quotation mark depending on whether they are part of the quoted material. A convention is the use of square brackets to indicate content between the quotation marks that has been modified from, or was not present in, the original material. British style The prevailing style in the United Kingdom, called British style, logical quotation, or logical punctuation, is to include within quotation marks only those punctuation marks that appeared in the original quoted material and that fit with the sense of the quotation, but otherwise to place punctuation outside the closing quotation marks. Fowler's A Dictionary of Modern English Usage provides an early example of the rule: "All signs of punctuation used with words in quotation marks must be placed according to the sense." When dealing with words-as-words, short-form works and sentence fragments, this style places periods and commas outside the quotation marks: When dealing with direct speech, according to the British style guide Butcher's Copy-editing, if a quotation is broken by words of the main sentence, and then resumed, the punctuation before the break should follow the closing quote unless it forms part of the quotation. An exception may be made when writing fiction, where the first comma may be placed before the first closing quote. In non-fiction, some British publishers may permit placing punctuation that is not part of the person's speech inside the quotation marks but prefer that it be placed outside. Periods and commas that are part of the person's speech are permitted inside the quotation marks regardless of whether the material is fiction. Hart's Rules and the Oxford Dictionary for Writers and Editors call the British style "new" quoting. It is also similar to the use of quotation marks in many other languages (including Portuguese, Spanish, French, Italian, Catalan, Dutch and German). A few US professional societies whose professions frequently employ various non-word characters, such as chemistry and computer programming, use the British form in their style guides (see ACS Style Guide). According to the Jargon File from 1983, American hackers (members of a subculture of enthusiastic programmers) switched to what they later discovered to be the British quotation system because placing a period inside a quotation mark can change the meaning of data strings that are meant to be typed character-for-character (see the illustration at the end of this section). Some American style guides specific to certain specialties also prefer the British style. For example, the journal Language of the Linguistic Society of America requires that the closing quotation mark precede the period or comma unless that period or comma is "a necessary part of the quoted matter". The websites Wikipedia and Pitchfork use logical punctuation.
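For instance (an invented example, not drawn from the Jargon File or the cited style guides), an instruction written with logical punctuation as: Type "cd /tmp". keeps the full stop outside the quotation marks, so the quoted text is exactly what should be typed. Written in the typesetters' style as: Type "cd /tmp." the same instruction moves the full stop inside the quotation marks, and a reader who copies the string character for character enters a trailing period, turning the command into a reference to a different, probably nonexistent, directory.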
American style In the United States, the prevailing style is called American style, whereby commas and periods are almost always placed inside closing quotation marks. This is done because it results in closer spacing and what is judged to be a cleaner appearance. The American style is used by most newspapers, publishing houses, and style guides in the United States and, to a lesser extent, Canada as well. When dealing with words-as-words, short-form works, and sentence fragments, standard American style places periods and commas inside the quotation marks: This style also places periods and commas inside the quotation marks when dealing with direct speech, regardless of whether the work is fiction or non-fiction: Nevertheless, many American style guides explicitly permit periods and commas outside the quotation marks when the presence of the punctuation mark inside the quotation marks leads to ambiguity, such as when describing keyboard input, as in the following example: The American style is recommended by the Modern Language Association's MLA Style Manual, the American Psychological Association's APA Publication Manual, the University of Chicago's The Chicago Manual of Style, the American Institute of Physics's AIP Style Manual, the American Medical Association's AMA Manual of Style, the American Political Science Association's APSA Style Manual, the Associated Press' The AP Guide to Punctuation, and the Canadian Public Works' The Canadian Style. This style is also used in some British news and fiction. Ending the sentence In both major styles, regardless of placement, only one end mark (?, !, or .) can end a sentence. Only the period, however, may not end a quoted sentence when it does not also end the enclosing sentence, except for literal text: With narration of direct speech, both styles retain punctuation inside the quotation marks, with a full stop changing into a comma if followed by attributive matter, also known as a speech tag or annunciatory clause. Americans tend to apply quotations when signifying doubt of veracity (sarcastically or seriously), to imply another meaning to a word or to imply a cynical take on a paraphrased quotation, without punctuation at all. Typographical considerations Primary quotations versus secondary quotations Primary quotations are orthographically distinguished from secondary quotations that may be nested within a primary quotation. British English often uses single quotation marks to identify the outermost text of a primary quotation versus double quotation marks for inner, nested quotations. By contrast, American English typically uses double quotation marks to identify the outermost text of a primary quotation versus single quotation marks for inner, nested quotations. British usage does vary, with some authoritative sources such as The Economist and The Times recommending the same usage as in the US, whereas other authoritative sources, such as The King's English, Fowler's, and New Hart's Rules, recommend single quotation marks. In journals and newspapers, quotation mark double/single use often depends on the individual publication's house style. Spacing In English, when a quotation follows other writing on a line of text, a space precedes the opening quotation mark unless the preceding symbol, such as an em dash, requires that there be no space. 
When a quotation is followed by other writing on a line of text, a space follows the closing quotation mark unless it is immediately followed by other punctuation within the sentence, such as a colon or closing punctuation. (These exceptions are ignored by some Asian computer systems that systematically display quotation marks with the included spacing, as this spacing is part of the fixed-width characters.) There is generally no space between an opening quotation mark and the following word, or a closing quotation mark and the preceding word. When a double quotation mark or a single quotation mark immediately follows the other, proper spacing for legibility may suggest that a thin space (&thinsp;) or larger non-breaking space (&nbsp;) be inserted. This is not common practice in mainstream publishing, which will generally use more precise kerning. It is more common in online writing, although using CSS to create the spacing by kerning is more semantically appropriate in Web typography than inserting extraneous spacing characters. Non-language-related usage Straight quotation marks (or italicised straight quotation marks) are often used to approximate the prime and double prime, e.g. when signifying feet and inches or arcminutes and arcseconds. For instance, 5 feet and 6 inches is often written 5' 6"; and 40 degrees, 20 arcminutes, and 50 arcseconds is written 40° 20' 50". When available, however, primes should be used instead (e.g. 5′ 6″, and 40° 20′ 50″). Prime and double prime are not present in most code pages, including ASCII and Latin-1, but are present in Unicode, as characters U+2032 (′) and U+2033 (″). The HTML character entity references are &prime; and &Prime;, respectively. Double quotation marks, or pairs of single ones, also represent the ditto mark. Straight single and double quotation marks are used in most programming languages to delimit strings or literal characters, collectively known as string literals. In some languages (e.g. Pascal) only one type is allowed, in some (e.g. C and its derivatives) both are used with different meanings and in others (e.g. Python) both are used interchangeably. In some languages, if it is desired to include the same quotation marks used to delimit a string inside the string, the quotation marks are doubled. For example, to represent the string eat 'hot' dogs in Pascal one uses 'eat ''hot'' dogs'. Other languages use an escape character, often the backslash, as in 'eat \'hot\' dogs'. In the TeX typesetting program, left double quotes are produced by typing two back-ticks (``) and right double quotes by typing two apostrophes (''). This is a continuation of a typewriter tradition of using ticks for opening quotation marks. Typing quotation marks on a computer keyboard Standard English computer keyboard layouts inherited the single and double straight quotation marks from the typewriter (the single quotation mark also doubling as an apostrophe), and they do not include individual keys for left-handed and right-handed typographic quotation marks. In character encoding terms, these characters are labeled unidirectional. However, most computer text-editing programs provide a "smart quotes" feature to automatically convert straight quotation marks into bidirectional punctuation, though sometimes imperfectly. Generally, this smart quote feature is enabled by default, and it can be turned off in an "options" or "preferences" dialog. Some websites do not allow typographic quotation marks or apostrophes in posts.
One can skirt these limitations, however, by using the HTML character codes or entities or the other key combinations in the following table. In Windows, AutoHotkey scripts can be used to assign simpler key combinations to opening and closing quotation marks. Smart quotes To make typographic quotation marks easier to enter, publishing software often automatically converts typewriter quotation marks (and apostrophes) to typographic form during text entry (with or without the user being aware of it). Out-of-the-box behavior on macOS and iOS is to make this conversion. These are known as smart quotes (). Straight quotation marks are also retronymically called dumb quotes'' (). The method for producing smart quotes may be based solely on the character preceding the mark. If it is a space or another of a set of hard-coded characters or if the mark begins a line, the mark will be rendered as an opening quote; if not, it will be rendered as a closing quote or apostrophe. This method can cause errors, especially for contractions that start with an apostrophe or text with nested quotations: In Windows, if it is necessary to follow a space with a closing quotation mark when Smart Quotes is in effect, it is usually sufficient to input the character using the Alt code shown above rather than typing or . See also Guillemet, a quotation mark used in a number of languages International variation in quotation marks Modifier letter double apostrophe ʻOkina Typewriter conventions Western Latin character sets (computing) References External links Curling Quotes in HTML, SGML, and XML Quotation marks in the Unicode Common Locale Data Repository ASCII and Unicode quotation marks discussion of the problem of ASCII grave accent characters used as left quotation marks Commonly confused characters Quotation mark Punctuation of English Typographical symbols
Quotation marks in English
[ "Mathematics" ]
4,713
[ "Symbols", "Typographical symbols" ]
59,358
https://en.wikipedia.org/wiki/Mycorrhiza
A mycorrhiza (; , mycorrhiza, or mycorrhizas) is a symbiotic association between a fungus and a plant. The term mycorrhiza refers to the role of the fungus in the plant's rhizosphere, the plant root system and its surroundings. Mycorrhizae play important roles in plant nutrition, soil biology, and soil chemistry. In a mycorrhizal association, the fungus colonizes the host plant's root tissues, either intracellularly as in arbuscular mycorrhizal fungi, or extracellularly as in ectomycorrhizal fungi. The association is normally mutualistic. In particular species, or in particular circumstances, mycorrhizae may have a parasitic association with host plants. Definition A mycorrhiza is a symbiotic association between a green plant and a fungus. The plant makes organic molecules by photosynthesis and supplies them to the fungus in the form of sugars or lipids, while the fungus supplies the plant with water and mineral nutrients, such as phosphorus, taken from the soil. Mycorrhizas are located in the roots of vascular plants, but mycorrhiza-like associations also occur in bryophytes and there is fossil evidence that early land plants that lacked roots formed arbuscular mycorrhizal associations. Most plant species form mycorrhizal associations, though some families like Brassicaceae and Chenopodiaceae cannot. Different forms for the association are detailed in the next section. The most common is the arbuscular type that is present in 70% of plant species, including many crop plants such as cereals and legumes. Evolution Fossil and genetic evidence indicate that mycorrhizae are ancient, potentially as old as the terrestrialization of plants. Genetic evidence indicates that all land plants share a single common ancestor, which appears to have quickly adopted mycorrhizal symbiosis, and research suggests that proto-mycorrhizal fungi were a key factor enabling plant terrestrialization. The 400 million year old Rhynie chert contains an assemblage of fossil plants preserved in sufficient detail that arbuscular mycorrhizae have been observed in the stems of Aglaophyton major, giving a lower bound for how late mycorrhizal symbiosis may have developed. Ectomycorrhizae developed substantially later, during the Jurassic period, while most other modern mycorrhizal families, including orchid and ericoid mycorrhizae, date to the period of angiosperm radiation in the Cretaceous period. There is genetic evidence that the symbiosis between legumes and nitrogen-fixing bacteria is an extension of mycorrhizal symbiosis. The modern distribution of mycorrhizal fungi appears to reflect an increasing complexity and competition in root morphology associated with the dominance of angiosperms in the Cenozoic Era, characterized by complex ecological dynamics between species. Types The mycorrhizal lifestyle has independently convergently evolved multiple times in the history of Earth. There are multiple ways to categorize mycorrhizal symbiosis. One major categorization is the division between ectomycorrhizas and endomycorrhizas. The two types are differentiated by the fact that the hyphae of ectomycorrhizal fungi do not penetrate individual cells within the root, while the hyphae of endomycorrhizal fungi penetrate the cell wall and invaginate the cell membrane. Similar symbiotic relationships Some forms of plant-fungal symbiosis are similar to mycorrhizae, but considered distinct. One example is fungal endophytes. Endophytes are defined as organisms that can live within plant cells without causing harm to the plant. 
They are distinguishable from mycorrhizal fungi by the absence of nutrient-transferring structures for bringing in nutrients from outside the plant. Some lineages of mycorrhizal fungi may have evolved from endophytes into mycorrhizal fungi, and some fungi can live as mycorrhizae or as endophytes. Ectomycorrhiza Ectomycorrhizae are distinct in that they do not penetrate into plant cells, but instead form a structure called a Hartig net that penetrates between cells. Ectomycorrhizas consist of a hyphal sheath, or mantle, covering the root tip and the Hartig net of hyphae surrounding the plant cells within the root cortex. In some cases the hyphae may also penetrate the plant cells, in which case the mycorrhiza is called an endomycorrhiza. Outside the root, ectomycorrhizal extramatrical mycelium forms an extensive network within the soil and leaf litter. Other forms of mycorrhizae, including arbuscular, ericoid, arbutoid, monotropoid, and orchid mycorrhizas, are considered endomycorrhizae. Ectomycorrhizas, or EcM, are symbiotic associations between the roots of around 10% of plant families, mostly woody plants including the birch, dipterocarp, eucalyptus, oak, pine, and rose families, orchids, and fungi belonging to the Basidiomycota, Ascomycota, and Zygomycota. Ectomycorrhizae associate with relatively few plant species, only about 2% of plant species on Earth, but the species they associate with are mostly trees and woody plants that are highly dominant in their ecosystems, meaning plants in ectomycorrhizal relationships make up a large proportion of plant biomass. Some EcM fungi, such as many Leccinum and Suillus, are symbiotic with only one particular genus of plant, while other fungi, such as the Amanita, are generalists that form mycorrhizas with many different plants. An individual tree may have 15 or more different fungal EcM partners at one time. While the diversity of plants involved in EcM is low, the diversity of fungi involved in EcM is high. Thousands of ectomycorrhizal fungal species exist, hosted in over 200 genera. A recent study has conservatively estimated global ectomycorrhizal fungal species richness at approximately 7750 species, although, on the basis of estimates of knowns and unknowns in macromycete diversity, a final estimate of ECM species richness would probably be between 20,000 and 25,000. Ectomycorrhizal fungi evolved independently from saprotrophic ancestors many times in the group's history. Nutrients can be shown to move between different plants through the fungal network. Carbon has been shown to move from paper birch seedlings into adjacent Douglas-fir seedlings, although not conclusively through a common mycorrhizal network, thereby promoting succession in ecosystems. The ectomycorrhizal fungus Laccaria bicolor has been found to lure and kill springtails to obtain nitrogen, some of which may then be transferred to the mycorrhizal host plant. In a study by Klironomos and Hart, Eastern White Pine inoculated with L. bicolor was able to derive up to 25% of its nitrogen from springtails. When compared with non-mycorrhizal fine roots, ectomycorrhizae may contain very high concentrations of trace elements, including toxic metals (cadmium, silver) or chlorine. The first genomic sequence for a representative of symbiotic fungi, the ectomycorrhizal basidiomycete L. bicolor, was published in 2008. An expansion of several multigene families occurred in this fungus, suggesting that adaptation to symbiosis proceeded by gene duplication. 
Within lineage-specific genes those coding for symbiosis-regulated secreted proteins showed an up-regulated expression in ectomycorrhizal root tips suggesting a role in the partner communication. L. bicolor is lacking enzymes involved in the degradation of plant cell wall components (cellulose, hemicellulose, pectins and pectates), preventing the symbiont from degrading host cells during the root colonisation. By contrast, L. bicolor possesses expanded multigene families associated with hydrolysis of bacterial and microfauna polysaccharides and proteins. This genome analysis revealed the dual saprotrophic and biotrophic lifestyle of the mycorrhizal fungus that enables it to grow within both soil and living plant roots. Since then, the genomes of many other ectomycorrhizal fungal species have been sequenced further expanding the study of gene families and evolution in these organisms. Arbutoid mycorrhiza This type of mycorrhiza involves plants of the Ericaceae subfamily Arbutoideae. It is however different from ericoid mycorrhiza and resembles ectomycorrhiza, both functionally and in terms of the fungi involved. It differs from ectomycorrhiza in that some hyphae actually penetrate into the root cells, making this type of mycorrhiza an ectendomycorrhiza. Arbuscular mycorrhiza Arbuscular mycorrhizas, (formerly known as vesicular-arbuscular mycorrhizas), have hyphae that penetrate plant cells, producing branching, tree-like structures called arbuscules within the plant cells for nutrient exchange. Often, balloon-like storage structures, termed vesicles, are also produced. In this interaction, fungal hyphae do not in fact penetrate the protoplast (i.e. the interior of the cell), but invaginate the cell membrane, creating a so-called peri-arbuscular membrane. The structure of the arbuscules greatly increases the contact surface area between the hypha and the host cell cytoplasm to facilitate the transfer of nutrients between them. Arbuscular mycorrhizas are obligate biotrophs, meaning that they depend upon the plant host for both growth and reproduction; they have lost the ability to sustain themselves by decomposing dead plant material. Twenty percent of the photosynthetic products made by the plant host are consumed by the fungi, the transfer of carbon from the terrestrial host plant is then exchanged by equal amounts of phosphate from the fungi to the plant host. Contrasting with the pattern seen in ectomycorrhizae, the species diversity of AMFs is very low, but the diversity of plant hosts is very high; an estimated 78% of all plant species associate with AMFs. Arbuscular mycorrhizas are formed only by fungi in the division Glomeromycota. Fossil evidence and DNA sequence analysis suggest that this mutualism appeared 400-460 million years ago, when the first plants were colonizing land. Arbuscular mycorrhizas are found in 85% of all plant families, and occur in many crop species. The hyphae of arbuscular mycorrhizal fungi produce the glycoprotein glomalin, which may be one of the major stores of carbon in the soil. Arbuscular mycorrhizal fungi have (possibly) been asexual for many millions of years and, unusually, individuals can contain many genetically different nuclei (a phenomenon called heterokaryosis). Mucoromycotina fine root endophytes Mycorrhizal fungi belonging to Mucoromycotina, known as “fine root endophytes" (MFREs), were mistakenly identified as arbuscular mycorrhizal fungi until recently. While similar to AMF, MFREs are from subphylum Mucoromycotina instead of Glomeromycotina. 
Their morphology when colonizing a plant root is very similar to AMF, but they form fine textured hyphae. Effects of MFREs may have been mistakenly attributed to AMFs due to confusion between the two, complicated by the fact that AMFs and MFREs often colonize the same hosts simultaneously. Unlike AMFs, they appear capable of surviving without a host. This group of mycorrhizal fungi is little understood, but appears to prefer wet, acidic soils and forms symbiotic relationships with liverworts, hornworts, lycophytes, and angiosperms. Ericoid mycorrhiza Ericoid mycorrhizae, or ErMs, involve only plants in Ericales and are the most recently evolved of the major mycorrhizal relationships. Plants that form ericoid mycorrhizae are mostly woody understory shrubs; hosts include blueberries, bilberries, cranberries, mountain laurels, rhododendrons, heather, neinei, and giant grass tree. ErMs are most common in boreal forests, but are found in two-thirds of all forests on Earth. Ericoid mycorrhizal fungi belong to several different lineages of fungi. Some species can live as endophytes entirely within plant cells even within plants outside the Ericales, or live independently as saprotrophs that decompose dead organic matter. This ability to switch between multiple lifestyle types makes ericoid mycorrhizal fungi very adaptable. Plants that participate in these symbioses have specialized roots with no root hairs, which are covered with a layer of epidermal cells that the fungus penetrates into and completely occupies. The fungi have a simple intraradical (growth in cells) phase, consisting of dense coils of hyphae in the outermost layer of root cells. There is no periradical phase and the extraradical phase consists of sparse hyphae that don't extend very far into the surrounding soil. They might form sporocarps (probably in the form of small cups), but their reproductive biology is poorly understood. Plants participating in ericoid mycorrhizal symbioses are found in acidic, nutrient-poor conditions. Whereas AMFs have lost their saprotrophic capabilities, and EcM fungi have significant variation in their ability to produce enzymes needed for a saprotrophic lifestyle, fungi involved in ErMs have fully retained the ability to decompose plant material for sustenance. Some ericoid mycorrhizal fungi have actually expanded their repertoire of enzymes for breaking down organic matter. They can extract nitrogen from cellulose, hemicellulose, lignin, pectin, and chitin. This would increase the benefit they can provide to their plant symbiotic partners. Orchid mycorrhiza All orchids are myco-heterotrophic at some stage during their lifecycle, meaning that they can survive only if they form orchid mycorrhizae. Orchid seeds are so small that they contain no nutrition to sustain the germinating seedling, and instead must gain the energy to grow from their fungal symbiont. The OM relationship is asymmetric; the plant seems to benefit more than the fungus, and some orchids are entirely mycoheterotrophic, lacking chlorophyll for photosynthesis. It is actually unknown whether fully autotrophic orchids that do not receive some of their carbon from fungi exist or not. Like fungi that form ErMs, OM fungi can sometimes live as endophytes or as independent saprotrophs. In the OM symbiosis, hyphae penetrate into the root cells and form pelotons (coils) for nutrient exchange. Monotropoid mycorrhiza This type of mycorrhiza occurs in the subfamily Monotropoideae of the Ericaceae, as well as several genera in the Orchidaceae. 
These plants are heterotrophic or mixotrophic and derive their carbon from the fungus partner. This is thus a non-mutualistic, parasitic type of mycorrhizal symbiosis. Function Mycorrhizal fungi form a mutualistic relationship with the roots of most plant species. In such a relationship, both the plants themselves and those parts of the roots that host the fungi, are said to be mycorrhizal. Relatively few of the mycorrhizal relationships between plant species and fungi have been examined to date, but 95% of the plant families investigated are predominantly mycorrhizal either in the sense that most of their species associate beneficially with mycorrhizae, or are absolutely dependent on mycorrhizae. The Orchidaceae are notorious as a family in which the absence of the correct mycorrhizae is fatal even to germinating seeds. Recent research into ectomycorrhizal plants in boreal forests has indicated that mycorrhizal fungi and plants have a relationship that may be more complex than simply mutualistic. This relationship was noted when mycorrhizal fungi were unexpectedly found to be hoarding nitrogen from plant roots in times of nitrogen scarcity. Researchers argue that some mycorrhizae distribute nutrients based upon the environment with surrounding plants and other mycorrhizae. They go on to explain how this updated model could explain why mycorrhizae do not alleviate plant nitrogen limitation, and why plants can switch abruptly from a mixed strategy with both mycorrhizal and nonmycorrhizal roots to a purely mycorrhizal strategy as soil nitrogen availability declines. It has also been suggested that evolutionary and phylogenetic relationships can explain much more variation in the strength of mycorrhizal mutualisms than ecological factors. Formation To successfully engage in mutualistic symbiotic relationships with other organisms, such as mycorrhizal fungi and any of the thousands of microbes that colonize plants, plants must discriminate between mutualists and pathogens, allowing the mutualists to colonize while activating an immune response towards the pathogens. Plant genomes code for potentially hundreds of receptors for detecting chemical signals from other organisms. Plants dynamically adjust their symbiotic and immune responses, changing their interactions with their symbionts in response to feedbacks detected by the plant. In plants, the mycorrhizal symbiosis is regulated by the common symbiosis signaling pathway (CSSP), a set of genes involved in initiating and maintaining colonization by endosymbiotic fungi and other endosymbionts such as Rhizobia in legumes. The CSSP has origins predating the colonization of land by plants, demonstrating that the co-evolution of plants and arbuscular mycorrhizal fungi is over 500 million years old. In arbuscular mycorrhizal fungi, the presence of strigolactones, a plant hormone, secreted from roots induces fungal spores in the soil to germinate, stimulates their metabolism, growth and branching, and prompts the fungi to release chemical signals the plant can detect. Once the plant and fungus recognize one another as suitable symbionts, the plant activates the common symbiotic signaling pathway, which causes changes in the root tissues that enable the fungus to colonize. Experiments with arbuscular mycorrhizal fungi have identified numerous chemical compounds to be involved in the "chemical dialog" that occurs between the prospective symbionts before symbiosis is begun. 
In plants, almost all plant hormones play a role in initiating or regulating AMF symbiosis, and other chemical compounds are also suspected to have a signaling function. While the signals emitted by the fungi are less understood, it has been shown that chitinaceous molecules known as Myc factors are essential for the formation of arbuscular mycorrhizae. Signals from plants are detected by LysM-containing receptor-like kinases, or LysM-RLKs. AMF genomes also code for potentially hundreds of effector proteins, of which only a few have a proven effect on mycorrhizal symbiosis, but many others likely have a function in communication with plant hosts as well. Many factors are involved in the initiation of mycorrhizal symbiosis, but particularly influential is the plant's need for phosphorus. Experiments involving rice plants with a mutation disabling their ability to detect P starvation show that arbuscular mycorrhizal fungi detection, recruitment and colonization is prompted when the plant detects that it is starved of phosphorus. Nitrogen starvation also plays a role in initiating AMF symbiosis. Mechanisms The mechanisms by which mycorrhizae increase absorption include some that are physical and some that are chemical. Physically, most mycorrhizal mycelia are much smaller in diameter than the smallest root or root hair, and thus can explore soil material that roots and root hairs cannot reach, and provide a larger surface area for absorption. Chemically, the cell membrane chemistry of fungi differs from that of plants. For example, they may secrete organic acids that dissolve or chelate many ions, or release them from minerals by ion exchange. Mycorrhizae are especially beneficial for the plant partner in nutrient-poor soils. Sugar-water/mineral exchange The mycorrhizal mutualistic association provides the fungus with relatively constant and direct access to carbohydrates, such as glucose and sucrose. The carbohydrates are translocated from their source (usually leaves) to root tissue and on to the plant's fungal partners. In return, the plant gains the benefits of the mycelium's higher absorptive capacity for water and mineral nutrients, partly because of the large surface area of fungal hyphae, which are much longer and finer than plant root hairs, and partly because some such fungi can mobilize soil minerals unavailable to the plants' roots. The effect is thus to improve the plant's mineral absorption capabilities. Unaided plant roots may be unable to take up nutrients that are chemically or physically immobilised; examples include phosphate ions and micronutrients such as iron. One form of such immobilization occurs in soil with high clay content, or soils with a strongly basic pH. The mycelium of the mycorrhizal fungus can, however, access many such nutrient sources, and make them available to the plants they colonize. Thus, many plants are able to obtain phosphate without using soil as a source. Another form of immobilisation is when nutrients are locked up in organic matter that is slow to decay, such as wood, and some mycorrhizal fungi act directly as decay organisms, mobilising the nutrients and passing some onto the host plants; for example, in some dystrophic forests, large amounts of phosphate and other nutrients are taken up by mycorrhizal hyphae acting directly on leaf litter, bypassing the need for soil uptake. 
Inga alley cropping, an agroforestry technique proposed as an alternative to slash-and-burn rainforest destruction, relies upon mycorrhiza within the root system of species of Inga to prevent the rain from washing phosphorus out of the soil. In some more complex relationships, mycorrhizal fungi do not just collect immobilised soil nutrients, but connect individual plants together by mycorrhizal networks that transport water, carbon, and other nutrients directly from plant to plant through underground hyphal networks. Suillus tomentosus, a basidiomycete fungus, produces specialized structures known as tuberculate ectomycorrhizae with its plant host, lodgepole pine (Pinus contorta var. latifolia). These structures have been shown to host nitrogen-fixing bacteria which contribute a significant amount of nitrogen and allow the pines to colonize nutrient-poor sites.

Disease, drought and salinity resistance and its correlation to mycorrhizae
Mycorrhizal plants are often more resistant to diseases, such as those caused by microbial soil-borne pathogens. These associations have been found to assist in plant defense both above and belowground. Mycorrhizas have been found to excrete enzymes that are toxic to soil-borne organisms such as nematodes. More recent studies have shown that mycorrhizal associations result in a priming effect in plants that essentially acts as a primary immune response. When this association is formed, a defense response is activated similar to the response that occurs when the plant is under attack. As a result of this inoculation, defense responses are stronger in plants with mycorrhizal associations. Ecosystem services provided by mycorrhizal fungi may depend on the soil microbiome. Furthermore, mycorrhizal fungi have been significantly correlated with soil physical variables, but only with water level and not with aggregate stability, and they can also make plants more resistant to the effects of drought. Mycorrhizal fungi are also significant in alleviating salt stress, with beneficial effects on plant growth and productivity. Although salinity can negatively affect mycorrhizal fungi, many reports show improved growth and performance of mycorrhizal plants under salt stress conditions.

Resistance to insects
Plants connected by mycorrhizal fungi in mycorrhizal networks can use these underground connections to communicate warning signals. For example, when a host plant is attacked by an aphid, the plant signals surrounding connected plants of its condition. Both the host plant and those connected to it release volatile organic compounds that repel aphids and attract parasitoid wasps, predators of aphids. This assists the mycorrhizal fungi by conserving their food supply.

Colonization of barren soil
Plants grown in sterile soils and growth media often perform poorly without the addition of spores or hyphae of mycorrhizal fungi to colonise the plant roots and aid in the uptake of soil mineral nutrients. The absence of mycorrhizal fungi can also slow plant growth in early succession or on degraded landscapes. The introduction of alien mycorrhizal plants to nutrient-deficient ecosystems puts indigenous non-mycorrhizal plants at a competitive disadvantage. This aptitude for colonizing barren soil is characteristic of oligotrophs.

Resistance to toxicity
Fungi have a protective role for plants rooted in soils with high metal concentrations, such as acidic and contaminated soils.
Pine trees inoculated with Pisolithus tinctorius and planted at several contaminated sites displayed high tolerance to the prevailing contaminant, as well as high survivorship and growth. One study discovered the existence of Suillus luteus strains with varying tolerance to zinc. Another study discovered that zinc-tolerant strains of Suillus bovinus conferred resistance to plants of Pinus sylvestris. This was probably due to binding of the metal to the extramatrical mycelium of the fungus, without affecting the exchange of beneficial substances.

Occurrence of mycorrhizal associations
Mycorrhizas are present in 92% of plant families studied (80% of species), with arbuscular mycorrhizas being the ancestral and predominant form, and the most prevalent symbiotic association found in the plant kingdom. The structure of arbuscular mycorrhizas has been highly conserved since their first appearance in the fossil record, with both the development of ectomycorrhizas and the loss of mycorrhizas evolving convergently on multiple occasions. Associations of fungi with the roots of plants have been known since at least the mid-19th century. However, early observers simply recorded the fact without investigating the relationships between the two organisms. This symbiosis was studied and described by Franciszek Kamieński in 1879–1882.

Climate change
CO2 released by human activities is causing climate change and possible damage to mycorrhizae, although the direct effect of an increase in the gas should be to benefit plants and mycorrhizae. In Arctic regions, nitrogen and water are harder for plants to obtain, making mycorrhizae crucial to plant growth. Since mycorrhizae tend to do better in cooler temperatures, warming could be detrimental to them. Gases such as SO2, NOx, and O3 produced by human activity may harm mycorrhizae, causing "reduction in propagules, the colonization of roots, degradation in connections between trees, reduction in the mycorrhizal incidence in trees, and reduction in the enzyme activity of ectomycorrhizal roots." The Israeli company Groundwork BioAg has developed a method of using mycorrhizal fungi to increase agricultural crop yields while sequestering greenhouse gases and removing CO2 from the atmosphere.

Conservation and mapping
In 2021, the Society for the Protection of Underground Networks (SPUN) was launched. SPUN is a science-based initiative to map and protect the mycorrhizal networks regulating Earth's climate and ecosystems. Its stated goals are mapping, protecting, and harnessing mycorrhizal fungi.

See also
Effect of climate change on plant biodiversity
Endosymbiont
Epibiont, an organism that grows on another life form
Endophyte
Epiphyte
Epiphytic fungus
Mucigel
Mycorrhizal fungi and soil carbon storage
Mycorrhizal network
Rhizobia
Suzanne Simard

External links
International Mycorrhiza Society
Mohamed Hijri: A simple solution to the coming phosphorus crisis – a TED.com video recommending agricultural mycorrhiza use to conserve phosphorus reserves and address the 85% waste problem
Mycorrhizal Associations: The Web Resource – comprehensive illustrations and lists of mycorrhizal and nonmycorrhizal plants and fungi
Mycorrhizas – a successful symbiosis – biosafety research into genetically modified barley
MycorWiki – a portal concerned with the biology and ecology of ectomycorrhizal fungi and other forest fungi
Mycorrhiza
[ "Biology" ]
6,259
[ "Fungi", "Symbiosis", "Behavior", "Biological interactions", "Fungus ecology", "Soil biology" ]
59,366
https://en.wikipedia.org/wiki/Large%20intestine
The large intestine, also known as the large bowel, is the last part of the gastrointestinal tract and of the digestive system in tetrapods. Water is absorbed here and the remaining waste material is stored in the rectum as feces before being removed by defecation. The colon (progressing from the ascending colon to the transverse, the descending and finally the sigmoid colon) is the longest portion of the large intestine, and the terms "large intestine" and "colon" are often used interchangeably, but most sources define the large intestine as the combination of the cecum, colon, rectum, and anal canal. Some other sources exclude the anal canal. In humans, the large intestine begins in the right iliac region of the pelvis, just at or below the waist, where it is joined to the end of the small intestine at the cecum, via the ileocecal valve. It then continues as the colon ascending the abdomen, across the width of the abdominal cavity as the transverse colon, and then descending to the rectum and its endpoint at the anal canal. Overall, in humans, the large intestine is about 1.5 metres (5 feet) long, which is about one-fifth of the whole length of the human gastrointestinal tract.

Structure
The colon of the large intestine is the last part of the digestive system. It has a segmented appearance due to a series of saccules called haustra. It extracts water and salt from solid wastes before they are eliminated from the body, and is the site in which the fermentation of unabsorbed material by the gut microbiota occurs. Unlike the small intestine, the colon does not play a major role in the absorption of foods and nutrients. About 1.5 litres or 45 ounces of water arrives in the colon each day. The colon is the longest part of the large intestine, and its average length in the adult human is 65 inches or 166 cm (range of 80 to 313 cm) for males, and 61 inches or 155 cm (range of 80 to 214 cm) for females.

Sections
In mammals, the large intestine consists of the cecum (including the appendix), colon (the longest part), rectum, and anal canal. The four sections of the colon are the ascending colon, transverse colon, descending colon, and sigmoid colon. These sections turn at the colic flexures. The parts of the colon are either intraperitoneal or lie behind the peritoneum in the retroperitoneum. Retroperitoneal organs, in general, do not have a complete covering of peritoneum, so they are fixed in location. Intraperitoneal organs are completely surrounded by peritoneum and are therefore mobile. Of the colon, the ascending colon, descending colon and rectum are retroperitoneal, while the cecum, appendix, transverse colon and sigmoid colon are intraperitoneal. This is important as it affects which organs can be easily accessed during surgery, such as a laparotomy. In terms of diameter, the cecum is the widest, averaging slightly less than 9 cm in healthy individuals, and the transverse colon averages less than 6 cm in diameter. The descending and sigmoid colon are slightly smaller still. Diameters larger than certain thresholds for each colonic section can be diagnostic for megacolon.

Cecum and appendix
The cecum is the first section of the large intestine and is involved in digestion, while the appendix, which develops embryologically from it, is not involved in digestion and is considered to be part of the gut-associated lymphoid tissue.
The function of the appendix is uncertain, but some sources believe that it has a role in housing a sample of the gut microbiota, and is able to help to repopulate the colon with microbiota if depleted during the course of an immune reaction. The appendix has also been shown to have a high concentration of lymphatic cells. Ascending colon The ascending colon is the first of four main sections of the large intestine. It is connected to the small intestine by a section of bowel called the cecum. The ascending colon runs upwards through the abdominal cavity toward the transverse colon for approximately eight inches (20 cm). One of the main functions of the colon is to remove the water and other key nutrients from waste material and recycle it. As the waste material exits the small intestine through the ileocecal valve, it will move into the cecum and then to the ascending colon where this process of extraction starts. The waste material is pumped upwards toward the transverse colon by peristalsis. The ascending colon is sometimes attached to the appendix via Gerlach's valve. In ruminants, the ascending colon is known as the spiral colon. Taking into account all ages and sexes, colon cancer occurs here most often (41%). Transverse colon The transverse colon is the part of the colon from the hepatic flexure, also known as the right colic, (the turn of the colon by the liver) to the splenic flexure also known as the left colic, (the turn of the colon by the spleen). The transverse colon hangs off the stomach, attached to it by a large fold of peritoneum called the greater omentum. On the posterior side, the transverse colon is connected to the posterior abdominal wall by a mesentery known as the transverse mesocolon. The transverse colon is encased in peritoneum, and is therefore mobile (unlike the parts of the colon immediately before and after it). The proximal two-thirds of the transverse colon is perfused by the middle colic artery, a branch of the superior mesenteric artery (SMA), while the latter third is supplied by branches of the inferior mesenteric artery (IMA). The "watershed" area between these two blood supplies, which represents the embryologic division between the midgut and hindgut, is an area sensitive to ischemia. Descending colon The descending colon is the part of the colon from the splenic flexure to the beginning of the sigmoid colon. One function of the descending colon in the digestive system is to store feces that will be emptied into the rectum. It is retroperitoneal in two-thirds of humans. In the other third, it has a (usually short) mesentery. The arterial supply comes via the left colic artery. The descending colon is also called the distal gut, as it is further along the gastrointestinal tract than the proximal gut. Gut flora are very dense in this region. Sigmoid colon The sigmoid colon is the part of the large intestine after the descending colon and before the rectum. The name sigmoid means S-shaped (see sigmoid; cf. sigmoid sinus). The walls of the sigmoid colon are muscular and contract to increase the pressure inside the colon, causing the stool to move into the rectum. The sigmoid colon is supplied with blood from several branches (usually between 2 and 6) of the sigmoid arteries, a branch of the IMA. The IMA terminates as the superior rectal artery. Sigmoidoscopy is a common diagnostic technique used to examine the sigmoid colon. Rectum The rectum is the last section of the large intestine. It holds the formed feces awaiting elimination via defecation. 
It is about 12 cm long.

Appearance
The cecum – the first part of the large intestine
Taeniae coli – three bands of smooth muscle
Haustra – bulges caused by contraction of taeniae coli
Epiploic appendages – small fat accumulations on the viscera
The taenia coli run the length of the large intestine. Because the taenia coli are shorter than the large bowel itself, the colon becomes sacculated, forming the haustra of the colon, which are shelf-like intraluminal projections.

Blood supply
Arterial supply to the colon comes from branches of the superior mesenteric artery (SMA) and inferior mesenteric artery (IMA). Flow between these two systems communicates via the marginal artery of the colon that runs parallel to the colon for its entire length. Historically, a structure variously identified as the arc of Riolan or meandering mesenteric artery (of Moskowitz) was thought to connect the proximal SMA to the proximal IMA. This variably present structure would be important if either vessel were occluded. However, at least one review of the literature questions the existence of this vessel, with some experts calling for the abolition of these terms from future medical literature. Venous drainage usually mirrors colonic arterial supply, with the inferior mesenteric vein draining into the splenic vein, and the superior mesenteric vein joining the splenic vein to form the hepatic portal vein that then enters the liver. Middle rectal veins are an exception, delivering blood to the inferior vena cava and bypassing the liver.

Lymphatic drainage
Lymphatic drainage from the ascending colon and proximal two-thirds of the transverse colon is to the ileocolic lymph nodes and the superior mesenteric lymph nodes, which drain into the cisterna chyli. The lymph from the distal one-third of the transverse colon, the descending colon, the sigmoid colon, and the upper rectum drains into the inferior mesenteric and colic lymph nodes. The lower rectum and the anal canal above the pectinate line drain to the internal iliac nodes. The anal canal below the pectinate line drains into the superficial inguinal nodes. The pectinate line only roughly marks this transition.

Nerve supply
Sympathetic supply: superior and inferior mesenteric ganglia; parasympathetic supply: vagus and sacral plexus (S2–S4).

Development
The endoderm, mesoderm and ectoderm are germ layers that develop in a process called gastrulation. Gastrulation occurs early in human development. The gastrointestinal tract is derived from these layers.

Variation
One variation on the normal anatomy of the colon occurs when extra loops form, resulting in a colon that is up to five metres longer than normal. This condition, referred to as redundant colon, typically has no direct major health consequences, though rarely volvulus occurs, resulting in obstruction and requiring immediate medical attention. A significant indirect health consequence is that use of a standard adult colonoscope is difficult and in some cases impossible when a redundant colon is present, though specialized variants on the instrument (including the pediatric variant) are useful in overcoming this problem.

Microanatomy
Colonic crypts
The wall of the large intestine is lined with simple columnar epithelium with invaginations. The invaginations are called the intestinal glands or colonic crypts. The colon crypts are shaped like microscopic thick-walled test tubes with a central hole down the length of the tube (the crypt lumen).
Four tissue sections are shown here, two cut across the long axes of the crypts and two cut parallel to the long axes. In these images the cells have been stained by immunohistochemistry to show a brown-orange color if the cells produce a mitochondrial protein called cytochrome c oxidase subunit I (CCOI). The nuclei of the cells (located at the outer edges of the cells lining the walls of the crypts) are stained blue-gray with haematoxylin. As seen in panels C and D, crypts are about 75 to about 110 cells long. Baker et al. found that the average crypt circumference is 23 cells. Thus, by the images shown here, there are an average of about 1,725 to 2,530 cells per colonic crypt. Nooteboom et al., measuring the number of cells in a small number of crypts, reported a range of 1,500 to 4,900 cells per colonic crypt. Cells are produced at the crypt base and migrate upward along the crypt axis before being shed into the colonic lumen days later. There are 5 to 6 stem cells at the bases of the crypts. As estimated from the image in panel A, there are about 100 colonic crypts per square millimeter of the colonic epithelium. Since the average length of the human colon is 160.5 cm and the average inner circumference of the colon is 6.2 cm, the inner surface epithelial area of the human colon has an average area of about 995 cm2, which includes 9,950,000 (close to 10 million) crypts; a brief numerical cross-check of these figures is sketched below. In the four tissue sections shown here, many of the intestinal glands have cells with a mitochondrial DNA mutation in the CCOI gene and appear mostly white, with their main color being the blue-gray staining of the nuclei. As seen in panel B, a portion of the stem cells of three crypts appear to have a mutation in CCOI, so that 40% to 50% of the cells arising from those stem cells form a white segment in the cross-cut area. Overall, the percent of crypts deficient for CCOI is less than 1% before age 40, but then increases linearly with age. The proportion of colonic crypts deficient for CCOI reaches, on average, 18% in women and 23% in men by 80–84 years of age. Crypts of the colon can reproduce by fission, as seen in panel C, where a crypt is fissioning to form two crypts, and in panel B where at least one crypt appears to be fissioning. Most crypts deficient in CCOI are in clusters of crypts (clones of crypts) with two or more CCOI-deficient crypts adjacent to each other (see panel D).

Mucosa
About 150 of the many thousands of protein-coding genes expressed in the large intestine are specific to the mucous membrane in different regions; these include CEACAM7.

Function
The large intestine absorbs water and any remaining absorbable nutrients from the food before sending the indigestible matter to the rectum. The colon absorbs vitamins that are created by the colonic bacteria, such as thiamine, riboflavin, and vitamin K (especially important, as the daily ingestion of vitamin K is not normally enough to maintain adequate blood coagulation). It also compacts feces, and stores fecal matter in the rectum until it can be discharged via the anus in defecation. The large intestine also secretes K+ and Cl-. Chloride secretion increases in cystic fibrosis. Recycling of various nutrients takes place in the colon. Examples include fermentation of carbohydrates, short-chain fatty acids, and urea cycling. The appendix contains a small amount of mucosa-associated lymphoid tissue, which gives the appendix an undetermined role in immunity.
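The crypt figures quoted above follow from straightforward arithmetic. The short sketch below is illustrative only and is not part of the cited studies; the variable names are chosen here for clarity, and all input numbers come directly from the passage.

```python
# Cross-check of the colonic crypt estimates quoted in the text.
# All input numbers come from the passage above; the script itself is only illustrative.

crypt_length_cells = (75, 110)      # cells along the crypt axis (low and high estimates)
crypt_circumference_cells = 23      # average cells around the crypt circumference (Baker et al.)

# Cells per crypt = cells along the axis x cells around the circumference
cells_per_crypt = tuple(n * crypt_circumference_cells for n in crypt_length_cells)
print(cells_per_crypt)              # (1725, 2530) -> the ~1,725 to ~2,530 range

colon_length_cm = 160.5             # average length of the human colon
colon_inner_circumference_cm = 6.2  # average inner circumference
crypts_per_mm2 = 100                # estimated crypt density of the epithelium

# Epithelial area ~ length x inner circumference; 1 cm^2 = 100 mm^2
epithelial_area_cm2 = colon_length_cm * colon_inner_circumference_cm
total_crypts = round(epithelial_area_cm2) * crypts_per_mm2 * 100
print(round(epithelial_area_cm2), total_crypts)   # 995 cm^2 and 9,950,000 crypts
```

Both results match the figures stated in the text.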
However, the appendix is known to be important in fetal life as it contains endocrine cells that release biogenic amines and peptide hormones important for homeostasis during early growth and development. By the time the chyme has reached this tube, most nutrients and 90% of the water have been absorbed by the body. Indeed, as demonstrated by the commonality of ileostomy procedures, it is possible for many people to live without large portions of their large intestine, or even without it completely. At this point only some electrolytes like sodium, magnesium, and chloride are left as well as indigestible parts of ingested food (e.g., a large part of ingested amylose, starch which has been shielded from digestion heretofore, and dietary fiber, which is largely indigestible carbohydrate in either soluble or insoluble form). As the chyme moves through the large intestine, most of the remaining water is removed, while the chyme is mixed with mucus and bacteria (known as gut flora), and becomes feces. The ascending colon receives fecal material as a liquid. The muscles of the colon then move the watery waste material forward and slowly absorb all the excess water, causing the stools to gradually solidify as they move along into the descending colon. The bacteria break down some of the fiber for their own nourishment and create acetate, propionate, and butyrate as waste products, which in turn are used by the cell lining of the colon for nourishment. No protein is made available. In humans, perhaps 10% of the undigested carbohydrate thus becomes available, though this may vary with diet; in other animals, including other apes and primates, who have proportionally larger colons, more is made available, thus permitting a higher portion of plant material in the diet. The large intestine produces no digestive enzymes — chemical digestion is completed in the small intestine before the chyme reaches the large intestine. The pH in the colon varies between 5.5 and 7 (slightly acidic to neutral). Standing gradient osmosis Water absorption at the colon typically proceeds against a transmucosal osmotic pressure gradient. The standing gradient osmosis is the reabsorption of water against the osmotic gradient in the intestines. Cells occupying the intestinal lining pump sodium ions into the intercellular space, raising the osmolarity of the intercellular fluid. This hypertonic fluid creates an osmotic pressure that drives water into the lateral intercellular spaces by osmosis via tight junctions and adjacent cells, which then in turn moves across the basement membrane and into the capillaries, while more sodium ions are pumped again into the intercellular fluid. Although water travels down an osmotic gradient in each individual step, overall, water usually travels against the osmotic gradient due to the pumping of sodium ions into the intercellular fluid. This allows the large intestine to absorb water despite the blood in capillaries being hypotonic compared to the fluid within the intestinal lumen. Gut flora The large intestine houses over 700 species of bacteria that perform a variety of functions, as well as fungi, protozoa, and archaea. Species diversity varies by geography and diet. The microbes in a human distal gut often number in the vicinity of 100 trillion, and can weigh around 200 grams (0.44 pounds). This mass of mostly symbiotic microbes has recently been called the latest human organ to be "discovered" or in other words, the "forgotten organ". 
The large intestine absorbs some of the products formed by the bacteria inhabiting this region. Undigested polysaccharides (fiber) are metabolized to short-chain fatty acids by bacteria in the large intestine and absorbed by passive diffusion. The bicarbonate that the large intestine secretes helps to neutralize the increased acidity resulting from the formation of these fatty acids. These bacteria also produce large amounts of vitamins, especially vitamin K and biotin (a B vitamin), for absorption into the blood. Although this source of vitamins, in general, provides only a small part of the daily requirement, it makes a significant contribution when dietary vitamin intake is low. An individual who depends on absorption of vitamins formed by bacteria in the large intestine may become vitamin-deficient if treated with antibiotics that inhibit the vitamin producing species of bacteria as well as the intended disease-causing bacteria. Other bacterial products include gas (flatus), which is a mixture of nitrogen and carbon dioxide, with small amounts of the gases hydrogen, methane, and hydrogen sulfide. Bacterial fermentation of undigested polysaccharides produces these. Some of the fecal odor is due to indoles, metabolized from the amino acid tryptophan. The normal flora is also essential in the development of certain tissues, including the cecum and lymphatics. They are also involved in the production of cross-reactive antibodies. These are antibodies produced by the immune system against the normal flora, that are also effective against related pathogens, thereby preventing infection or invasion. The two most prevalent phyla of the colon are Bacillota and Bacteroidota. The ratio between the two seems to vary widely as reported by the Human Microbiome Project. Bacteroides are implicated in the initiation of colitis and colon cancer. Bifidobacteria are also abundant, and are often described as 'friendly bacteria'. A mucus layer protects the large intestine from attacks from colonic commensal bacteria. Clinical significance Disease Following are the most common diseases or disorders of the colon: Colonoscopy Colonoscopy is the endoscopic examination of the large intestine and the distal part of the small bowel with a CCD camera or a fiber optic camera on a flexible tube passed through the anus. It can provide a visual diagnosis (e.g. ulceration, polyps) and grants the opportunity for biopsy or removal of suspected colorectal cancer lesions. Colonoscopy can remove polyps as small as one millimetre or less. Once polyps are removed, they can be studied with the aid of a microscope to determine if they are precancerous or not. It takes 15 years or fewer for a polyp to turn cancerous. Colonoscopy is similar to sigmoidoscopy—the difference being related to which parts of the colon each can examine. A colonoscopy allows an examination of the entire colon (1200–1500 mm in length). A sigmoidoscopy allows an examination of the distal portion (about 600 mm) of the colon, which may be sufficient because benefits to cancer survival of colonoscopy have been limited to the detection of lesions in the distal portion of the colon. A sigmoidoscopy is often used as a screening procedure for a full colonoscopy, often done in conjunction with a stool-based test such as a fecal occult blood test (FOBT), fecal immunochemical test (FIT), or multi-target stool DNA test (Cologuard) or blood-based test, SEPT9 DNA methylation test (Epi proColon). About 5% of these screened patients are referred to colonoscopy. 
Virtual colonoscopy, which uses 2D and 3D imagery reconstructed from computed tomography (CT) scans or from nuclear magnetic resonance (MR) scans, is also possible as a totally non-invasive medical test, although it is not standard and is still under investigation regarding its diagnostic abilities. Furthermore, virtual colonoscopy does not allow for therapeutic maneuvers such as polyp or tumour removal or biopsy, nor for visualization of lesions smaller than 5 millimeters. If a growth or polyp is detected using CT colonography, a standard colonoscopy would still need to be performed. Additionally, surgeons have lately been using the term pouchoscopy to refer to a colonoscopy of the ileo-anal pouch.

Other animals
The large intestine is truly distinct only in tetrapods, in which it is almost always separated from the small intestine by an ileocaecal valve. In most vertebrates, however, it is a relatively short structure running directly to the anus, although noticeably wider than the small intestine. Although the caecum is present in most amniotes, only in mammals does the remainder of the large intestine develop into a true colon. In some small mammals, the colon is straight, as it is in other tetrapods, but in the majority of mammalian species it is divided into ascending and descending portions; a distinct transverse colon is typically present only in primates. However, the taeniae coli and accompanying haustra are not found in either carnivorans or ruminants. The rectum of mammals (other than monotremes) is derived from the cloaca of other vertebrates and is, therefore, not truly homologous with the "rectum" found in these species. In some fish, there is no true large intestine, but simply a short rectum connecting the end of the digestive part of the gut to the cloaca. In sharks, this includes a rectal gland that secretes salt to help the animal maintain osmotic balance with the seawater. The gland somewhat resembles a caecum in structure but is not a homologous structure.

See also
Colectomy
Colonic ulcer
Large intestine (Chinese medicine)
Large intestine
[ "Biology" ]
5,228
[ "Digestive system", "Organ systems" ]
59,385
https://en.wikipedia.org/wiki/Amanita%20muscaria
Amanita muscaria, commonly known as the fly agaric or fly amanita, is a basidiomycete of the genus Amanita. It is a large white-gilled, white-spotted, and usually red mushroom. Despite its easily distinguishable features, A.muscaria is a fungus with several known variations, or subspecies. These subspecies are slightly different, some having yellow or white caps, but are all usually called fly agarics, most often recognizable by their notable white spots. Recent DNA fungi research, however, has shown that some mushrooms called "fly agaric" are in fact unique species, such as A.persicina (the peach-colored fly agaric). Native throughout the temperate and boreal regions of the Northern Hemisphere, A.muscaria has been unintentionally introduced to many countries in the Southern Hemisphere, generally as a symbiont with pine and birch plantations, and is now a true cosmopolitan species. It associates with various deciduous and coniferous trees. Although poisonous, death due to poisoning from A.muscaria ingestion is quite rare. Parboiling twice with water weakens its toxicity and breaks down the mushroom's psychoactive substances; it is eaten in parts of Europe, Asia, and North America. All A.muscaria varieties, but in particular A.muscaria var. muscaria, are noted for their hallucinogenic properties, with the main psychoactive constituents being muscimol and its neurotoxic precursor ibotenic acid. A local variety of the mushroom was used as an intoxicant and entheogen by the indigenous peoples of Siberia. Arguably the most iconic toadstool species, the fly agaric is one of the most recognizable fungi in the world, and is widely encountered in popular culture, including in video games—for example, the frequent use of a recognizable A.muscaria in the Mario franchise (e.g. its Super Mushroom power-up)—and television—for example, the houses in The Smurfs franchise. There have been cases of children admitted to hospitals after consuming this poisonous mushroom; the children may have been attracted to it because of its pop-culture associations. Taxonomy The name of the mushroom in many European languages is thought to derive from its use as an insecticide when sprinkled in milk. This practice has been recorded from Germanic- and Slavic-speaking parts of Europe, as well as the Vosges region and pockets elsewhere in France, and Romania. Albertus Magnus was the first to record it in his work De vegetabilibus some time before 1256, commenting "vocatur fungus muscarum, eo quod in lacte pulverizatus interficit muscas" ("it is called the fly mushroom because it is powdered in milk to kill flies"). The 16th-century Flemish botanist Carolus Clusius traced the practice of sprinkling it into milk to Frankfurt in Germany, while Carl Linnaeus, the "father of taxonomy", reported it from Småland in southern Sweden, where he had lived as a child. He described it in volume two of his Species Plantarum in 1753, giving it the name Agaricus muscarius, the specific epithet deriving from Latin musca meaning "fly". It gained its current name in 1783, when placed in the genus Amanita by Jean-Baptiste Lamarck, a name sanctioned in 1821 by the "father of mycology", Swedish naturalist Elias Magnus Fries. The starting date for all the mycota had been set by general agreement as January 1, 1821, the date of Fries's work, and so the full name was then Amanita muscaria (L.:Fr.) Hook. 
The 1987 edition of the International Code of Botanical Nomenclature changed the rules on the starting date and primary work for names of fungi, and names can now be considered valid as far back as May 1, 1753, the date of publication of Linnaeus's work. Hence, Linnaeus and Lamarck are now taken as the namers of Amanita muscaria (L.) Lam.. The English mycologist John Ramsbottom reported that Amanita muscaria was used for getting rid of bugs in England and Sweden, and bug agaric was an old alternative name for the species. French mycologist Pierre Bulliard reported having tried without success to replicate its fly-killing properties in his work (1784), and proposed a new binomial name Agaricus pseudo-aurantiacus because of this. One compound isolated from the fungus is 1,3-diolein (1,3-di(cis-9-octadecenoyl)glycerol), which attracts insects. It has been hypothesised that the flies intentionally seek out the fly agaric for its intoxicating properties. An alternative derivation proposes that the term fly- refers not to insects as such but rather the delirium resulting from consumption of the fungus. This is based on the medieval belief that flies could enter a person's head and cause mental illness. Several regional names appear to be linked with this connotation, meaning the "mad" or "fool's" version of the highly regarded edible mushroom Amanita caesarea. Hence there is "mad oriol" in Catalan, mujolo folo from Toulouse, from the Aveyron department in Southern France, from Trentino in Italy. A local dialect name in Fribourg in Switzerland is tsapi de diablhou, which translates as "Devil's hat". Classification Amanita muscaria is the type species of the genus. By extension, it is also the type species of Amanita subgenus Amanita, as well as section Amanita within this subgenus. Amanita subgenus Amanita includes all Amanita with inamyloid spores. Amanita section Amanita includes the species with patchy universal veil remnants, including a volva that is reduced to a series of concentric rings, and the veil remnants on the cap to a series of patches or warts. Most species in this group also have a bulbous base. Amanita section Amanita consists of A. muscaria and its close relatives, including A. pantherina (the panther cap), A. gemmata, A. farinosa, and A. xanthocephala. Modern fungal taxonomists have classified Amanita muscaria and its allies this way based on gross morphology and spore inamyloidy. Two recent molecular phylogenetic studies have confirmed this classification as natural. Description A large, conspicuous mushroom, Amanita muscaria is generally common and numerous where it grows, and is often found in groups with basidiocarps in all stages of development. Fly agaric fruiting bodies emerge from the soil looking like white eggs. After emerging from the ground, the cap is covered with numerous small white to yellow pyramid-shaped warts. These are remnants of the universal veil, a membrane that encloses the entire mushroom when it is still very young. Dissecting the mushroom at this stage reveals a characteristic yellowish layer of skin under the veil, which helps identification. As the fungus grows, the red colour appears through the broken veil and the warts become less prominent; they do not change in size, but are reduced relative to the expanding skin area. The cap changes from globose to hemispherical, and finally to plate-like and flat in mature specimens. Fully grown, the bright red cap is usually around in diameter, although larger specimens have been found. 
The red colour may fade after rain and in older mushrooms. The free gills are white, as is the spore print. The oval spores measure 9–13 by 6.5–9 μm; they do not turn blue with the application of iodine. The stipe is white, high by wide, and has the slightly brittle, fibrous texture typical of many large mushrooms. At the base is a bulb that bears universal veil remnants in the form of two to four distinct rings or ruffs. Between the basal universal veil remnants and gills are remnants of the partial veil (which covers the gills during development) in the form of a white ring. It can be quite wide and flaccid with age. There is generally no associated smell other than a mild earthiness. Although very distinctive in appearance, the fly agaric has been mistaken for other yellow to red mushroom species in the Americas, such as Armillaria cf. mellea and the edible A. basii—a Mexican species similar to A. caesarea of Europe. Poison control centres in the U.S. and Canada have become aware that (Spanish for 'yellow') is a common name for the A. caesarea-like species in Mexico. A. caesarea is distinguished by its entirely orange to red cap, which lacks the numerous white warty spots of the fly agaric (though these sometimes wash away during heavy rain). Furthermore, the stem, gills and ring of A. caesarea are bright yellow, not white. The volva is a distinct white bag, not broken into scales. In Australia, the introduced fly agaric may be confused with the native vermilion grisette (Amanita xanthocephala), which grows in association with eucalypts. The latter species generally lacks the white warts of A. muscaria and bears no ring. Additionally, immature button forms resemble puffballs. Controversy Amanita muscaria varies considerably in its morphology, and many authorities recognize several subspecies or varieties within the species. In The Agaricales in Modern Taxonomy, German mycologist Rolf Singer listed three subspecies, though without description: A. muscaria ssp. muscaria, A. muscaria ssp. americana, and A. muscaria ssp. flavivolvata. However, a 2006 molecular phylogenetic study of different regional populations of A. muscaria by mycologist József Geml and colleagues found three distinct clades within this species representing, roughly, Eurasian, Eurasian "subalpine", and North American populations. Specimens belonging to all three clades have been found in Alaska; this has led to the hypothesis that this was the centre of diversification for this species. The study also looked at four named varieties of the species: var. alba, var. flavivolvata, var. formosa (including var. guessowii), and var. regalis from both areas. All four varieties were found within both the Eurasian and North American clades, evidence that these morphological forms are polymorphisms rather than distinct subspecies or varieties. Further molecular study by Geml and colleagues published in 2008 show that these three genetic groups, plus a fourth associated with oak–hickory–pine forest in the southeastern United States and two more on Santa Cruz Island in California, are delineated from each other enough genetically to be considered separate species. Thus A. muscaria as it stands currently is, evidently, a species complex. The complex also includes at least three other closely related taxa that are currently regarded as species: A. breckonii is a buff-capped mushroom associated with conifers from the Pacific Northwest, and the brown-capped A. gioiosa and A. 
heterochroma from the Mediterranean Basin and from Sardinia respectively. Both of these last two are found with Eucalyptus and Cistus trees, and it is unclear whether they are native or introduced from Australia. Amanitaceae.org lists four varieties , but says that they will be segregated into their own taxa "in the near future". They are: Distribution and habitat A. muscaria is a cosmopolitan mushroom, native to conifer and deciduous woodlands throughout the temperate and boreal regions of the Northern Hemisphere, including higher elevations of warmer latitudes in regions such as Hindu Kush, the Mediterranean and also Central America. A recent molecular study proposes that it had an ancestral origin in the Siberian–Beringian region in the Tertiary period, before radiating outwards across Asia, Europe and North America. The season for fruiting varies in different climates: fruiting occurs in summer and autumn across most of North America, but later in autumn and early winter on the Pacific coast. This species is often found in similar locations to Boletus edulis, and may appear in fairy rings. Conveyed with pine seedlings, it has been widely transported into the southern hemisphere, including Australia, New Zealand, South Africa and South America, where it can be found in the Brazilian states of Paraná, São Paulo, Minas Gerais, Rio Grande do Sul. Ectomycorrhizal, A. muscaria forms symbiotic relationships with many trees, including pine, oak, spruce, fir, birch, and cedar. Commonly seen under introduced trees, A. muscaria is the fungal equivalent of a weed in New Zealand, Tasmania and Victoria, forming new associations with southern beech (Nothofagus). The species is also invading a rainforest in Australia, where it may be displacing the native species. It appears to be spreading northwards, with recent reports placing it near Port Macquarie on the New South Wales north coast. It was recorded under silver birch (Betula pendula) in Manjimup, Western Australia in 2010. Although it has apparently not spread to eucalypts in Australia, it has been recorded associating with them in Portugal. Commonly found throughout the great Southern region of western Australia, it is regularly found growing on Pinus radiata. Toxicity A. muscaria poisoning has occurred in young children and in people who ingested the mushrooms for a hallucinogenic experience, or who confused it with an edible species. A. muscaria contains several biologically active agents, at least one of which, muscimol, is known to be psychoactive. Ibotenic acid, a neurotoxin, serves as a prodrug to muscimol, with a small amount likely converting to muscimol after ingestion. An active dose in adults is approximately 6 mg muscimol or 30 to 60 mg ibotenic acid; this is typically about the amount found in one cap of Amanita muscaria. The amount and ratio of chemical compounds per mushroom varies widely from region to region and season to season, which can further confuse the issue. Spring and summer mushrooms have been reported to contain up to 10 times more ibotenic acid and muscimol than autumn fruitings. Deaths from A. muscaria have been reported in historical journal articles and newspaper reports, but with modern medical treatment, fatal poisoning from ingesting this mushroom is extremely rare. Many books list A. muscaria as deadly, but according to David Arora, this is an error that implies the mushroom is far more toxic than it is. 
Furthermore, The North American Mycological Association has stated that there were "no reliably documented cases of death from toxins in these mushrooms in the past 100 years". The active constituents of this species are water-soluble, and boiling and then discarding the cooking water at least partly detoxifies A. muscaria. Drying may increase potency, as the process facilitates the conversion of ibotenic acid to the more potent muscimol. According to some sources, once detoxified, the mushroom becomes edible. Patrick Harding describes the Sami custom of processing the fly agaric through reindeer. Pharmacology Muscarine, discovered in 1869, was long thought to be the active hallucinogenic agent in A. muscaria. Muscarine binds with muscarinic acetylcholine receptors leading to the excitation of neurons bearing these receptors. The levels of muscarine in Amanita muscaria are minute when compared with other poisonous fungi such as Inosperma erubescens, the small white Clitocybe species C. dealbata and C. rivulosa. The level of muscarine in A. muscaria is too low to play a role in the symptoms of poisoning. The major toxins involved in A. muscaria poisoning are muscimol (3-hydroxy-5-aminomethyl-1-isoxazole, an unsaturated cyclic hydroxamic acid) and the related amino acid ibotenic acid. Muscimol is the product of the decarboxylation (usually by drying) of ibotenic acid. Muscimol and ibotenic acid were discovered in the mid-20th century. Researchers in England, Japan, and Switzerland showed that the effects produced were due mainly to ibotenic acid and muscimol, not muscarine. These toxins are not distributed uniformly in the mushroom. Most are detected in the cap of the fruit, a moderate amount in the base, with the smallest amount in the stalk. Quite rapidly, between 20 and 90 minutes after ingestion, a substantial fraction of ibotenic acid is excreted unmetabolised in the urine of the consumer. Almost no muscimol is excreted when pure ibotenic acid is eaten, but muscimol is detectable in the urine after eating A. muscaria, which contains both ibotenic acid and muscimol. Ibotenic acid and muscimol are structurally related to each other and to two major neurotransmitters of the central nervous system: glutamic acid and GABA respectively. Ibotenic acid and muscimol act like these neurotransmitters, muscimol being a potent GABAA agonist, while ibotenic acid is an agonist of NMDA glutamate receptors and certain metabotropic glutamate receptors which are involved in the control of neuronal activity. It is these interactions which are thought to cause the psychoactive effects found in intoxication. Muscazone is another compound that has more recently been isolated from European specimens of the fly agaric. It is a product of the breakdown of ibotenic acid by ultraviolet radiation. Muscazone is of minor pharmacological activity compared with the other agents. Amanita muscaria and related species are known as effective bioaccumulators of vanadium; some species concentrate vanadium to levels of up to 400 times those typically found in plants. Vanadium is present in fruit-bodies as an organometallic compound called amavadine. The biological importance of the accumulation process is unknown. Symptoms Fly agarics are best known for the unpredictability of their effects. 
Depending on habitat and the amount ingested per body weight, effects can range from mild nausea and twitching to drowsiness, cholinergic crisis-like effects (low blood pressure, sweating and salivation), auditory and visual distortions, mood changes, euphoria, relaxation, ataxia, and loss of equilibrium (like with tetanus.) In cases of serious poisoning the mushroom causes delirium, somewhat similar in effect to anticholinergic poisoning (such as that caused by Datura stramonium), characterised by bouts of marked agitation with confusion, hallucinations, and irritability followed by periods of central nervous system depression. Seizures and coma may also occur in severe poisonings. Symptoms typically appear after around 30 to 90 minutes and peak within three hours, but certain effects can last for several days. In the majority of cases recovery is complete within 12 to 24 hours. The effect is highly variable between individuals, with similar doses potentially causing quite different reactions. Some people suffering intoxication have exhibited headaches up to ten hours afterwards. Retrograde amnesia and somnolence can result following recovery. Treatment Medical attention should be sought in cases of suspected poisoning. If the delay between ingestion and treatment is less than four hours, activated charcoal is given. Gastric lavage can be considered if the patient presents within one hour of ingestion. Inducing vomiting with syrup of ipecac is no longer recommended in any poisoning situation. There is no antidote, and supportive care is the mainstay of further treatment for intoxication. Though sometimes referred to as a deliriant and while muscarine was first isolated from A. muscaria and as such is its namesake, muscimol does not have action, either as an agonist or antagonist, at the muscarinic acetylcholine receptor site, and therefore atropine or physostigmine as an antidote is not recommended. If a patient is delirious or agitated, this can usually be treated by reassurance and, if necessary, physical restraints. A benzodiazepine such as diazepam or lorazepam can be used to control combativeness, agitation, muscular overactivity, and seizures. Only small doses should be used, as they may worsen the respiratory depressant effects of muscimol. Recurrent vomiting is rare, but if present may lead to fluid and electrolyte imbalances; intravenous rehydration or electrolyte replacement may be required. Serious cases may develop loss of consciousness or coma, and may need intubation and artificial ventilation. Hemodialysis can remove the toxins, although this intervention is generally considered unnecessary. With modern medical treatment the prognosis is typically good following supportive treatment. Uses Psychoactive The wide range of psychoactive effects have been variously described as depressant, sedative-hypnotic, psychedelic, dissociative, or deliriant; paradoxical effects such as stimulation may occur however. Perceptual phenomena such as synesthesia, macropsia, and micropsia may occur; the latter two effects may occur either simultaneously or alternatingly, as part of Alice in Wonderland syndrome, collectively known as dysmetropsia, along with related distortions pelopsia and teleopsia. Some users report lucid dreaming under the influence of its hypnotic effects. Unlike Psilocybe cubensis, A. muscaria cannot be commercially cultivated, due to its mycorrhizal relationship with the roots of pine trees. 
However, following the outlawing of psilocybin mushrooms in the United Kingdom in 2006, the sale of the still legal A. muscaria began increasing. Marija Gimbutas reported to R. Gordon Wasson that in remote areas of Lithuania, A. muscaria has been consumed at wedding feasts, in which mushrooms were mixed with vodka. She also reported that the Lithuanians used to export A. muscaria to the Sami in the Far North for use in shamanic rituals. The Lithuanian festivities are the only report that Wasson received of ingestion of fly agaric for religious use in Eastern Europe. Siberia A. muscaria was widely used as an entheogen by many of the indigenous peoples of Siberia. Its use was known among almost all of the Uralic-speaking peoples of western Siberia and the Paleosiberian-speaking peoples of the Russian Far East. There are only isolated reports of A. muscaria use among the Tungusic and Turkic peoples of central Siberia, and it is believed that on the whole entheogenic use of A. muscaria was not practised by these peoples. In western Siberia, the use of A. muscaria was restricted to shamans, who used it as an alternative method of achieving a trance state. (Normally, Siberian shamans achieve trance by prolonged drumming and dancing.) In eastern Siberia, A. muscaria was used by shamans and laypeople alike, and was used recreationally as well as religiously. There, the shaman would take the mushrooms, and others would drink his urine. This urine, still containing psychoactive elements, may be more potent than the A. muscaria mushrooms themselves, with fewer negative effects such as sweating and twitching, suggesting that the initial user may act as a screening filter for other components in the mushroom. The Koryak of eastern Siberia have a story about the fly agaric (wapaq) which enabled Big Raven to carry a whale to its home. In the story, the deity Vahiyinin ("Existence") spat onto earth, his spittle became the wapaq, and his saliva became the warts. After experiencing the power of the wapaq, Raven was so exhilarated that he told it to grow forever on earth so his children, the people, could learn from it. Among the Koryaks, one report said that the poor would consume the urine of the wealthy, who could afford to buy the mushrooms. It was also reported that local reindeer would often follow a person intoxicated by the mushroom; if that person urinated in the snow, the reindeer would become similarly intoxicated, and the Koryak would exploit the reindeer's drunken state to rope and hunt them more easily. Recent rise in popularity As a result of a lack of regulation, the use of Amanita muscaria as a legal alternative to controlled hallucinogens has grown rapidly in recent years. In 2024, Google searches for Amanita muscaria rose nearly 200% from the previous year, a trend that an article published in the American Journal of Preventive Medicine correlated with the sudden commercialization of Amanita muscaria products on the internet. While Amanita mushrooms are unscheduled in the United States, the sale of Amanita products exists in a legal gray area, as they are listed as a poison by the FDA and are not approved for use in dietary supplements; some have drawn comparisons to the controversial legal status of hemp-derived cannabinoids. 
A recent outbreak of poisonings and at least one death associated with products containing Amanita muscaria extracts has sparked debates regarding the regulatory status of Amanita mushrooms and their psychoactive constituents. These products often use misleading advertising, such as erroneous comparisons to Psilocybin mushrooms or simply not disclosing the inclusion of Amanita mushrooms on the packaging. Other reports and theories The Finnish historian T. I. Itkonen mentions that A. muscaria was once used among the Sámi peoples. Sorcerers in Inari would consume fly agarics with seven spots. In 1979, Said Gholam Mochtar and Hartmut Geerken published an article in which they claimed to have discovered a tradition of medicinal and recreational use of this mushroom among a Parachi-speaking group in Afghanistan. There are also unconfirmed reports of religious use of A. muscaria among two Subarctic Native American tribes. Ojibwa ethnobotanist Keewaydinoquay Peschel reported its use among her people, where it was known as (an abbreviation of the name (= "red-top mushroom"). This information was enthusiastically received by Wasson, although evidence from other sources was lacking. There is also one account of a Euro-American who claims to have been initiated into traditional Tlicho use of Amanita muscaria. Mycophilosopher Martijn Benders has proposed a novel evolutionary theory involving Amanita muscaria. In his book Amanita Muscaria – the Book of the Empress, Benders argues that a precursor of ibotenic acid, a compound found in the mushroom, was present in ancient seaweed and played a significant role in the evolution of life. According to this hypothesis, the compound influenced the twitching movements of early aquatic organisms, leading to the development of behaviors such as jumping onto land—a crucial step in the evolution of terrestrial species. The flying reindeer of Santa Claus, who is called Joulupukki in Finland, could symbolize the use of A. muscaria by Sámi shamans. However, Sámi scholars and the Sámi peoples themselves refute any connection between Santa Claus and Sámi history or culture."The story of Santa emerging from a Sámi shamanic tradition has a critical number of flaws," asserts Tim Frandy, assistant professor of Nordic Studies at the University of British Columbia and a member of the Sámi descendent community in North America. "The theory has been widely criticized by Sámi people as a stereotypical and problematic romanticized misreading of actual Sámi culture." Vikings The notion that Vikings used A. muscaria to produce their berserker rages was first suggested by the Swedish professor Samuel Ödmann in 1784. Ödmann based his theories on reports about the use of fly agaric among Siberian shamans. The notion has become widespread since the 19th century, but no contemporary sources mention this use or anything similar in their description of berserkers. Muscimol is generally a mild relaxant, but it can create a range of different reactions within a group of people. It is possible that it could make a person angry, or cause them to be "very jolly or sad, jump about, dance, sing or give way to great fright". Comparative analysis of symptoms have, however, since shown Hyoscyamus niger to be a better fit to the state that characterises the berserker rage. Soma In 1968, R. Gordon Wasson proposed that A. muscaria was the soma talked about in the Rigveda of India, a claim which received widespread publicity and popular support at the time. 
He noted that descriptions of Soma omitted any description of roots, stems or seeds, which suggested a mushroom, and used the adjective hári "dazzling" or "flaming" which the author interprets as meaning red. One line described men urinating Soma; this recalled the practice of recycling urine in Siberia. Soma is mentioned as coming "from the mountains", which Wasson interpreted as the mushroom having been brought in with the Aryan migrants from the north. Indian scholars Santosh Kumar Dash and Sachinanda Padhy pointed out that both eating of mushrooms and drinking of urine were proscribed, using as a source the Manusmṛti. In 1971, Vedic scholar John Brough from Cambridge University rejected Wasson's theory and noted that the language was too vague to determine a description of Soma. In his 1976 survey, Hallucinogens and Culture, anthropologist Peter T. Furst evaluated the evidence for and against the identification of the fly agaric mushroom as the Vedic Soma, concluding cautiously in its favour. Kevin Feeney and Trent Austin compared the references in the Vedas with the filtering mechanisms in the preparation of Amanita muscaria and published findings supporting the proposal that fly-agaric mushrooms could be a likely candidate for the sacrament. Other proposed candidates include Psilocybe cubensis, Peganum harmala, and Ephedra. Christianity Philologist, archaeologist, and Dead Sea Scrolls scholar John Marco Allegro postulated that early Christian theology was derived from a fertility cult revolving around the entheogenic consumption of A. muscaria in his 1970 book The Sacred Mushroom and the Cross. This theory has found little support by scholars outside the field of ethnomycology. The book was widely criticized by academics and theologians, including Sir Godfrey Driver, emeritus Professor of Semitic Philology at Oxford University and Henry Chadwick, the Dean of Christ Church, Oxford. Christian author John C. King wrote a detailed rebuttal of Allegro's theory in the 1970 book A Christian View of the Mushroom Myth; he notes that neither fly agarics nor their host trees are found in the Middle East, even though cedars and pines are found there, and highlights the tenuous nature of the links between biblical and Sumerian names coined by Allegro. He concludes that if the theory were true, the use of the mushroom must have been "the best kept secret in the world" as it was so well concealed for two thousand years. Fly trap Amanita muscaria is traditionally used for catching flies possibly due to its content of ibotenic acid and muscimol, which lead to its common name "fly agaric". Recently, an analysis of nine different methods for preparing A. muscaria for catching flies in Slovenia have shown that the release of ibotenic acid and muscimol did not depend on the solvent (milk or water) and that thermal and mechanical processing led to faster extraction of ibotenic acid and muscimol. Culinary The toxins in A. muscaria are water-soluble: parboiling A. muscaria fruit bodies can detoxify them and render them edible, although consumption of the mushroom as a food has never been widespread. The consumption of detoxified A. muscaria has been practiced in some parts of Europe (notably by Russian settlers in Siberia) since at least the 19th century, and likely earlier. The German physician and naturalist Georg Heinrich von Langsdorff wrote the earliest published account on how to detoxify this mushroom in 1823. 
In the late 19th century, the French physician Félix Archimède Pouchet was a populariser and advocate of A. muscaria consumption, comparing it to manioc, an important food source in tropical South America that must also be detoxified before consumption. Use of this mushroom as a food source also seems to have existed in North America. A classic description of this use of A. muscaria by an African-American mushroom seller in Washington, D.C., in the late 19th century is described by American botanist Frederick Vernon Coville. In this case, the mushroom, after parboiling, and soaking in vinegar, is made into a mushroom sauce for steak. It is also consumed as a food in parts of Japan. The most well-known current use as an edible mushroom is in Nagano Prefecture, Japan. There, it is primarily salted and pickled. A 2008 paper by food historian William Rubel and mycologist David Arora gives a history of consumption of A. muscaria as a food and describes detoxification methods. They advocate that Amanita muscaria be described in field guides as an edible mushroom, though accompanied by a description on how to detoxify it. The authors state that the widespread descriptions in field guides of this mushroom as poisonous is a reflection of cultural bias, as several other popular edible species, notably morels, are also toxic unless properly cooked. In culture The red-and-white spotted toadstool is a common image in many aspects of popular culture. Garden ornaments and children's picture books depicting gnomes and fairies, such as the Smurfs, often show fly agarics used as seats, or homes. Fly agarics have been featured in paintings since the Renaissance, albeit in a subtle manner. For instance, in Hieronymus Bosch's painting, The Garden of Earthly Delights, the mushroom can be seen on the left-hand panel of the work. In the Victorian era they became more visible, becoming the main topic of some fairy paintings. Two of the most famous uses of the mushroom are in the Mario franchise (specifically two of the Super Mushroom power-up items and the platforms in several stages which are based on a fly agaric), and the dancing mushroom sequence in the 1940 Disney film Fantasia. An account of the journeys of Philip von Strahlenberg to Siberia and his descriptions of the use of the mukhomor there was published in English in 1736. The drinking of urine of those who had consumed the mushroom was commented on by Anglo-Irish writer Oliver Goldsmith in his widely read 1762 novel, Citizen of the World. The mushroom had been identified as the fly agaric by this time. Other authors recorded the distortions of the size of perceived objects while intoxicated by the fungus, including naturalist Mordecai Cubitt Cooke in his books The Seven Sisters of Sleep and A Plain and Easy Account of British Fungi. This observation is thought to have formed the basis of the effects of eating the mushroom in the 1865 popular story Alice's Adventures in Wonderland. A hallucinogenic "scarlet toadstool" from Lappland is featured as a plot element in Charles Kingsley's 1866 novel Hereward the Wake based on the medieval figure of the same name. Thomas Pynchon's 1973 novel Gravity's Rainbow describes the fungus as a "relative of the poisonous Destroying angel" and presents a detailed description of a character preparing a cookie bake mixture from harvested Amanita muscaria. Fly agaric shamanism—in the context of a surviving Dionysian cult in the Peak District—is also explored in the 2003 novel Thursbitch by Alan Garner. 
See also List of Amanita species Legal status of psychoactive Amanita mushrooms References Works cited External links Webpages on Amanita species by Tulloss and Yang Zhuliang Amanita on erowid.org Amanita muscaria, Amanita pantherina and others (Group PIM G026) by IPCS INCHEM muscaria Entheogens Fungi of Asia Fungi of Europe Fungi of North America Oneirogens Poisonous fungi Psychoactive fungi Fungi described in 1753 Soma (drink) Taxa named by Carl Linnaeus Fungi of the United Kingdom Fungus species
Amanita muscaria
[ "Biology", "Environmental_science" ]
7,731
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
59,392
https://en.wikipedia.org/wiki/Surface%20anatomy
Surface anatomy (also called superficial anatomy and visual anatomy) is the study of the external features of the body of an animal. In birds, this is termed topography. Surface anatomy deals with anatomical features that can be studied by sight, without dissection. As such, it is a branch of gross anatomy, along with endoscopic and radiological anatomy. Surface anatomy is a descriptive science. In particular, in the case of human surface anatomy, these are the form and proportions of the human body and the surface landmarks which correspond to deeper structures hidden from view, both in static pose and in motion. In addition, the science of surface anatomy includes the theories and systems of body proportions and related artistic canons. The study of surface anatomy is the basis for depicting the human body in classical art. Some pseudo-sciences such as physiognomy, phrenology and palmistry rely on surface anatomy. Human surface anatomy Surface anatomy of the thorax Knowledge of the surface anatomy of the thorax (chest) is particularly important because it is one of the areas most frequently subjected to physical examination, like auscultation and percussion. In cardiology, Erb's point refers to the third intercostal space on the left sternal border where S2 heart sound is best auscultated. Some sources include the fourth left interspace. Human female breasts are located on the chest wall, most frequently between the second and sixth rib. Anatomical landmarks On the trunk of the body in the thoracic area, the shoulder in general is the acromial, while the curve of the shoulder is the deltoid. The back as a general area is the dorsum or dorsal area, and the lower back as the limbus or lumbar region. The shoulderblades are the scapular area and the breastbone is the sternal region. The abdominal area is the region between the chest and the pelvis. The breast is called the mamma or mammary, the armpit as the axilla and axillary, and the navel as the umbilicus and umbilical. The pelvis is the lower torso, between the abdomen and the thighs. The groin, where the thigh joins the trunk, are the inguen and inguinal area. The entire arm is referred to as the brachium and brachial, the front of the elbow as the antecubitis and antecubital, the back of the elbow as the olecranon or olecranal, the forearm as the antebrachium and antebrachial, the wrist as the carpus and carpal area, the hand as the manus and manual, the palm as the palma and palmar, the thumb as the pollex, and the fingers as the digits, phalanges, and phalangeal. The buttocks are the gluteus or gluteal region and the pubic area is the pubis. Anatomists divide the lower limb into the thigh (the part of the limb between the hip and the knee) and the leg (which refers only to the area of the limb between the knee and the ankle). The thigh is the femur and the femoral region. The kneecap is the patella and patellar while the back of the knee is the popliteus and popliteal area. The leg (between the knee and the ankle) is the crus and crural area, the lateral aspect of the leg is the peroneal area, and the calf is the sura and sural region. The ankle is the tarsus and tarsal, and the heel is the calcaneus or calcaneal. The foot is the pes and pedal region, and the sole of the foot the planta and plantar. As with the fingers, the toes are also called the digits, phalanges, and phalangeal area. The big toe is referred to as the hallux. List of features Following are lists of surface anatomical features in humans and other animals. 
Sorted roughly from head to tail, cranial to caudal. Homologues share a bullet point and are separated by commas. Subcomponents are nested. Class in which component occurs in italic. In humans In other animals Head Tentacle Cephalopoda Antler Crest Hood Horn Mane Eye Ear Snout Nose, Trunk Nostril Whiskers Beak Aves only, Mouth Lip not in Aves Philtrum Jaw not in Aves Gums not in Aves Teeth not in Aves, Tusk Tongue Throat Vocal sac Ranidae Vertebral column (extends dorsally) Thorax Udder, Mammary gland Gills Arm Mammalia, Amphibia, Fin Fish, Wing Aves Elbow Hand Fingers (Thumb: Primate) Knee Leg Foot Toe Hoof, Claw, Nail (anatomy), Nail (beak) Webbing Abdomen Pouch Marsupialia Gastro-genitourinary system Vulva (female) Placentalia Penis (male) Amniota Scrotum (male) Boreoeutheria Urogenital papillae Teleostei Cloaca Aves, Elasmobranchii, Reptilia, Amphibia, Monotremata, Sarcopterygii Anus Theria, Teleostei, Invertebrates Skin Vertebrata Feather Aves, Scale, Hair Mammalia, Fur Mammalia Shell Tail See also Anatomy Inspection (medicine) List of images in Gray's Anatomy: XII. Surface anatomy and Surface Markings Palpation Notes References Standring, Susan (2008) Gray's Anatomy: The Anatomical Basis of Clinical Practice, 39th Edition. . Human surface anatomy photos at pp. 947, 1406-1410 Figs. 56.3, 110.12, 110.13, 110.15, 110.22 Further reading Anatomy Human anatomy Human body Human surface anatomy
Surface anatomy
[ "Physics", "Biology" ]
1,222
[ "Anatomy", "Human body", "Physical objects", "Matter" ]
59,405
https://en.wikipedia.org/wiki/Initial%20and%20terminal%20objects
In category theory, a branch of mathematics, an initial object of a category is an object in such that for every object in , there exists precisely one morphism . The dual notion is that of a terminal object (also called terminal element): is terminal if for every object in there exists exactly one morphism . Initial objects are also called coterminal or universal, and terminal objects are also called final. If an object is both initial and terminal, it is called a zero object or null object. A pointed category is one with a zero object. A strict initial object is one for which every morphism into is an isomorphism. Examples The empty set is the unique initial object in Set, the category of sets. Every one-element set (singleton) is a terminal object in this category; there are no zero objects. Similarly, the empty space is the unique initial object in Top, the category of topological spaces and every one-point space is a terminal object in this category. In the category Rel of sets and relations, the empty set is the unique initial object, the unique terminal object, and hence the unique zero object. In the category of pointed sets (whose objects are non-empty sets together with a distinguished element; a morphism from to being a function with ), every singleton is a zero object. Similarly, in the category of pointed topological spaces, every singleton is a zero object. In Grp, the category of groups, any trivial group is a zero object. The trivial object is also a zero object in Ab, the category of abelian groups, Rng the category of pseudo-rings, R-Mod, the category of modules over a ring, and K-Vect, the category of vector spaces over a field. See Zero object (algebra) for details. This is the origin of the term "zero object". In Ring, the category of rings with unity and unity-preserving morphisms, the ring of integers Z is an initial object. The zero ring consisting only of a single element is a terminal object. In Rig, the category of rigs with unity and unity-preserving morphisms, the rig of natural numbers N is an initial object. The zero rig, which is the zero ring, consisting only of a single element is a terminal object. In Field, the category of fields, there are no initial or terminal objects. However, in the subcategory of fields of fixed characteristic, the prime field is an initial object. Any partially ordered set can be interpreted as a category: the objects are the elements of , and there is a single morphism from to if and only if . This category has an initial object if and only if has a least element; it has a terminal object if and only if has a greatest element. Cat, the category of small categories with functors as morphisms has the empty category, 0 (with no objects and no morphisms), as initial object and the terminal category, 1 (with a single object with a single identity morphism), as terminal object. In the category of schemes, Spec(Z), the prime spectrum of the ring of integers, is a terminal object. The empty scheme (equal to the prime spectrum of the zero ring) is an initial object. A limit of a diagram F may be characterised as a terminal object in the category of cones to F. Likewise, a colimit of F may be characterised as an initial object in the category of co-cones from F. In the category ChR of chain complexes over a commutative ring R, the zero complex is a zero object. In a short exact sequence of the form , the initial and terminal objects are the anonymous zero object. This is used frequently in cohomology theories. 
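To make the Set example above concrete, here is a minimal sketch in Lean 4 (core only, no Mathlib), using the built-in types Empty and Unit as stand-ins for the empty set and a one-element set; the definition names fromEmpty and toUnit are illustrative and not taken from any library:

-- In the category of types and functions, Empty behaves as an initial
-- object and Unit as a terminal object.

-- For every type A there is a map out of Empty (defined vacuously) ...
def fromEmpty (A : Type) : Empty → A :=
  fun e => nomatch e

-- ... and a map into Unit (ignore the input).
def toUnit (A : Type) : A → Unit :=
  fun _ => ()

-- Uniqueness: any two maps out of Empty agree ...
example (A : Type) (f g : Empty → A) : f = g :=
  funext fun e => nomatch e

-- ... and any two maps into Unit agree (this relies on definitional
-- eta for structures, which makes every term of Unit equal to ()).
example (A : Type) (f g : A → Unit) : f = g :=
  funext fun _ => rfl

The same pattern, with arrows reversed, is what the article means when it says initial and terminal objects are dual notions.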
Properties Existence and uniqueness Initial and terminal objects are not required to exist in a given category. However, if they do exist, they are essentially unique. Specifically, if and are two different initial objects, then there is a unique isomorphism between them. Moreover, if is an initial object then any object isomorphic to is also an initial object. The same is true for terminal objects. For complete categories there is an existence theorem for initial objects. Specifically, a (locally small) complete category has an initial object if and only if there exist a set ( a proper class) and an -indexed family of objects of such that for any object of , there is at least one morphism for some . Equivalent formulations Terminal objects in a category may also be defined as limits of the unique empty diagram . Since the empty category is vacuously a discrete category, a terminal object can be thought of as an empty product (a product is indeed the limit of the discrete diagram , in general). Dually, an initial object is a colimit of the empty diagram and can be thought of as an empty coproduct or categorical sum. It follows that any functor which preserves limits will take terminal objects to terminal objects, and any functor which preserves colimits will take initial objects to initial objects. For example, the initial object in any concrete category with free objects will be the free object generated by the empty set (since the free functor, being left adjoint to the forgetful functor to Set, preserves colimits). Initial and terminal objects may also be characterized in terms of universal properties and adjoint functors. Let 1 be the discrete category with a single object (denoted by •), and let be the unique (constant) functor to 1. Then An initial object in is a universal morphism from • to . The functor which sends • to is left adjoint to U. A terminal object in is a universal morphism from to •. The functor which sends • to is right adjoint to . Relation to other categorical constructions Many natural constructions in category theory can be formulated in terms of finding an initial or terminal object in a suitable category. A universal morphism from an object to a functor can be defined as an initial object in the comma category . Dually, a universal morphism from to is a terminal object in . The limit of a diagram is a terminal object in , the category of cones to . Dually, a colimit of is an initial object in the category of cones from . A representation of a functor to Set is an initial object in the category of elements of . The notion of final functor (respectively, initial functor) is a generalization of the notion of final object (respectively, initial object). Other properties The endomorphism monoid of an initial or terminal object is trivial: . If a category has a zero object , then for any pair of objects and in , the unique composition is a zero morphism from to . References This article is based in part on PlanetMath's article on examples of initial and terminal objects. Limits (category theory) Objects (category theory)
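As a worked complement to the existence-and-uniqueness discussion in the Properties section above, the standard argument that any two initial objects are isomorphic via a unique isomorphism can be written out in a few lines of LaTeX-style notation (a sketch; the symbols I, I', f, g are arbitrary names for this illustration):

% Let I and I' be initial objects of a category C.
\[
  \exists!\, f \colon I \to I'
  \quad\text{and}\quad
  \exists!\, g \colon I' \to I
  \qquad \text{(initiality of $I$ and of $I'$)}
\]
\[
  g \circ f \colon I \to I
  \quad\text{and}\quad
  \mathrm{id}_I \colon I \to I
  \;\Longrightarrow\;
  g \circ f = \mathrm{id}_I
  \qquad \text{(initiality of $I$: there is only one morphism $I \to I$)}
\]
\[
  \text{Symmetrically, } f \circ g = \mathrm{id}_{I'},
  \text{ so } f \text{ is an isomorphism, and it is unique because it is the only morphism } I \to I'.
\]

The dual argument, with all arrows reversed, gives the corresponding statement for terminal objects.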
Initial and terminal objects
[ "Mathematics" ]
1,432
[ "Objects (category theory)", "Mathematical structures", "Category theory", "Limits (category theory)" ]
59,407
https://en.wikipedia.org/wiki/Pea
Pea (pisum in Latin) is a pulse, vegetable or fodder crop, but the word often refers to the seed or sometimes the pod of this flowering plant species. Carl Linnaeus gave the species the scientific name Pisum sativum in 1753 (meaning cultivated pea). Some sources now treat it as Lathyrus oleraceus; however the need and justification for the change is disputed. Each pod contains several seeds (peas), which can have green or yellow cotyledons when mature. Botanically, pea pods are fruit, since they contain seeds and develop from the ovary of a (pea) flower. The name is also used to describe other edible seeds from the Fabaceae such as the pigeon pea (Cajanus cajan), the cowpea (Vigna unguiculata), the seeds from several species of Lathyrus and is used as a compound form for example Sturt's desert pea. Peas are annual plants, with a life cycle of one year. They are a cool-season crop grown in many parts of the world; planting can take place from winter to early summer depending on location. The average pea weighs between . The immature peas (and in snow peas the tender pod as well) are used as a vegetable, fresh, frozen or canned; varieties of the species typically called field peas are grown to produce dry peas like the split pea shelled from a matured pod. These are the basis of pease porridge and pea soup, staples of medieval cuisine; in Europe, consuming fresh immature green peas was an innovation of early modern cuisine. Description A pea is a most commonly green, occasionally golden yellow, or infrequently purple pod-shaped vegetable, widely grown as a cool-season vegetable crop. The seeds may be planted as soon as the soil temperature reaches , with the plants growing best at temperatures of . They do not thrive in the summer heat of warmer temperate and lowland tropical climates, but do grow well in cooler, high-elevation, tropical areas. Many cultivars reach maturity about 60 days after planting. Peas have both low-growing and vining cultivars. The vining cultivars grow thin tendrils from leaves that coil around any available support and can climb to be high. A traditional approach to supporting climbing peas is to thrust branches pruned from trees or other woody plants upright into the soil, providing a lattice for the peas to climb. Branches used in this fashion are called pea sticks or sometimes pea brush. Metal fences, twine, or netting supported by a frame are used for the same purpose. In dense plantings, peas give each other some measure of mutual support. Pea plants can self-pollinate. History The wild pea is restricted to the Mediterranean Basin and the Near East. The earliest archaeological finds of peas date from the late Neolithic era of current Syria, Anatolia, Israel, Iraq, Jordan and Greece. In Egypt, early finds date from –4400 BC in the Nile delta area, and from c. 3800–3600 BC in Upper Egypt. The pea was also present in Georgia in the 5th millennium BC. Farther east, the finds are younger. Peas were present in Afghanistan c. 2000 BC, in Harappan civilization around modern-day Pakistan and western- and northwestern India in 2250–1750 BC. In the second half of the 2nd millennium BC, this legume crop appears in the Ganges Basin and southern India. In early times, peas were grown mostly for their dry seeds. From plants growing wild in the Mediterranean Basin, constant selection since the Neolithic dawn of agriculture improved their yield. 
In the early 3rd century BC, Theophrastus mentions peas among the legumes that are sown late in the winter because of their tenderness. In the first century AD, Columella mentions them in De re rustica, when Roman legionaries still gathered wild peas from the sandy soils of Numidia and Judea to supplement their rations. In the Middle Ages, field peas are constantly mentioned, as they were the staple that kept famine at bay, as Charles the Good, count of Flanders, noted explicitly in 1124. Green "garden" peas, eaten immature and fresh, were an innovative luxury of Early Modern Europe. In England, the distinction between field peas and garden peas dates from the early 17th century: John Gerard and John Parkinson both mention garden peas. Sugar peas, which the French called , because they were eaten pods and all, were introduced to France from the market gardens of Holland in the time of Henri IV, through the French ambassador. Green peas were introduced from Genoa to the court of Louis XIV of France in January 1660, with some staged fanfare. A hamper of them was presented before the King. They were shelled by the Savoyan comte de Soissons, who had married a niece of Cardinal Mazarin. Little dishes of peas were then presented to the King, the Queen, Cardinal Mazarin and Monsieur, the king's brother. Immediately established and grown for earliness warmed with manure and protected under glass, they were still a luxurious delicacy in 1696, when Mme de Maintenon and Mme de Sevigné each reported that they were "a fashion, a fury". The world’s first sweet tasting pea was developed in the 18th century by amateur plant breeder Thomas Edward Knight of Downton, near Salisbury, England. Modern split peas, with their indigestible skins rubbed off, are a development of the later 19th century. The top producer of green peas – by far – is China with 12.2 million tons, followed by India (4.8 million tons), USA (0.31 million tons), France (0.23 million tons) and Egypt (0.15 million tons). United Kingdom, Pakistan, Algeria, Peru and Turkey complete the top 10. Etymology The term pea originates from the Latin word , which is the latinisation of the Greek (), neuter variant form of () 'pea'. It was adopted into English as the noun pease (plural peasen), as in pease pudding. However, by analogy with other plurals ending in -s, speakers began construing pease as a plural and constructing the singular form by dropping the -s, giving the term pea. This process is known as back-formation. Composition Nutrition Raw green peas are 79% water, 14% carbohydrates, 5% protein, and contain negligible fat (table). In a reference amount of , raw green peas supply of food energy, and are a rich source (20% or more of the Daily Value, DV) of vitamin C (48% DV), vitamin K, thiamine, and manganese, with several B vitamins and dietary minerals in moderate amounts (11–16% DV) (table). Genome The pea karyotype consists of seven chromosomes, five of which are acrocentric and two submetacentric. Despite its scientific popularity, its relatively large genome size (4.45Gb) made it challenging to sequence compared to other legumes such as Medicago truncatula and soybeans. The International Pea Genome Sequencing Consortium was formed to develop the first pea reference genome, and the draft assembly was officially announced in September 2019. It covers 88% of the genome (3.92Gb) and predicted 44,791 gene-coding sequences. The pea used for the assembly was the inbred French cultivar "Caméor". 
Varieties Garden peas There are many varieties (cultivars) of garden peas. Some of the most common varieties are listed here. PMR indicates some degree of powdery mildew resistance; afila types, also called semi-leafless, have clusters of tendrils instead of leaves. Unless otherwise noted these are so called dwarf varieties which grow to an average height of about 1m. Giving the vines support is recommended, but not required. Extra dwarf are suitable for container growing, reaching only about 25 cm. Tall varieties grow to about 2m with support required. Alaska, 55 days (smooth seeded) Tom Thumb / Half Pint, 55 days (heirloom, extra dwarf) Thomas Laxton (heirloom) / Laxton's Progress / Progress #9, 60–65 days Mr. Big, 60 days, 2000 AAS winner Little Marvel, 63 days, 1934 AAS winner Early Perfection, 65 days Kelvedon Wonder, 65 days, 1997 RHS AGM winner Sabre, 65 days, PMR Homesteader / Lincoln, 67 days (heirloom, known as Greenfeast in Australia and New Zealand) Miragreen, 68 days (tall climber) Serge, 68 days, PMR, afila Wando, 68 days Green Arrow, 70 days Recruit, 70 days, PMR, afila Tall Telephone / Alderman, 75 days (heirloom, tall climber) Edible-pod peas Some peas lack the tough membrane inside the pod wall and have tender edible pods. There are two main types: Snow peas have flat pods with thin pod walls. Pods and seeds are eaten when they are very young. Snap peas or sugar snap peas have rounded pods with thick pod walls. Pods and seeds are eaten before maturity. The name sugar pea can include both types or be synonymous with either snow peas or snap peas in different dictionaries. Likewise mangetout (; from , 'eat-all pea'). Snow peas and snap peas both belong to Macrocarpon Group, a cultivar group based on the variety Pisum sativum var. macrocarpum Ser. named in 1825. It was described as having very compressed non-leathery edible pods in the original publication. The scientific name Pisum sativum var. saccharatum Ser. is often misused for snow peas. The variety under this name was described as having sub-leathery and compressed-terete pods and a French name of petit pois. The description is inconsistent with the appearance of snow peas, and therefore botanists have replaced this name with Pisum sativum var. macrocarpum. Field peas The field pea is a type of pea sometimes called P. sativum subsp. arvense (L.) Asch. It is also known as dun (grey-brown) pea, Kapucijner pea, or Austrian winter pea, and is one of the oldest domesticated crops, cultivated for at least 7,000 years. Field peas are now grown in many countries for both human consumption and stockfeed. There are several cultivars and colors including blue, dun (brown), maple and white. This pea should not be confused with the cowpea (Vigna unguiculata) which is sometimes called the "field pea" in warmer climates. It is a climbing annual legume with weak, viny, and relatively succulent stems. Vines often are 4 to 5 feet (120 to 150 cm) long, but when grown alone, field pea's weak stems prevent it from growing more than 1.5 to 2 feet (45 to 60 cm) tall. Leaves have two leaflets and a tendril. Flowers are white, pink, or purple. Pods carry seeds that are large (4,000 seeds/lb), nearly spherical, and white, gray, green, or brown. The root system is relatively shallow and small, but well nodulated. The field pea is a cool-season legume crop that is grown on over 25 million acres worldwide. 
It has been an important grain legume crop for millennia, seeds showing domesticated characteristics dating from at least 7000 years ago have been found in archaeological sites around what is now Turkey. Field peas or "dry peas" are marketed as a dry, shelled product for either human or livestock food, unlike the garden pea, which is marketed as a fresh or canned vegetable. The major producing countries of field peas are Russia and China, followed by Canada, Europe, Australia and the United States. Europe, Australia, Canada and the United States raise over 4.5 million acres (18,000 km²) and are major exporters of peas. In 2002, there were approximately 300,000 acres (1,200 km²) of field peas grown in the United States. Uses Culinary In modern times peas are usually boiled or steamed, which breaks down the cell walls and makes them taste sweeter and the nutrients more bioavailable. Along with broad beans and lentils, these formed an important part of the diet of most people in the Middle East, North Africa and Europe during the Middle Ages. By the 17th and 18th centuries, it had become popular to eat peas "green", that is, while they are immature and right after they are picked. New cultivars of peas were developed by the English during this time, which became known as "garden" or "English" peas. The popularity of green peas spread to North America. Thomas Jefferson grew more than 30 cultivars of peas on his estate. With the invention of canning, peas were one of the first vegetables to be canned. Fresh peas are often eaten boiled and flavored with butter and/or spearmint as a side dish vegetable. Salt and pepper are also commonly added to peas when served. Fresh peas are also used in pot pies, salads and casseroles. Pod peas (snow peas and snap peas) are used in stir-fried dishes, particularly those in American Chinese cuisine. Pea pods do not keep well once picked, and if not used quickly, are best preserved by drying, canning or freezing within a few hours of harvest. In India, fresh peas are used in various dishes such as aloo matar (curried potatoes with peas) or mattar paneer (paneer cheese with peas), though they can be substituted with frozen peas as well. Peas are also eaten raw, as they are sweet when fresh off the bush. Green peas known as hasiru batani in Kannada are used to make curry and gasi. Split peas are also used to make dal, particularly in Guyana, and Trinidad, where there is a significant population of Indians. Dried peas are often made into a soup or simply eaten on their own. In Japan, China, Taiwan and some Southeast Asian countries, including Thailand, the Philippines and Malaysia, peas are roasted and salted, and eaten as snacks. In the Philippines, peas, while still in their pods, are a common ingredient in viands and pansit. In the UK, dried yellow or green split peas are used to make pease pudding (or "pease porridge"), a traditional dish. In North America, a similarly traditional dish is split pea soup. Pea soup is eaten in many other parts of the world, including northern Europe, parts of middle Europe, Russia, Iran, Iraq and India. In Chinese cuisine, the tender new growth [leaves and stem] (豆苗; ) are commonly used in stir-fries. Much like picking the leaves for tea, the farmers pick the tips off of the pea plant. In Greece, Tunisia, Turkey, Cyprus, and other parts of the Mediterranean, peas are made into a stew with lamb and potatoes. In Hungary and Serbia, pea soup is often served with dumplings and spiced with hot paprika. 
In the United Kingdom, dried, rehydrated and mashed marrowfat peas, or cooked green split peas, known as mushy peas, are popular, originally in the north of England, but now ubiquitously, and especially as an accompaniment to fish and chips or meat pies, particularly in fish and chip shops. Sodium bicarbonate is sometimes added to soften the peas. In 2005, a poll of 2,000 people revealed the pea to be Britain's seventh favourite culinary vegetable. Processed peas are mature peas which have been dried, soaked and then heat treated (processed) to prevent spoilage—in the same manner as pasteurizing. Cooked peas are sometimes sold dried and coated with wasabi, salt, or other spices. In North America pea milk is produced and sold as an alternative to cow milk for a variety of reasons. Pea sprouts In East Asia, pea sprouts or shoots (; ) were once dedicated cuisine when the plant was less highly available. Today, when the plant can be easily grown, fresh pea shoots are available in supermarkets or may be grown at home. Manufacturing Frozen peas In order to freeze and preserve peas, they must first be grown, picked, and shelled. Usually, the more tender the peas are, the more likely that they will be used in the final product. The peas must be put through the process of freezing shortly after being picked so that they do not spoil too soon. Once the peas have been selected, they are placed in ice water and allowed to cool. After, they are sprayed with water to remove any residual dirt or dust that may remain on them. The next step is blanching. The peas are boiled for a few minutes to remove any enzymes that may shorten their shelf life. They are then cooled and removed from the water. The final step is the actual freezing to produce the final product. This step may vary considerably; some companies freeze their peas by air blast freezing, where the vegetables are put through a tunnel at high speeds and frozen by cold air. Finally, the peas are packaged and shipped out for retail sale. Science In the mid-19th century, Austrian monk Gregor Mendel's observations of pea pods led to the principles of Mendelian genetics, the foundation of modern genetics. He ended up growing and examining about 28,000 pea plants in the course of his experiments. Mendel chose peas for his experiments because he could grow them easily, pure-bred strains were readily available, and the structure of the flowers protect them from cross-pollination, and cross pollination was easy. Mendel cross-bred tall and dwarf pea plants, green and yellow peas, purple and white flowers, wrinkled and smooth peas, and a few other traits. He then observed the resulting offspring. In each of these cases, one trait is dominant and all the offspring, or Filial-1 (abbreviated F1) generation, showed the dominant trait. Then he allowed the F1 generation to self pollinate and observed their offspring, the Filial-2 (abbreviated F2) generation. The F2 plants had the dominant trait in approximately a 3:1 ratio. He studied later generations of self pollinated plants, and performed crosses to determine the nature of the pollen and egg cells. Mendel reasoned that each parent had a 'vote' in the appearance of the offspring, and the non-dominant, or recessive, trait appeared only when it was inherited from both parents. He did further experiments that showed each trait is separately inherited. 
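The 3:1 ratio in the F2 generation follows directly from each parent contributing one of its two alleles at random. The short Python sketch below makes that arithmetic concrete; it is an illustrative simulation only (the allele labels T/t, the trait names, and the population size are arbitrary choices, not a reconstruction of Mendel's actual data):

import random

def cross(parent1, parent2):
    """Offspring genotype: one allele drawn at random from each parent."""
    return (random.choice(parent1), random.choice(parent2))

def shows_dominant(genotype, dominant="T"):
    """A plant shows the dominant trait if it carries at least one dominant allele."""
    return dominant in genotype

random.seed(42)

# P generation: pure-breeding tall (TT) crossed with pure-breeding dwarf (tt).
p_tall, p_dwarf = ("T", "T"), ("t", "t")

# F1 generation: every offspring is Tt, so all show the dominant (tall) trait.
f1 = [cross(p_tall, p_dwarf) for _ in range(10_000)]
assert all(shows_dominant(g) for g in f1)

# F2 generation: self-pollinate the F1 plants (Tt x Tt).
f2 = [cross(plant, plant) for plant in f1]
tall = sum(shows_dominant(g) for g in f2)
dwarf = len(f2) - tall

# Prints F2 counts; with 10,000 plants the tall:dwarf ratio comes out close to 3:1.
print(f"F2 tall:dwarf = {tall}:{dwarf} (ratio ~ {tall / dwarf:.2f}:1)")

Run as-is, the script reproduces the expected ratio because the Tt x Tt cross yields genotypes TT, Tt, and tt in proportions 1:2:1, of which the first two show the dominant trait.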
Unwittingly, Mendel had solved a major problem with Charles Darwin's theory of evolution: how new traits were preserved and not blended back into the population, a question Darwin himself did not answer. Mendel's work was published in an obscure Austrian journal and was not rediscovered until about 1900. Potential for adverse effects Some people experience allergic reactions to peas, as well as lentils, with vicilin or convicilin as the most common allergens. Favism, or Fava-bean-ism, is a genetic deficiency of the enzyme glucose-6-phosphate dehydrogenase that affects Jews, other Middle Eastern Semitic peoples, and other descendants of the Mediterranean coastal regions. In this condition, the toxic reaction to eating most, if not all, beans is hemolytic anemia, and in severe cases, the released circulating free hemoglobin causes acute kidney injury. Nitrogen fixation Peas, like many legumes, contain symbiotic bacteria called Rhizobia within root nodules of their root systems. These bacteria have the special ability to fix nitrogen from atmospheric, molecular nitrogen (N2) into ammonia (NH3). The chemical reaction is:
N2 + 8 H+ + 8 e− → 2 NH3 + H2
Ammonia is then converted to another form, ammonium (NH4+), usable by (some) plants, by the following reaction:
NH3 + H2O → NH4+ + OH−
The root nodules of peas and other legumes are sources of nitrogen that they can use to make amino acids, constituents of proteins. Hence, legumes are good sources of plant protein. When a pea plant dies in the field, for example following the harvest, all of its remaining nitrogen, incorporated into amino acids inside the remaining plant parts, is released back into the soil. In the soil, the amino acids are converted to nitrate (NO3−), which is available to other plants, thereby serving as fertilizer for future crops. Cultivation Grading Pea grading involves sorting peas by size, in which the smallest peas are graded as the highest quality for their tenderness. Brines may be used, in which peas are floated, from which their density can be determined. Pests and diseases A variety of diseases affect peas through a number of pathogens, including insects, viruses, bacteria and fungi. In particular, viral diseases of peas have worldwide economic importance. Additionally, insects such as the pea leaf weevil (Sitona lineatus) can damage peas and other pod fruits. The pea leaf weevil is native to Europe, but has spread to other places such as Alberta, Canada. They are about — long and are distinguishable by three light-coloured stripes running length-wise down the thorax. The weevil larvae feed on the root nodules of pea plants, which are essential to the plants' supply of nitrogen, and thus diminish leaf and stem growth. Adult weevils feed on the leaves and create a notched, "c-shaped" appearance on the outside of the leaves. The pea moth can be a serious pest, producing caterpillars that resemble small white maggots in the pea pods. The caterpillars eat the developing peas, making them unsightly and unsuitable for culinary use. Prior to the use of modern insecticides, pea moth caterpillars were a very common sight in pea pods. See also Black-eyed pea Black pea Chickpea Dixie lee pea Sweet pea Cowpea Pea moth References Bibliography European Association for Grain Legume Research (AEP). Pea. https://web.archive.org/web/20061017214408/http://www.grainlegumes.com/default.asp?id_biblio=52. Hernández Bermejo, J. E. & León, J. (1992). Neglected crops: 1492 from a different perspective, Food and Agricultural Organization of the United Nations (FAO) Contents Muehlbauer, F. J. 
and Tullu, A., (1997). Pisum sativum L. Purdue University. Pea Oelke, E. A., Oplinger E. S., et al. (1991). Dry Field Pea. University of Wisconsin.Dry Field Pea External links Sorting Pisum names USDA plant profile Foodcomp Edible legumes oleraceus Fruit vegetables Plants described in 1753 Plant models Taxa named by Carl Linnaeus Founder crops
Pea
[ "Biology" ]
4,758
[ "Model organisms", "Plant models" ]
59,414
https://en.wikipedia.org/wiki/Nitrogen%20cycle
The nitrogen cycle is the biogeochemical cycle by which nitrogen is converted into multiple chemical forms as it circulates among atmospheric, terrestrial, and marine ecosystems. The conversion of nitrogen can be carried out through both biological and physical processes. Important processes in the nitrogen cycle include fixation, ammonification, nitrification, and denitrification. The majority of Earth's atmosphere (78%) is atmospheric nitrogen, making it the largest source of nitrogen. However, atmospheric nitrogen has limited availability for biological use, leading to a scarcity of usable nitrogen in many types of ecosystems. The nitrogen cycle is of particular interest to ecologists because nitrogen availability can affect the rate of key ecosystem processes, including primary production and decomposition. Human activities such as fossil fuel combustion, use of artificial nitrogen fertilizers, and release of nitrogen in wastewater have dramatically altered the global nitrogen cycle. Human modification of the global nitrogen cycle can negatively affect the natural environment system and also human health. Processes Nitrogen is present in the environment in a wide variety of chemical forms including organic nitrogen, ammonium (NH4+), nitrite (NO2−), nitrate (NO3−), nitrous oxide (N2O), nitric oxide (NO) or inorganic nitrogen gas (N2). Organic nitrogen may be in the form of a living organism, humus or in the intermediate products of organic matter decomposition. The processes in the nitrogen cycle transform nitrogen from one form to another. Many of those processes are carried out by microbes, either in their effort to harvest energy or to accumulate nitrogen in a form needed for their growth. For example, the nitrogenous wastes in animal urine are broken down by nitrifying bacteria in the soil to be used by plants. The diagram alongside shows how these processes fit together to form the nitrogen cycle. Nitrogen fixation The conversion of nitrogen gas (N2) into nitrates and nitrites through atmospheric, industrial and biological processes is called nitrogen fixation. Atmospheric nitrogen must be processed, or "fixed", into a usable form to be taken up by plants. Between 5 and 10 billion kg per year are fixed by lightning strikes, but most fixation is done by free-living or symbiotic bacteria known as diazotrophs. These bacteria have the nitrogenase enzyme that combines gaseous nitrogen with hydrogen to produce ammonia, which is converted by the bacteria into other organic compounds. Most biological nitrogen fixation occurs by the activity of molybdenum (Mo)-nitrogenase, found in a wide variety of bacteria and some Archaea. Mo-nitrogenase is a complex two-component enzyme that has multiple metal-containing prosthetic groups. An example of free-living bacteria is Azotobacter. Symbiotic nitrogen-fixing bacteria such as Rhizobium usually live in the root nodules of legumes (such as peas, alfalfa, and locust trees). Here they form a mutualistic relationship with the plant, producing ammonia in exchange for carbohydrates. Because of this relationship, legumes will often increase the nitrogen content of nitrogen-poor soils. A few non-legumes can also form such symbioses. Today, about 30% of the total fixed nitrogen is produced industrially using the Haber-Bosch process, which uses high temperatures and pressures to convert nitrogen gas and a hydrogen source (natural gas or petroleum) into ammonia. Assimilation Plants can absorb nitrate or ammonium from the soil by their root hairs.
If nitrate is absorbed, it is first reduced to nitrite ions and then ammonium ions for incorporation into amino acids, nucleic acids, and chlorophyll. In plants that have a symbiotic relationship with rhizobia, some nitrogen is assimilated in the form of ammonium ions directly from the nodules. It is now known that there is a more complex cycling of amino acids between Rhizobia bacteroids and plants. The plant provides amino acids to the bacteroids so ammonia assimilation is not required, and the bacteroids pass amino acids (with the newly fixed nitrogen) back to the plant, thus forming an interdependent relationship. While many animals, fungi, and other heterotrophic organisms obtain nitrogen by ingestion of amino acids, nucleotides, and other small organic molecules, other heterotrophs (including many bacteria) are able to utilize inorganic compounds, such as ammonium, as sole N sources. Utilization of various N sources is carefully regulated in all organisms. Ammonification When a plant or animal dies or an animal expels waste, the initial form of nitrogen is organic. Bacteria or fungi convert the organic nitrogen within the remains back into ammonium (NH4+), a process called ammonification or mineralization. Enzymes involved are: GS: Gln Synthetase (cytosolic & plastidic) GOGAT: Glu 2-oxoglutarate aminotransferase (Ferredoxin & NADH-dependent) GDH: Glu Dehydrogenase: Minor role in ammonium assimilation. Important in amino acid catabolism. Nitrification The conversion of ammonium to nitrate is performed primarily by soil-living bacteria and other nitrifying bacteria. In the primary stage of nitrification, the oxidation of ammonium (NH4+) is performed by bacteria such as the Nitrosomonas species, which converts ammonia to nitrites (NO2−). Other bacterial species, such as Nitrobacter, are responsible for the oxidation of the nitrites (NO2−) into nitrates (NO3−). It is important for the ammonia (NH3) to be converted to nitrates or nitrites because ammonia gas is toxic to plants. Due to their very high solubility and because soils are largely unable to retain anions, nitrates can enter groundwater. Elevated nitrate in groundwater is a concern for drinking water use because nitrate can interfere with blood-oxygen levels in infants and cause methemoglobinemia or blue-baby syndrome. Where groundwater recharges stream flow, nitrate-enriched groundwater can contribute to eutrophication, a process that leads to high algal population and growth, especially blue-green algal populations. While not directly toxic to fish life as ammonia is, nitrate can have indirect effects on fish if it contributes to this eutrophication. Nitrogen has contributed to severe eutrophication problems in some water bodies. Since 2006, the application of nitrogen fertilizer has been increasingly controlled in Britain and the United States. This is occurring along the same lines as control of phosphorus fertilizer, restriction of which is normally considered essential to the recovery of eutrophied waterbodies. Denitrification Denitrification is the reduction of nitrates back into nitrogen gas (N2), completing the nitrogen cycle. This process is performed by bacterial species such as Pseudomonas and Paracoccus, under anaerobic conditions. They use the nitrate as an electron acceptor in the place of oxygen during respiration. These facultatively (meaning optionally) anaerobic bacteria can also live in aerobic conditions. Denitrification happens in anaerobic conditions, e.g. waterlogged soils.
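The soil transformations described above (ammonification, nitrification and denitrification) are often approximated as first-order transfers between nitrogen pools. The Python sketch below is a deliberately simplified box model of that idea; the pool sizes and rate constants are invented for illustration and are not measured values.

```python
def step(pools, rates, dt=1.0):
    """Advance a toy soil-nitrogen box model by one time step (days).

    pools: nitrogen masses (kg N per hectare) for organic N, ammonium and
    nitrate; rates: first-order rate constants (per day). All numbers used
    here are illustrative only.
    """
    ammonified  = rates["ammonification"]  * pools["organic_N"] * dt
    nitrified   = rates["nitrification"]   * pools["ammonium"]  * dt
    denitrified = rates["denitrification"] * pools["nitrate"]   * dt

    pools["organic_N"] -= ammonified
    pools["ammonium"]  += ammonified - nitrified
    pools["nitrate"]   += nitrified - denitrified
    # Denitrified nitrogen leaves the soil as gas and is tracked separately.
    pools["N2_lost"] = pools.get("N2_lost", 0.0) + denitrified
    return pools

pools = {"organic_N": 100.0, "ammonium": 5.0, "nitrate": 10.0}
rates = {"ammonification": 0.02, "nitrification": 0.10, "denitrification": 0.05}
for day in range(30):
    pools = step(pools, rates)
print({k: round(v, 2) for k, v in pools.items()})
```

Even this crude sketch reproduces the qualitative behaviour discussed in the text: ammonium is an intermediate pool that stays small because nitrification consumes it quickly, while sustained denitrification steadily returns nitrogen to the gas phase.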
The denitrifying bacteria use nitrates in the soil to carry out respiration and consequently produce nitrogen gas, which is inert and unavailable to plants. Denitrification occurs in free-living microorganisms as well as obligate symbionts of anaerobic ciliates. Dissimilatory nitrate reduction to ammonium Dissimilatory nitrate reduction to ammonium (DNRA), or nitrate/nitrite ammonification, is an anaerobic respiration process. Microbes which undertake DNRA oxidise organic matter and use nitrate as an electron acceptor, reducing it to nitrite, then ammonium (NH4+). Both denitrifying and nitrate ammonification bacteria will be competing for nitrate in the environment, although DNRA acts to conserve bioavailable nitrogen as soluble ammonium rather than producing dinitrogen gas. Anaerobic ammonia oxidation The ANaerobic AMMonia OXidation process is also known as the ANAMMOX process, an abbreviation coined by joining the first syllables of each of these three words. This biological process is a redox comproportionation reaction, in which ammonia (the reducing agent giving electrons) and nitrite (the oxidizing agent accepting electrons) transfer three electrons and are converted into one molecule of diatomic nitrogen (N2) gas and two water molecules. This process makes up a major proportion of nitrogen conversion in the oceans. The stoichiometrically balanced formula for the ANAMMOX chemical reaction can be written as follows, where the ammonium ion stands in for the ammonia molecule, its conjugate base: NH4+ + NO2− → N2 + 2 H2O (ΔG° = ). This is an exergonic process (here also an exothermic reaction) releasing energy, as indicated by the negative value of ΔG°, the difference in Gibbs free energy between the products of reaction and the reagents. Other processes Though nitrogen fixation is the primary source of plant-available nitrogen in most ecosystems, in areas with nitrogen-rich bedrock, the breakdown of this rock also serves as a nitrogen source. Nitrate reduction is also part of the iron cycle: under anoxic conditions Fe(II) can donate an electron to nitrate and is oxidized to Fe(III), while the nitrate is reduced to nitrite, nitrous oxide, dinitrogen or ammonium, depending on the conditions and microbial species involved. The fecal plumes of cetaceans also act as a junction in the marine nitrogen cycle, concentrating nitrogen in the epipelagic zones of ocean environments before its dispersion through various marine layers, ultimately enhancing oceanic primary productivity. Marine nitrogen cycle The nitrogen cycle is an important process in the ocean as well. While the overall cycle is similar, there are different players and modes of transfer for nitrogen in the ocean. Nitrogen enters the water through precipitation, runoff, or as N2 from the atmosphere. Nitrogen cannot be utilized by phytoplankton as N2, so it must undergo nitrogen fixation, which is performed predominantly by cyanobacteria. Without supplies of fixed nitrogen entering the marine cycle, the fixed nitrogen would be used up in about 2000 years. Phytoplankton need nitrogen in biologically available forms for the initial synthesis of organic matter. Ammonia and urea are released into the water by excretion from plankton. Nitrogen sources are removed from the euphotic zone by the downward movement of the organic matter. This can occur from sinking of phytoplankton, vertical mixing, or sinking of waste of vertical migrators. The sinking results in ammonia being introduced at lower depths below the euphotic zone.
Bacteria are able to convert ammonia to nitrite and nitrate, but they are inhibited by light, so this must occur below the euphotic zone. Ammonification or mineralization is performed by bacteria to convert organic nitrogen to ammonia. Nitrification can then occur to convert the ammonium to nitrite and nitrate. Nitrate can be returned to the euphotic zone by vertical mixing and upwelling where it can be taken up by phytoplankton to continue the cycle. N2 can be returned to the atmosphere through denitrification. Ammonium is thought to be the preferred source of fixed nitrogen for phytoplankton because its assimilation does not involve a redox reaction and therefore requires little energy. Nitrate requires a redox reaction for assimilation but is more abundant, so most phytoplankton have adapted to have the enzymes necessary to undertake this reduction (nitrate reductase). There are a few notable and well-known exceptions that include most Prochlorococcus and some Synechococcus that can only take up nitrogen as ammonium. The nutrients in the ocean are not uniformly distributed. Areas of upwelling provide supplies of nitrogen from below the euphotic zone. Coastal zones provide nitrogen from runoff, and upwelling occurs readily along the coast. However, the rate at which nitrogen can be taken up by phytoplankton is decreased in oligotrophic waters year-round and in temperate waters in the summer, resulting in lower primary production. The distribution of the different forms of nitrogen varies throughout the oceans as well. Nitrate is depleted in near-surface water except in upwelling regions. Coastal upwelling regions usually have high nitrate and chlorophyll levels as a result of the increased production. However, there are regions of high surface nitrate but low chlorophyll that are referred to as HNLC (high nitrogen, low chlorophyll) regions. The best explanation for HNLC regions relates to iron scarcity in the ocean, which may play an important part in ocean dynamics and nutrient cycles. The input of iron varies by region and is delivered to the ocean by dust (from dust storms) and leached out of rocks. Iron is under consideration as the true limiting element to ecosystem productivity in the ocean. Ammonium and nitrite show a maximum concentration at 50–80 m (lower end of the euphotic zone) with decreasing concentration below that depth. This distribution can be accounted for by the fact that nitrite and ammonium are intermediate species. They are both rapidly produced and consumed through the water column. The amount of ammonium in the ocean is about 3 orders of magnitude less than nitrate. Between ammonium, nitrite, and nitrate, nitrite has the fastest turnover rate. It can be produced during nitrate assimilation, nitrification, and denitrification; however, it is immediately consumed again. New vs. regenerated nitrogen Nitrogen entering the euphotic zone is referred to as new nitrogen because it is newly arrived from outside the productive layer. The new nitrogen can come from below the euphotic zone or from outside sources. Outside sources are upwelling from deep water and nitrogen fixation. If the organic matter is eaten, respired, delivered to the water as ammonia, and re-incorporated into organic matter by phytoplankton, it is considered recycled/regenerated production. New production is an important component of the marine environment. One reason is that only continual input of new nitrogen can determine the total capacity of the ocean to produce a sustainable fish harvest.
Harvesting fish from regenerated nitrogen areas will lead to a decrease in nitrogen and therefore a decrease in primary production. This will have a negative effect on the system. However, if fish are harvested from areas of new nitrogen the nitrogen will be replenished. Future acidification As illustrated by the diagram on the right, additional carbon dioxide (CO2) is absorbed by the ocean and reacts with water; carbonic acid (H2CO3) is formed and broken down into both bicarbonate (HCO3−) and hydrogen (H+) ions (gray arrow), which reduces bioavailable carbonate (CO32−) and decreases ocean pH (black arrow). This is likely to enhance nitrogen fixation by diazotrophs (gray arrow), which utilize H+ ions to convert nitrogen into bioavailable forms such as ammonia (NH3) and ammonium ions (NH4+). However, as pH decreases, and more ammonia is converted to ammonium ions (gray arrow), there is less oxidation of ammonia to nitrite (NO2−), resulting in an overall decrease in nitrification and denitrification (black arrows). This in turn would lead to a further build-up of fixed nitrogen in the ocean, with the potential consequence of eutrophication. Gray arrows represent an increase while black arrows represent a decrease in the associated process. Human influences on the nitrogen cycle As a result of extensive cultivation of legumes (particularly soy, alfalfa, and clover), growing use of the Haber–Bosch process in the production of chemical fertilizers, and pollution emitted by vehicles and industrial plants, human beings have more than doubled the annual transfer of nitrogen into biologically available forms. In addition, humans have significantly contributed to the transfer of nitrogen trace gases from Earth to the atmosphere and from the land to aquatic systems. Human alterations to the global nitrogen cycle are most intense in developed countries and in Asia, where vehicle emissions and industrial agriculture are highest. Generation of Nr, reactive nitrogen, has increased over 10-fold in the past century due to global industrialisation. This form of nitrogen follows a cascade through the biosphere via a variety of mechanisms, and is accumulating as the rate of its generation is greater than the rate of denitrification. Nitrous oxide (N2O) has risen in the atmosphere as a result of agricultural fertilization, biomass burning, cattle and feedlots, and industrial sources. N2O has deleterious effects in the stratosphere, where it breaks down and acts as a catalyst in the destruction of atmospheric ozone. Nitrous oxide is also a greenhouse gas and is currently the third largest contributor to global warming, after carbon dioxide and methane. While not as abundant in the atmosphere as carbon dioxide, it is, for an equivalent mass, nearly 300 times more potent in its ability to warm the planet. Ammonia (NH3) in the atmosphere has tripled as a result of human activities. It is a reactant in the atmosphere, where it acts as an aerosol, decreasing air quality and clinging to water droplets, eventually resulting in nitric acid (HNO3) that produces acid rain. Atmospheric ammonia and nitric acid also damage respiratory systems. The very high temperature of lightning naturally produces small amounts of reactive nitrogen gases such as NOx, but high-temperature combustion has contributed to a 6- or 7-fold increase in the flux of NOx to the atmosphere. Its production is a function of combustion temperature - the higher the temperature, the more NOx is produced. Fossil fuel combustion is a primary contributor, but so are biofuels and even the burning of hydrogen.
However, the rate at which hydrogen is directly injected into the combustion chambers of internal combustion engines can be controlled to prevent the higher combustion temperatures that produce NOx. Ammonia and nitrous oxides actively alter atmospheric chemistry. They are precursors of tropospheric (lower atmosphere) ozone production, which contributes to smog and acid rain, damages plants and increases nitrogen inputs to ecosystems. Ecosystem processes can increase with nitrogen fertilization, but anthropogenic input can also result in nitrogen saturation, which weakens productivity and can damage the health of plants, animals, fish, and humans. Decreases in biodiversity can also result if higher nitrogen availability increases nitrogen-demanding grasses, causing a degradation of nitrogen-poor, species-diverse heathlands. Consequence of human modification of the nitrogen cycle Impacts on natural systems Increasing levels of nitrogen deposition have been shown to have several adverse effects on both terrestrial and aquatic ecosystems. Nitrogen gases and aerosols can be directly toxic to certain plant species, affecting the aboveground physiology and growth of plants near large point sources of nitrogen pollution. Changes to plant species may also occur as nitrogen compound accumulation increases availability in a given ecosystem, eventually changing the species composition, plant diversity, and nitrogen cycling. Ammonia and ammonium – two reduced forms of nitrogen – can be detrimental over time due to increased toxicity toward sensitive species of plants, particularly those that are accustomed to using nitrate as their source of nitrogen, causing poor development of their roots and shoots. Increased nitrogen deposition also leads to soil acidification, which increases base cation leaching in the soil and amounts of aluminum and other potentially toxic metals, along with decreasing the amount of nitrification occurring and increasing plant-derived litter. Due to the ongoing changes caused by high nitrogen deposition, an environment's susceptibility to ecological stress and disturbance – such as pests and pathogens – may increase, thus making it less resilient to situations that otherwise would have little impact on its long-term vitality. Additional risks posed by increased availability of inorganic nitrogen in aquatic ecosystems include water acidification; eutrophication of fresh and saltwater systems; and toxicity issues for animals, including humans. Eutrophication often leads to lower dissolved oxygen levels in the water column, including hypoxic and anoxic conditions, which can cause death of aquatic fauna. Relatively sessile benthos, or bottom-dwelling creatures, are particularly vulnerable because of their lack of mobility, though large fish kills are not uncommon. Oceanic dead zones near the mouth of the Mississippi in the Gulf of Mexico are a well-known example of algal bloom-induced hypoxia. The New York Adirondack Lakes, Catskills, Hudson Highlands, Rensselaer Plateau and parts of Long Island display the impact of nitric acid rain deposition, resulting in the killing of fish and many other aquatic species. Ammonia (NH3) is highly toxic to fish, and the level of ammonia discharged from wastewater treatment facilities must be closely monitored. Nitrification via aeration before discharge is often desirable to prevent fish deaths. Land application can be an attractive alternative to aeration.
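Ammonia toxicity to fish depends on how much of the total ammonia is present as un-ionized NH3 rather than ammonium, which is governed by the NH4+/NH3 acid-base equilibrium. The short Python sketch below computes that fraction from pH using the standard Henderson-Hasselbalch relationship; the pKa of about 9.25 applies near 25 °C (it shifts with temperature), and the example pH values are arbitrary.

```python
def unionized_ammonia_fraction(pH, pKa=9.25):
    """Fraction of total ammonia present as un-ionized (toxic) NH3.

    Uses the Henderson-Hasselbalch relationship for the NH4+/NH3 pair;
    pKa ~ 9.25 is representative of roughly 25 degrees C.
    """
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (7.0, 8.0, 9.0):
    frac = unionized_ammonia_fraction(pH)
    print(f"pH {pH}: {frac * 100:.2f}% of total ammonia is NH3")
```

At pH 7 well under one percent of the total ammonia is un-ionized, while near pH 9 the fraction approaches half, which is why the same total ammonia discharge can be far more hazardous in alkaline receiving waters.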
Impacts on human health: nitrate accumulation in drinking water Leakage of Nr (reactive nitrogen) from human activities can cause nitrate accumulation in the natural water environment, which can create harmful impacts on human health. Excessive use of N-fertilizer in agriculture has been a significant source of nitrate pollution in groundwater and surface water. Due to its high solubility and low retention by soil, nitrate can easily escape from the subsoil layer to the groundwater, causing nitrate pollution. Some other non-point sources for nitrate pollution in groundwater originate from livestock feeding, animal and human contamination, and municipal and industrial waste. Since groundwater often serves as the primary domestic water supply, nitrate pollution can be extended from groundwater to surface and drinking water during potable water production, especially for small community water supplies, where poorly regulated and unsanitary waters are used. The WHO standard for nitrate in drinking water is 50 mg L−1 for short-term exposure and 3 mg L−1 for chronic effects. Once it enters the human body, nitrate can react with organic compounds through nitrosation reactions in the stomach to form nitrosamines and nitrosamides, which are involved in some types of cancers (e.g., oral cancer and gastric cancer). Impacts on human health: air quality Human activities have also dramatically altered the global nitrogen cycle by producing nitrogenous gases associated with global atmospheric nitrogen pollution. There are multiple sources of atmospheric reactive nitrogen (Nr) fluxes. Agricultural sources of reactive nitrogen can produce atmospheric emission of ammonia (NH3), nitrogen oxides (NOx) and nitrous oxide (N2O). Combustion processes in energy production, transportation, and industry can also form new reactive nitrogen via the emission of NOx, an unintentional waste product. When those reactive nitrogens are released into the lower atmosphere, they can induce the formation of smog, particulate matter, and aerosols, all of which are major contributors to adverse effects on human health from air pollution. In the atmosphere, NO2 can be oxidized to nitric acid (HNO3), and it can further react with NH3 to form ammonium nitrate (NH4NO3), which facilitates the formation of particulate nitrate. Moreover, NH3 can react with other acid gases (sulfuric and hydrochloric acids) to form ammonium-containing particles, which are the precursors for the secondary organic aerosol particles in photochemical smog. See also References Cycle Biogeochemical cycle Soil biology Metabolism Biogeography
Nitrogen cycle
[ "Chemistry", "Biology" ]
4,787
[ "Biogeography", "Biogeochemical cycle", "Nitrogen cycle", "Biogeochemistry", "Soil biology", "Cellular processes", "Biochemistry", "Metabolism" ]
59,438
https://en.wikipedia.org/wiki/Thermal%20conductivity%20and%20resistivity
The thermal conductivity of a material is a measure of its ability to conduct heat. It is commonly denoted by k, λ, or κ and is measured in W·m−1·K−1. Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. For instance, metals typically have high thermal conductivity and are very efficient at conducting heat, while the opposite is true for insulating materials such as mineral wool or Styrofoam. Correspondingly, materials of high thermal conductivity are widely used in heat sink applications, and materials of low thermal conductivity are used as thermal insulation. The reciprocal of thermal conductivity is called thermal resistivity. The defining equation for thermal conductivity is q = −k∇T, where q is the heat flux, k is the thermal conductivity, and ∇T is the temperature gradient. This is known as Fourier's law for heat conduction. Although commonly expressed as a scalar, the most general form of thermal conductivity is a second-rank tensor. However, the tensorial description only becomes necessary in materials which are anisotropic. Definition Simple definition Consider a solid material placed between two environments of different temperatures. Let T1 be the temperature at x = 0 and T2 be the temperature at x = L, and suppose T2 > T1. An example of this scenario is a building on a cold winter day; the solid material in this case is the building wall, separating the cold outdoor environment from the warm indoor environment. According to the second law of thermodynamics, heat will flow from the hot environment to the cold one as the temperature difference is equalized by diffusion. This is quantified in terms of a heat flux q, which gives the rate, per unit area, at which heat flows in a given direction (in this case the minus x-direction). In many materials, q is observed to be directly proportional to the temperature difference and inversely proportional to the separation distance L: q = −k (T2 − T1) / L. The constant of proportionality k is the thermal conductivity; it is a physical property of the material. In the present scenario, since T2 > T1, heat flows in the minus x-direction and q is negative, which in turn means that k > 0. In general, k is always defined to be positive. The same definition of k can also be extended to gases and liquids, provided other modes of energy transport, such as convection and radiation, are eliminated or accounted for. The preceding derivation assumes that the thermal conductivity does not change significantly as temperature is varied from T1 to T2. Cases in which the temperature variation of k is non-negligible must be addressed using the more general definition of k discussed below. General definition Thermal conduction is defined as the transport of energy due to random molecular motion across a temperature gradient. It is distinguished from energy transport by convection and molecular work in that it does not involve macroscopic flows or work-performing internal stresses. Energy flow due to thermal conduction is classified as heat and is quantified by the vector q(r, t), which gives the heat flux at position r and time t. According to the second law of thermodynamics, heat flows from high to low temperature. Hence, it is reasonable to postulate that q(r, t) is proportional to the gradient of the temperature field T(r, t), i.e. q(r, t) = −k∇T(r, t), where the constant of proportionality, k > 0, is the thermal conductivity. This is called Fourier's law of heat conduction. Despite its name, it is not a law but a definition of thermal conductivity in terms of the independent physical quantities q(r, t) and T(r, t).
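The one-dimensional form just described is easy to apply directly. The Python sketch below estimates the steady heat flux through the wall scenario from its conductivity, thickness, and the temperature difference across it; the numerical values are arbitrary illustrative choices, not measured data for any particular wall.

```python
def heat_flux(k, thickness, T_hot, T_cold):
    """Magnitude of the steady 1-D heat flux, q = k * (T_hot - T_cold) / L, in W/m^2."""
    return k * (T_hot - T_cold) / thickness

# Illustrative numbers: a 0.30 m masonry-like wall with k ~ 0.6 W/(m.K),
# 20 C indoors and -5 C outdoors.
q = heat_flux(k=0.6, thickness=0.30, T_hot=20.0, T_cold=-5.0)
print(f"heat flux ~ {q:.1f} W per square metre")   # ~ 50 W/m^2
```

Multiplying the flux by the wall area gives the total heat loss, which is exactly the thermal-conductance viewpoint taken up in the engineering quantities discussed next.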
As such, its usefulness depends on the ability to determine k for a given material under given conditions. The constant k itself usually depends on T(r, t) and thereby implicitly on space and time. An explicit space and time dependence could also occur if the material is inhomogeneous or changing with time. In some solids, thermal conduction is anisotropic, i.e. the heat flux is not always parallel to the temperature gradient. To account for such behavior, a tensorial form of Fourier's law must be used: q(r, t) = −κ · ∇T(r, t), where κ is a symmetric, second-rank tensor called the thermal conductivity tensor. An implicit assumption in the above description is the presence of local thermodynamic equilibrium, which allows one to define a temperature field T(r, t). This assumption could be violated in systems that are unable to attain local equilibrium, as might happen in the presence of strong nonequilibrium driving or long-ranged interactions. Other quantities In engineering practice, it is common to work in terms of quantities which are derived from thermal conductivity and implicitly take into account design-specific features such as component dimensions. For instance, thermal conductance is defined as the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity k, area A and thickness L, the conductance is kA/L, measured in W⋅K−1. The relationship between thermal conductivity and conductance is analogous to the relationship between electrical conductivity and electrical conductance. Thermal resistance is the inverse of thermal conductance. It is a convenient measure to use in multicomponent design since thermal resistances are additive when occurring in series. There is also a measure known as the heat transfer coefficient: the quantity of heat that passes per unit time through a unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. In ASTM C168-15, this area-independent quantity is referred to as the "thermal conductance". The reciprocal of the heat transfer coefficient is thermal insulance. In summary, for a plate of thermal conductivity k, area A and thickness L, thermal conductance = kA/L, measured in W⋅K−1. thermal resistance = L/(kA), measured in K⋅W−1. heat transfer coefficient = k/L, measured in W⋅K−1⋅m−2. thermal insulance = L/k, measured in K⋅m2⋅W−1. The heat transfer coefficient is also known as thermal admittance in the sense that the material may be seen as admitting heat to flow. An additional term, thermal transmittance, quantifies the thermal conductance of a structure along with heat transfer due to convection and radiation. It is measured in the same units as thermal conductance and is sometimes known as the composite thermal conductance. The term U-value is also used. Finally, thermal diffusivity combines thermal conductivity with density and specific heat: α = k/(ρcp). As such, it quantifies the thermal inertia of a material, i.e. the relative difficulty in heating a material to a given temperature using heat sources applied at the boundary. Units In the International System of Units (SI), thermal conductivity is measured in watts per meter-kelvin (W/(m⋅K)). Some papers report k in watts per centimeter-kelvin [W/(cm⋅K)]. However, physicists use other convenient units as well, e.g., in cgs units, where esu/(cm-sec-K) is used. The Lorentz number, defined as L = κ/σT, is a quantity independent of the carrier density and the scattering mechanism.
Its value for a gas of non-interacting electrons (typical carriers in good metallic conductors) is 2.72×10−13 esu/K2, or equivalently, 2.44×10−8 Watt-Ohm/K2. In imperial units, thermal conductivity is measured in BTU/(h⋅ft⋅°F). The dimension of thermal conductivity is M1L1T−3Θ−1, expressed in terms of the dimensions mass (M), length (L), time (T), and temperature (Θ). Other units which are closely related to the thermal conductivity are in common use in the construction and textile industries. The construction industry makes use of measures such as the R-value (resistance) and the U-value (transmittance or conductance). Although related to the thermal conductivity of a material used in an insulation product or assembly, R- and U-values are measured per unit area, and depend on the specified thickness of the product or assembly. Likewise the textile industry has several units including the tog and the clo which express thermal resistance of a material in a way analogous to the R-values used in the construction industry. Measurement There are several ways to measure thermal conductivity; each is suitable for a limited range of materials. Broadly speaking, there are two categories of measurement techniques: steady-state and transient. Steady-state techniques infer the thermal conductivity from measurements on the state of a material once a steady-state temperature profile has been reached, whereas transient techniques operate on the instantaneous state of a system during the approach to steady state. Lacking an explicit time component, steady-state techniques do not require complicated signal analysis (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed, and the time required to reach steady state precludes rapid measurement. In comparison with solid materials, the thermal properties of fluids are more difficult to study experimentally. This is because in addition to thermal conduction, convective and radiative energy transport are usually present unless measures are taken to limit these processes. The formation of an insulating boundary layer can also result in an apparent reduction in the thermal conductivity. Experimental values The thermal conductivities of common substances span at least four orders of magnitude. Gases generally have low thermal conductivity, and pure metals have high thermal conductivity. For example, under standard conditions the thermal conductivity of copper is over times that of air. Of all materials, allotropes of carbon, such as graphite and diamond, are usually credited with having the highest thermal conductivities at room temperature. The thermal conductivity of natural diamond at room temperature is several times higher than that of a highly conductive metal such as copper (although the precise value varies depending on the diamond type). Thermal conductivities of selected substances are tabulated below; an expanded list can be found in the list of thermal conductivities. These values are illustrative estimates only, as they do not account for measurement uncertainties or variability in material definitions. Influencing factors Temperature The effect of temperature on thermal conductivity is different for metals and nonmetals. In metals, heat conductivity is primarily due to free electrons. Following the Wiedemann–Franz law, thermal conductivity of metals is approximately proportional to the absolute temperature (in kelvins) times electrical conductivity. 
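The Wiedemann–Franz relation just stated can be used to estimate the electronic thermal conductivity of a metal from its electrical conductivity, κ ≈ L·σ·T, with L the Lorenz number quoted above. The Python sketch below applies it to copper near room temperature; the conductivity is a typical handbook-style value rather than a figure from this article, and the estimate ignores any lattice contribution.

```python
LORENZ = 2.44e-8  # W·Ω/K², the free-electron (Sommerfeld) value quoted above

def wiedemann_franz_kappa(sigma, T, L=LORENZ):
    """Electronic thermal conductivity in W/(m·K) from electrical conductivity
    sigma (S/m) and absolute temperature T (K), via kappa = L * sigma * T."""
    return L * sigma * T

# Copper near room temperature: sigma ~ 5.96e7 S/m (typical handbook value).
kappa_cu = wiedemann_franz_kappa(sigma=5.96e7, T=293.0)
print(f"estimated kappa for copper ~ {kappa_cu:.0f} W/(m·K)")
```

The estimate comes out around 430 W/(m·K), close to copper's measured room-temperature conductivity of roughly 400 W/(m·K), which is why the law is a useful sanity check for good metallic conductors.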
In pure metals the electrical conductivity decreases with increasing temperature and thus the product of the two, the thermal conductivity, stays approximately constant. However, as temperatures approach absolute zero, the thermal conductivity decreases sharply. In alloys the change in electrical conductivity is usually smaller and thus thermal conductivity increases with temperature, often proportionally to temperature. Many pure metals have a peak thermal conductivity between 2 K and 10 K. On the other hand, heat conductivity in nonmetals is mainly due to lattice vibrations (phonons). Except for high-quality crystals at low temperatures, the phonon mean free path is not reduced significantly at higher temperatures. Thus, the thermal conductivity of nonmetals is approximately constant at high temperatures. At low temperatures well below the Debye temperature, thermal conductivity decreases, as does the heat capacity, due to carrier scattering from defects. Chemical phase When a material undergoes a phase change (e.g. from solid to liquid), the thermal conductivity may change abruptly. For instance, when ice melts to form liquid water at 0 °C, the thermal conductivity changes from 2.18 W/(m⋅K) to 0.56 W/(m⋅K). Even more dramatically, the thermal conductivity of a fluid diverges in the vicinity of the vapor-liquid critical point. Thermal anisotropy Some substances, such as non-cubic crystals, can exhibit different thermal conductivities along different crystal axes. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, with 35 W/(m⋅K) along the c axis and 32 W/(m⋅K) along the a axis. Wood generally conducts better along the grain than across it. Other examples of materials where the thermal conductivity varies with direction are metals that have undergone heavy cold pressing, laminated materials, cables, the materials used for the Space Shuttle thermal protection system, and fiber-reinforced composite structures. When anisotropy is present, the direction of heat flow may differ from the direction of the thermal gradient. Electrical conductivity In metals, thermal conductivity is approximately correlated with electrical conductivity according to the Wiedemann–Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. Highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator but conducts heat via phonons due to its orderly array of atoms. Magnetic field The influence of magnetic fields on thermal conductivity is known as the thermal Hall effect or Righi–Leduc effect. Gaseous phases In the absence of convection, air and other gases are good insulators. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which obstruct heat conduction pathways. Examples of these include expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica aerogel, as well as warm clothes. Natural, biological insulators such as fur and feathers achieve similar effects by trapping air in pores, pockets, or voids. Low density gases, such as hydrogen and helium typically have high thermal conductivity. Dense gases such as xenon and dichlorodifluoromethane have low thermal conductivity. 
An exception, sulfur hexafluoride, a dense gas, has a relatively high thermal conductivity due to its high heat capacity. Argon and krypton, gases denser than air, are often used in insulated glazing (double paned windows) to improve their insulation characteristics. The thermal conductivity through bulk materials in porous or granular form is governed by the type of gas in the gaseous phase, and its pressure. At low pressures, the thermal conductivity of a gaseous phase is reduced, with this behaviour governed by the Knudsen number, defined as , where is the mean free path of gas molecules and is the typical gap size of the space filled by the gas. In a granular material corresponds to the characteristic size of the gaseous phase in the pores or intergranular spaces. Isotopic purity The thermal conductivity of a crystal can depend strongly on isotopic purity, assuming other lattice defects are negligible. A notable example is diamond: at a temperature of around 100 K the thermal conductivity increases from 10,000 W·m−1·K−1 for natural type IIa diamond (98.9% 12C), to 41,000 for 99.9% enriched synthetic diamond. A value of 200,000 is predicted for 99.999% 12C at 80 K, assuming an otherwise pure crystal. The thermal conductivity of 99% isotopically enriched cubic boron nitride is ~ 1400 W·m−1·K−1, which is 90% higher than that of natural boron nitride. Molecular origins The molecular mechanisms of thermal conduction vary among different materials, and in general depend on details of the microscopic structure and molecular interactions. As such, thermal conductivity is difficult to predict from first-principles. Any expressions for thermal conductivity which are exact and general, e.g. the Green-Kubo relations, are difficult to apply in practice, typically consisting of averages over multiparticle correlation functions. A notable exception is a monatomic dilute gas, for which a well-developed theory exists expressing thermal conductivity accurately and explicitly in terms of molecular parameters. In a gas, thermal conduction is mediated by discrete molecular collisions. In a simplified picture of a solid, thermal conduction occurs by two mechanisms: 1) the migration of free electrons and 2) lattice vibrations (phonons). The first mechanism dominates in pure metals and the second in non-metallic solids. In liquids, by contrast, the precise microscopic mechanisms of thermal conduction are poorly understood. Gases In a simplified model of a dilute monatomic gas, molecules are modeled as rigid spheres which are in constant motion, colliding elastically with each other and with the walls of their container. Consider such a gas at temperature and with density , specific heat and molecular mass . Under these assumptions, an elementary calculation yields for the thermal conductivity where is a numerical constant of order , is the Boltzmann constant, and is the mean free path, which measures the average distance a molecule travels between collisions. Since is inversely proportional to density, this equation predicts that thermal conductivity is independent of density for fixed temperature. The explanation is that increasing density increases the number of molecules which carry energy but decreases the average distance a molecule can travel before transferring its energy to a different molecule: these two effects cancel out. For most gases, this prediction agrees well with experiments at pressures up to about 10 atmospheres. 
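The elementary hard-sphere picture sketched above can be turned into an order-of-magnitude estimate: the mean free path follows from the molecular diameter and number density, the mean speed from temperature and molecular mass, and the conductivity from their product with the per-molecule heat capacity. The Python sketch below uses the common textbook form k ≈ (1/3)·n·v̄·λ·c_v for argon; the 1/3 prefactor, the hard-sphere diameter, and the choice of gas are assumptions of this sketch, not values given in the text, so the result is only indicative.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def hard_sphere_conductivity(T, p, m, d):
    """Rough kinetic-theory estimate k ~ (1/3) n v_mean lambda c_v, in W/(m·K).

    T: temperature (K), p: pressure (Pa), m: molecular mass (kg),
    d: assumed hard-sphere diameter (m). Monatomic c_v = (3/2) kB per molecule.
    """
    n = p / (kB * T)                                # number density
    v_mean = math.sqrt(8 * kB * T / (math.pi * m))  # mean molecular speed
    mfp = 1 / (math.sqrt(2) * math.pi * d**2 * n)   # mean free path
    cv = 1.5 * kB                                   # heat capacity per molecule
    return (1 / 3) * n * v_mean * mfp * cv

# Argon at 300 K and 1 atm, with an assumed diameter of about 3.4e-10 m.
k_est = hard_sphere_conductivity(T=300, p=101_325, m=6.63e-26, d=3.4e-10)
print(f"estimated k ~ {k_est * 1000:.1f} mW/(m·K)")
# n cancels against 1/lambda, so the estimate is independent of pressure,
# as the elementary theory predicts.
```

The estimate comes out near 5 mW/(m·K), the right order of magnitude but well below the measured value of roughly 18 mW/(m·K) for argon at room temperature, which is the sort of discrepancy the more careful treatments described next are designed to remove.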
At higher densities, the simplifying assumption that energy is only transported by the translational motion of particles no longer holds, and the theory must be modified to account for the transfer of energy across a finite distance at the moment of collision between particles, as well as the locally non-uniform density in a high density gas. This modification has been carried out, yielding Revised Enskog Theory, which predicts a density dependence of the thermal conductivity in dense gases. Typically, experiments show a more rapid increase with temperature than (here, is independent of ). This failure of the elementary theory can be traced to the oversimplified "hard sphere" model, which both ignores the "softness" of real molecules, and the attractive forces present between real molecules, such as dispersion forces. To incorporate more complex interparticle interactions, a systematic approach is necessary. One such approach is provided by Chapman–Enskog theory, which derives explicit expressions for thermal conductivity starting from the Boltzmann equation. The Boltzmann equation, in turn, provides a statistical description of a dilute gas for generic interparticle interactions. For a monatomic gas, expressions for derived in this way take the form where is an effective particle diameter and is a function of temperature whose explicit form depends on the interparticle interaction law. For rigid elastic spheres, is independent of and very close to . More complex interaction laws introduce a weak temperature dependence. The precise nature of the dependence is not always easy to discern, however, as is defined as a multi-dimensional integral which may not be expressible in terms of elementary functions, but must be evaluated numerically. However, for particles interacting through a Mie potential (a generalisation of the Lennard-Jones potential) highly accurate correlations for in terms of reduced units have been developed. An alternate, equivalent way to present the result is in terms of the gas viscosity , which can also be calculated in the Chapman–Enskog approach: where is a numerical factor which in general depends on the molecular model. For smooth spherically symmetric molecules, however, is very close to , not deviating by more than for a variety of interparticle force laws. Since , , and are each well-defined physical quantities which can be measured independent of each other, this expression provides a convenient test of the theory. For monatomic gases, such as the noble gases, the agreement with experiment is fairly good. For gases whose molecules are not spherically symmetric, the expression still holds. In contrast with spherically symmetric molecules, however, varies significantly depending on the particular form of the interparticle interactions: this is a result of the energy exchanges between the internal and translational degrees of freedom of the molecules. An explicit treatment of this effect is difficult in the Chapman–Enskog approach. Alternately, the approximate expression was suggested by Eucken, where is the heat capacity ratio of the gas. The entirety of this section assumes the mean free path is small compared with macroscopic (system) dimensions. In extremely dilute gases this assumption fails, and thermal conduction is described instead by an apparent thermal conductivity which decreases with density. Ultimately, as the density goes to the system approaches a vacuum, and thermal conduction ceases entirely. 
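The relation between conductivity and viscosity discussed above, k = f·μ·c_v, gives a convenient consistency check. The Python sketch below applies it to argon; the factor f = 2.5 is the standard monatomic value from kinetic theory, and the viscosity and specific heat are approximate handbook-style inputs rather than figures from this article.

```python
def conductivity_from_viscosity(mu, cv_mass, f=2.5):
    """Estimate k = f * mu * cv for a gas, in W/(m·K).

    mu: dynamic viscosity (Pa·s); cv_mass: specific heat at constant volume
    (J/(kg·K)); f: dimensionless factor, ~2.5 for monatomic gases.
    """
    return f * mu * cv_mass

# Argon near 300 K: mu ~ 2.26e-5 Pa·s, cv ~ 312 J/(kg·K) (approximate values).
k_argon = conductivity_from_viscosity(mu=2.26e-5, cv_mass=312.0)
print(f"k ~ {k_argon * 1000:.1f} mW/(m·K)")
```

The result, about 17.6 mW/(m·K), is close to the measured room-temperature value for argon, illustrating the fairly good agreement for monatomic gases noted above.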
Liquids The exact mechanisms of thermal conduction are poorly understood in liquids: there is no molecular picture which is both simple and accurate. An example of a simple but very rough theory is that of Bridgman, in which a liquid is ascribed a local molecular structure similar to that of a solid, i.e. with molecules located approximately on a lattice. Elementary calculations then lead to the expression where is the Avogadro constant, is the volume of a mole of liquid, and is the speed of sound in the liquid. This is commonly called Bridgman's equation. Metals For metals at low temperatures the heat is carried mainly by the free electrons. In this case the mean velocity is the Fermi velocity which is temperature independent. The mean free path is determined by the impurities and the crystal imperfections which are temperature independent as well. So the only temperature-dependent quantity is the heat capacity c, which, in this case, is proportional to T. So with k0 a constant. For pure metals, k0 is large, so the thermal conductivity is high. At higher temperatures the mean free path is limited by the phonons, so the thermal conductivity tends to decrease with temperature. In alloys the density of the impurities is very high, so l and, consequently k, are small. Therefore, alloys, such as stainless steel, can be used for thermal insulation. Lattice waves, phonons, in dielectric solids Heat transport in both amorphous and crystalline dielectric solids is by way of elastic vibrations of the lattice (i.e., phonons). This transport mechanism is theorized to be limited by the elastic scattering of acoustic phonons at lattice defects. This has been confirmed by the experiments of Chang and Jones on commercial glasses and glass ceramics, where the mean free paths were found to be limited by "internal boundary scattering" to length scales of 10−2 cm to 10−3 cm. The phonon mean free path has been associated directly with the effective relaxation length for processes without directional correlation. If Vg is the group velocity of a phonon wave packet, then the relaxation length is defined as: where t is the characteristic relaxation time. Since longitudinal waves have a much greater phase velocity than transverse waves, Vlong is much greater than Vtrans, and the relaxation length or mean free path of longitudinal phonons will be much greater. Thus, thermal conductivity will be largely determined by the speed of longitudinal phonons. Regarding the dependence of wave velocity on wavelength or frequency (dispersion), low-frequency phonons of long wavelength will be limited in relaxation length by elastic Rayleigh scattering. This type of light scattering from small particles is proportional to the fourth power of the frequency. For higher frequencies, the power of the frequency will decrease until at highest frequencies scattering is almost frequency independent. Similar arguments were subsequently generalized to many glass forming substances using Brillouin scattering. Phonons in the acoustical branch dominate the phonon heat conduction as they have greater energy dispersion and therefore a greater distribution of phonon velocities. Additional optical modes could also be caused by the presence of internal structure (i.e., charge or mass) at a lattice point; it is implied that the group velocity of these modes is low and therefore their contribution to the lattice thermal conductivity λL (L) is small. Each phonon mode can be split into one longitudinal and two transverse polarization branches. 
By extrapolating the phenomenology of lattice points to the unit cells it is seen that the total number of degrees of freedom is 3pq when p is the number of primitive cells with q atoms/unit cell. From these only 3p are associated with the acoustic modes, the remaining 3p(q − 1) are accommodated through the optical branches. This implies that structures with larger p and q contain a greater number of optical modes and a reduced λL. From these ideas, it can be concluded that increasing crystal complexity, which is described by a complexity factor CF (defined as the number of atoms/primitive unit cell), decreases λL. This was done by assuming that the relaxation time τ decreases with increasing number of atoms in the unit cell and then scaling the parameters of the expression for thermal conductivity in high temperatures accordingly. Describing anharmonic effects is complicated because an exact treatment as in the harmonic case is not possible, and phonons are no longer exact eigensolutions to the equations of motion. Even if the state of motion of the crystal could be described with a plane wave at a particular time, its accuracy would deteriorate progressively with time. Time development would have to be described by introducing a spectrum of other phonons, which is known as the phonon decay. The two most important anharmonic effects are the thermal expansion and the phonon thermal conductivity. Only when the phonon number ‹n› deviates from the equilibrium value ‹n›0, can a thermal current arise as stated in the following expression where v is the energy transport velocity of phonons. Only two mechanisms exist that can cause time variation of ‹n› in a particular region. The number of phonons that diffuse into the region from neighboring regions differs from those that diffuse out, or phonons decay inside the same region into other phonons. A special form of the Boltzmann equation states this. When steady state conditions are assumed the total time derivate of phonon number is zero, because the temperature is constant in time and therefore the phonon number stays also constant. Time variation due to phonon decay is described with a relaxation time (τ) approximation which states that the more the phonon number deviates from its equilibrium value, the more its time variation increases. At steady state conditions and local thermal equilibrium are assumed we get the following equation Using the relaxation time approximation for the Boltzmann equation and assuming steady-state conditions, the phonon thermal conductivity λL can be determined. The temperature dependence for λL originates from the variety of processes, whose significance for λL depends on the temperature range of interest. Mean free path is one factor that determines the temperature dependence for λL, as stated in the following equation where Λ is the mean free path for phonon and denotes the heat capacity. This equation is a result of combining the four previous equations with each other and knowing that for cubic or isotropic systems and . At low temperatures (< 10 K) the anharmonic interaction does not influence the mean free path and therefore, the thermal resistivity is determined only from processes for which q-conservation does not hold. These processes include the scattering of phonons by crystal defects, or the scattering from the surface of the crystal in case of high quality single crystal. Therefore, thermal conductance depends on the external dimensions of the crystal and the quality of the surface. 
Thus, temperature dependence of λL is determined by the specific heat and is therefore proportional to T3. Phonon quasimomentum is defined as ℏq and differs from normal momentum because it is only defined within an arbitrary reciprocal lattice vector. At higher temperatures (10 K < T < Θ), the conservation of energy and quasimomentum , where q1 is wave vector of the incident phonon and q2, q3 are wave vectors of the resultant phonons, may also involve a reciprocal lattice vector G complicating the energy transport process. These processes can also reverse the direction of energy transport. Therefore, these processes are also known as Umklapp (U) processes and can only occur when phonons with sufficiently large q-vectors are excited, because unless the sum of q2 and q3 points outside of the Brillouin zone the momentum is conserved and the process is normal scattering (N-process). The probability of a phonon to have energy E is given by the Boltzmann distribution . To U-process to occur the decaying phonon to have a wave vector q1 that is roughly half of the diameter of the Brillouin zone, because otherwise quasimomentum would not be conserved. Therefore, these phonons have to possess energy of , which is a significant fraction of Debye energy that is needed to generate new phonons. The probability for this is proportional to , with . Temperature dependence of the mean free path has an exponential form . The presence of the reciprocal lattice wave vector implies a net phonon backscattering and a resistance to phonon and thermal transport resulting finite λL, as it means that momentum is not conserved. Only momentum non-conserving processes can cause thermal resistance. At high temperatures (T > Θ), the mean free path and therefore λL has a temperature dependence T−1, to which one arrives from formula by making the following approximation and writing . This dependency is known as Eucken's law and originates from the temperature dependency of the probability for the U-process to occur. Thermal conductivity is usually described by the Boltzmann equation with the relaxation time approximation in which phonon scattering is a limiting factor. Another approach is to use analytic models or molecular dynamics or Monte Carlo based methods to describe thermal conductivity in solids. Short wavelength phonons are strongly scattered by impurity atoms if an alloyed phase is present, but mid and long wavelength phonons are less affected. Mid and long wavelength phonons carry significant fraction of heat, so to further reduce lattice thermal conductivity one has to introduce structures to scatter these phonons. This is achieved by introducing interface scattering mechanism, which requires structures whose characteristic length is longer than that of impurity atom. Some possible ways to realize these interfaces are nanocomposites and embedded nanoparticles or structures. Prediction Because thermal conductivity depends continuously on quantities like temperature and material composition, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available under the physical conditions of interest. This capability is important in thermophysical simulations, where quantities like temperature and pressure vary continuously with space and time, and may encompass extreme conditions inaccessible to direct measurement. 
In fluids For the simplest fluids, such as monatomic gases and their mixtures at low to moderate densities, ab initio quantum mechanical computations can accurately predict thermal conductivity in terms of fundamental atomic properties—that is, without reference to existing measurements of thermal conductivity or other transport properties. This method uses Chapman-Enskog theory or Revised Enskog Theory to evaluate the thermal conductivity, taking fundamental intermolecular potentials as input, which are computed ab initio from a quantum mechanical description. For most fluids, such high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing thermal conductivity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that material. Reference correlations have been published for many pure materials; examples are carbon dioxide, ammonia, and benzene. Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases. Thermophysical modeling software often relies on reference correlations for predicting thermal conductivity at user-specified temperature and pressure. These correlations may be proprietary. Examples are REFPROP (proprietary) and CoolProp (open-source). Thermal conductivity can also be computed using the Green-Kubo relations, which express transport coefficients in terms of the statistics of molecular trajectories. The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics. An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules. History Jan Ingenhousz and the thermal conductivity of different metals In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities: See also Copper in heat exchangers Heat pump Heat transfer Heat transfer mechanisms Insulated pipe Interfacial thermal resistance Laser flash analysis List of thermal conductivities Phase-change material R-value (insulation) Specific heat capacity Thermal bridge Thermal conductance quantum Thermal contact conductance Thermal diffusivity Thermal effusivity Thermal entrance length Thermal interface material Thermal diode Thermal resistance Thermistor Thermocouple Thermodynamics Thermal conductivity measurement Refractory metals References Notes Citations Sources Further reading Undergraduate-level texts (engineering) . A standard, modern reference. Undergraduate-level texts (physics) Halliday, David; Resnick, Robert; & Walker, Jearl (1997). Fundamentals of Physics (5th ed.). John Wiley and Sons, New York . An elementary treatment. . A brief, intermediate-level treatment. . An advanced treatment. Graduate-level texts . A very advanced but classic text on the theory of transport processes in gases. Reid, C. R., Prausnitz, J. M., Poling B. E., Properties of gases and liquids, IV edition, Mc Graw-Hill, 1987 Srivastava G. P (1990), The Physics of Phonons. 
Adam Hilger, IOP Publishing Ltd, Bristol External links Thermopedia THERMAL CONDUCTIVITY Contribution of Interionic Forces to the Thermal Conductivity of Dilute Electrolyte Solutions The Journal of Chemical Physics 41, 3924 (1964) The importance of Soil Thermal Conductivity for power companies Thermal Conductivity of Gas Mixtures in Chemical Equilibrium. II The Journal of Chemical Physics 32, 1005 (1960) Heat conduction Heat transfer Physical quantities Thermodynamic properties
Thermal conductivity and resistivity
[ "Physics", "Chemistry", "Mathematics" ]
7,090
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamic properties", "Physical quantities", "Quantity", "Thermodynamics", "Heat conduction", "Physical properties" ]
59,441
https://en.wikipedia.org/wiki/DLL%20hell
DLL hell is an umbrella term for the complications that arise when one works with dynamic-link libraries (DLLs) used with older Microsoft Windows operating systems, particularly legacy 16-bit editions, which all run in a single memory space. DLL hell can appear in many different ways, wherein affected programs may fail to run correctly, if at all. It is the Windows ecosystem-specific form of the general concept dependency hell. Problems DLLs are Microsoft's implementation of shared libraries. Shared libraries allow common code to be bundled into a wrapper, the DLL, which is used by any application software on the system without loading multiple copies into memory. A simple example might be the GUI text editor, which is widely used by many programs. By placing this code in a DLL, all the applications on the system can use it without using more memory. This contrasts with static libraries, which are functionally similar but copy the code directly into the application. In this case, every application grows by the size of all the libraries it uses, and this can be quite large for modern programs. The problem arises when the version of the DLL on the computer is different than the version that was used when the program was being created. DLLs have no built-in mechanism for backward compatibility, and even minor changes to the DLL can render its internal structure so different from previous versions that attempting to use them will generally cause the application to crash. Static libraries avoid this problem because the version that was used to build the application is included inside it, so even if a newer version exists elsewhere on the system, this does not affect the application. A key reason for the version incompatibility is the structure of the DLL file. The file contains a directory of the individual methods (procedures, routines, etc.) contained within the DLL and the types of data they take and return. Even minor changes to the DLL code can cause this directory to be re-arranged, in which case an application that calls a particular method believing it to be the 4th item in the directory might end up calling an entirely different and incompatible routine, which would normally cause the application to crash. There are several problems commonly encountered with DLLs, especially after numerous applications have been installed and uninstalled on a system. The difficulties include conflicts between DLL versions, difficulty in obtaining required DLLs, and having many unnecessary DLL copies. Solutions to these problems were known even while Microsoft was writing the DLL system. These have been incorporated into the .NET replacement, "Assemblies". Incompatible versions A particular version of a library can be compatible with some programs that use it and incompatible with others. Windows has been particularly vulnerable to this because of its emphasis on dynamic linking of C++ libraries and Object Linking and Embedding (OLE) objects. C++ classes export many methods, and a single change to the class, such as a new virtual method, can make it incompatible with programs that were built against an earlier version. Object Linking and Embedding has very strict rules to prevent this: interfaces are required to be stable, and memory managers are not shared. This is insufficient, however, because the semantics of a class can change. A bug fix for one application may result in the removal of a feature from another. 
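Since the export-table discussion above centres on how a program binds to functions inside a DLL, a minimal illustration of run-time dynamic linking may help. The sketch below uses Python's standard ctypes module on Windows to load a well-known system library and resolve an exported function by name; it is purely illustrative and is not drawn from the article.

```python
import ctypes

# Load a system DLL at run time and resolve an exported function by *name*.
# The DLL is located via the normal Windows DLL search order.
user32 = ctypes.WinDLL("user32")
MessageBoxW = user32.MessageBoxW          # lookup by exported symbol name
MessageBoxW.argtypes = [ctypes.c_void_p, ctypes.c_wchar_p, ctypes.c_wchar_p, ctypes.c_uint]
MessageBoxW.restype = ctypes.c_int

if __name__ == "__main__":
    MessageBoxW(None, "Loaded user32.dll and resolved MessageBoxW by name.",
                "DLL example", 0)
```

Binding by exported name rather than by position in the export directory is one reason a caller keeps working when unrelated exports are added, although it offers no protection against changes in a function's semantics, the problem discussed next.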
Before Windows 2000, Windows was vulnerable to this because the COM class table was shared across all users and processes. Only one COM object in one DLL/EXE could be declared as having a specific global COM Class ID on a system. If any program needed to create an instance of that class, it got whatever was the current centrally registered implementation. As a result, an installation of a program that installed a new version of a common object might inadvertently break other programs that were previously installed. DLL stomping A common and troublesome problem occurs when a newly installed program overwrites a working system DLL with an earlier, incompatible version. Early examples of this were the ctl3d.dll and ctl3dv2.dll libraries for Windows 3.1: Microsoft-created libraries that third-party publishers would distribute with their software, but each distributing the version they developed with rather than the most recent version. DLL stomping occurs because: Microsoft in the past distributed runtime DLLs as shared system components (originally C:\WINDOWS and C:\WINDOWS\SYSTEM), as a way of efficiently sharing code in a shared-memory OS with limited RAM and disk space. Consequently, third-party developers also distributed these in such a manner. Application installers are typically executed in a privileged security context that has access to install DLLs into the system directories and to edit the system registry to register new DLLs as COM objects. A poorly written or misconfigured installer can therefore downgrade a system library on legacy versions of Windows, on which Windows File Protection or Windows Resource Protection does not roll back the change. On Windows Vista and later, only the "trusted installer" account can make changes to core operating-system libraries. Windows applications were permitted to include OS updates in their own installation programs. That is, many Microsoft DLLs are redistributable, meaning that the applications can include them if they need the services of the particular libraries. Before Windows Installer, Windows installers historically were commercial products; many people attempted to write their own installers, overlooking or mishandling versioning problems in the process. Some development environments did not automatically add a version resource in their compiled libraries, so many developers overlooked this aspect. Checking file dates, overwriting existing files or skipping the copy operation if the DLL was already installed were the only options available instead of correct versioning. Sometimes, the OS itself removed or replaced DLLs with older or obsolete versions. For example, Windows 2000 would install black-and-white printer DLLs on top of color-aware DLLs, if a black-and-white printer was installed after the color printer. Incorrect COM registration In COM and other parts of Windows, prior to the introduction of side-by-side registry-free assemblies, the Registry was used for determining which underlying DLL to use. If a different version of a module was registered, this DLL would be loaded instead of the expected one. This scenario could be caused by conflicting installations that register different versions of the same libraries, in which case the last installation would prevail. Shared in-memory modules 16-bit versions of Windows (and Windows on Windows) load only one instance of any given DLL; all applications reference the same in-memory copy, until no applications are using it and it is unloaded from memory. 
(For 32-bit and 64-bit versions of Windows, inter-process sharing occurs only where different executables load a module from exactly the same directory; the code but not the stack is shared between processes through a process called "memory mapping".) Thus, even when the desired DLL is located in a directory where it can be expected to be found, such as in the system directory or the application directory, neither of these instances will be used if another application has started with an incompatible version from a third directory. This issue can manifest itself as a 16-bit application error that occurs only when applications are started in a specific order. Lack of serviceability In direct conflict with the DLL stomping problem: If updates to a DLL do not affect all applications that use it, then it becomes much harder to "service" the DLL – that is, to eliminate problems that exist in the current versions of the DLL. (Security fixes are a particularly compelling and painful case.) Instead of fixing just the latest version of the DLL, the implementor must ideally make their fixes and test them for compatibility on every released version of the DLL. Causes DLL incompatibility has been caused by: Memory constraints, combined with lack of separation of process memory space in 16-bit versions of Windows; Lack of enforced standard versioning, naming, and file-system location schemata for DLLs; Lack of an enforced standard method for software installation and removal (package management); Lack of centralized authoritative support for DLL application binary interface management and safeguards, allowing incompatible DLLs with the same file name and internal version numbers to be released; Oversimplified management tools, preventing the identification of changed or problematic DLLs by users and administrators; Developers breaking backward compatibility of functions in shared modules; Microsoft releasing out-of-band updates to operating-system runtime components; Inability of earlier versions of Windows to run side-by-side conflicting versions of the same library; Reliance on the current directory or %PATH% environment variable, both of which vary over time and from system to system, to find dependent DLLs (instead of loading them from an explicitly configured directory); Developers re-using the ClassIDs from sample applications for the COM interfaces of their applications, rather than generating their own new GUIDs. DLL hell was a very common phenomenon on pre-Windows NT versions of Microsoft operating systems, the primary cause being that the 16-bit operating systems did not restrict processes to their own memory space, thereby not allowing them to load their own version of a shared module that they were compatible with. Application installers were expected to be good citizens and verify DLL version information before overwriting the existing system DLLs. Standard tools to simplify application deployment (which always involves shipping the dependent operating-system DLLs) were provided by Microsoft and other 3rd-party tools vendors. Microsoft even required application vendors to use a standard installer and have their installation program certified to work correctly, before being granted use of the Microsoft logo. The good-citizen installer approach did not mitigate the problem, as the rise in popularity of the Internet provided more opportunities to obtain non-conforming applications. Use by malware Windows searches several locations for ambiguously specified DLLs, i.e. ones that are not referenced by a fully qualified path (a simplified sketch of this search order follows). 
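As a rough illustration of the search just described, the sketch below resolves an unqualified DLL name against an ordered list of directories. The ordering shown (application directory, system directories, current working directory, %PATH%) is a simplification; the real order depends on the Windows version and the SafeDllSearchMode setting.

```python
import os

def resolve_dll(name, app_dir, system_dirs, cwd, path_dirs):
    """Return the first path containing `name`, in a simplified DLL search order.

    This loosely mirrors the classic order: application directory, system
    directories, current working directory, then %PATH%. The real order varies
    with Windows version and the SafeDllSearchMode setting.
    """
    for directory in [app_dir, *system_dirs, cwd, *path_dirs]:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None  # the loader would fail with a "DLL not found" error

if __name__ == "__main__":
    hit = resolve_dll(
        "example.dll",                              # hypothetical library name
        app_dir=r"C:\Program Files\SomeApp",        # hypothetical install directory
        system_dirs=[r"C:\Windows\System32"],
        cwd=os.getcwd(),
        path_dirs=os.environ.get("PATH", "").split(os.pathsep),
    )
    print("Would load:", hit)
```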
Malware can exploit this behavior in several ways, collectively known as DLL search order hijacking. One method is DLL preloading or a binary planting attack. It places a DLL file with the same name as the legitimate library in a location that is searched earlier, such as the current working directory. When the vulnerable program tries to load the DLL, the malicious version is executed, possibly at high privilege levels if the program runs at that level. Another method is relative path DLL hijacking, which moves the vulnerable program to a location together with the malicious DLL. The DLL is loaded because the application's directory is searched early. According to CrowdStrike, this method is the most common. DLL sideloading delivers both the legitimate program and the malicious library. It may avoid detection because the execution appears to be that of a reputable program. Other methods include phantom DLL hijacking, where a malicious DLL file is created to satisfy a reference to a non-existent library, and changing registry values to abuse DLL redirection, which changes the DLL search order. DLL hijacking has been used by state-sponsored groups including Lazarus Group and Tropic Trooper. Solutions Various forms of DLL hell have been solved or mitigated over the years. Static linking A simple solution to DLL hell in an application is to statically link all the libraries, i.e. to include the library version required in the program, instead of picking up a system library with a specified name. This is common in C/C++ applications, where, instead of having to worry about which version of MFC42.DLL is installed, the application is compiled to be statically linked against the same libraries. This eliminates the DLLs entirely and is possible in standalone applications using only libraries that offer a static option, as Microsoft Foundation Class Library does. However, the main purpose of DLLs – runtime library sharing between programs to reduce memory overhead – is sacrificed; duplicating library code in several programs creates software bloat and complicates the deployment of security fixes or newer versions of dependent software. Windows File Protection The DLL overwriting problem (referred to as DLL Stomping by Microsoft) was somewhat reduced with Windows File Protection (WFP), which was introduced in Windows 2000. This prevents unauthorized applications from overwriting system DLLs, unless they use the specific Windows APIs that permit this. There may still be a risk that updates from Microsoft are incompatible with existing applications, but this risk is typically reduced in current versions of Windows through the use of side-by-side assemblies. Third-party applications cannot stomp on OS files unless they bundle legitimate Windows updates with their installer, or if they disable the Windows File Protection service during installation, and on Windows Vista or later also take ownership of system files and grant themselves access. The SFC utility could revert these changes at any time. Running conflicting DLLs simultaneously The solutions here consist of having different copies of the same DLLs for each application, both on disk and in memory. An easy manual solution to conflicts was placing the different versions of the problem DLL into the applications' folders, rather than a common system-wide folder. This works in general as long as the application is 32-bit or 64-bit and the DLL does not use shared memory. 
In the case of 16-bit applications, the two applications cannot be executed simultaneously on a 16-bit platform, or in the same 16-bit virtual machine under a 32-bit operating system. OLE prevented this before Windows 98 SE/2000, because earlier versions of Windows had a single registry of COM objects for all applications. Windows 98 SE/2000 introduced a solution called side-by-side assembly, which loads separate copies of DLLs for each application that requires them (and thus allows applications that require conflicting DLLs to run simultaneously). This approach eliminates conflicts by allowing applications to load unique versions of a module into their address space, while preserving the primary benefit of sharing DLLs between applications (i.e. reducing memory use) by using memory mapping techniques to share common code between different processes that do still use the same module. Yet DLLs using shared data between multiple processes cannot take this approach. One negative side effect is that orphaned instances of DLLs may not be updated during automated processes. Portable applications Depending on the application architecture and runtime environment, portable applications may be an effective way to reduce some DLL problems, since every program bundles its own private copies of any DLLs it requires. The mechanism relies on applications not fully qualifying the paths to dependent DLLs when loading them, and the operating system searching the executable directory before any shared location. However this technique can also be exploited by malware, and the increased flexibility may also come at the expense of security if the private DLLs are not kept up to date with security patches in the same way that the shared ones are. Application virtualization can also allow applications to run in a "bubble", which avoids installing DLL files directly into the operating system. Other countermeasures There are other countermeasures to avoid DLL hell, some of which may have to be used simultaneously; some other features that help to mitigate the problem are: Installation tools are now bundled into Microsoft Visual Studio, one of the main environments for Windows development. These tools perform version checking before DLL installation, and can include predefined installation packages in a .MSI installation. This allows third party applications to integrate OS component updates without having to write their own installers for these components. System Restore can recover a system from a bad installation, including registry damage. Although this does not prevent the problem, it facilitates recovery therefrom. WinSxS (Windows Side-by-Side) directory, which allows multiple versions of the same libraries to co-exist. Run 16-bit applications in a separate memory space under a 32-bit version of Windows to allow two applications to use conflicting versions of the same DLL at the same time. Use a version of Windows that includes Windows File Protection. Windows Me and Windows 2000, both released in 2000, support this form of system file protection, as do Windows XP and Windows Server 2003. Its replacement, Windows Resource Protection, was introduced in Windows Vista and Windows Server 2008, and uses a different method of protecting system files from being changed. Registration-free COM: Windows XP introduced a new mode of COM object registration called "Registration-free COM". 
This feature makes it possible for applications that need to install COM objects to store all the required COM registry information in the application's own directory, instead of in the global system registry. Thus, it provides a mechanism for multiple versions of the same DLL to be registered at the same time by multiple applications (Microsoft calls this "Side-by-Side Assembly"). DLL hell can be substantially avoided using Registration-free COM, the only limitation being it requires at least Windows XP or later Windows versions and that it must not be used for EXE COM servers or system-wide components such as MDAC, MSXML, DirectX or Internet Explorer. Shipping the operating system with a capable package management system that is able to track the DLL dependencies, encouraging the use of the package manager and discouraging manual installation of DLLs. Windows Installer, included with Windows Me, Windows 2000 and all later versions provides this functionality. Having a central database or authority for DLL conflict resolution and software distribution. Changes to a library can be submitted to this authority; thus, it can make sure compatibility is preserved in the developed branches. If some older software is incompatible with the current library, the authority can provide a compatibility interface for it, or bundle the old version as a distinct package. If software developers need to customize a library, and if the main library release is unlikely to incorporate the changes that they need, they can ship the customized DLL for the program's private use (commonly by placing it in the program's private directory) or statically link the program against the customized library. While DLLs are best for modularizing applications and the system's components and as third-party libraries, their usage is not imperative in all cases on modern systems where memory is no longer a constraint. For example, if an application needs a library that will not be used anywhere else, it can be linked statically, with no space penalty and with a speed gain. Windows Vista and later use a special TrustedInstaller service to install operating system files. Other user accounts, including the SYSTEM, have no access to overwrite core system binaries. Windows 7 expands this functionality to some critical parts of the Registry. See also Dependency hell Extension conflict Portable application Portable application creators JAR hell References External links Getting Out of DLL Hell on Microsoft TechNet Simplifying Deployment and Solving DLL Hell with the .NET Framework on MSDN Avoiding DLL Hell: Introducing Application Metadata in the Microsoft .NET Framework by Matt Pietrek Dr. Dobb's on DLL hell (details on LoadLibraryEx) Joel on Software discussion Article on DLL hell Computer libraries Windows administration Computer jargon
DLL hell
[ "Technology" ]
3,916
[ "Computing terminology", "IT infrastructure", "Computer jargon", "Computer libraries", "Natural language and computing" ]
59,442
https://en.wikipedia.org/wiki/Baryte
Baryte, barite or barytes is a mineral consisting of barium sulfate (BaSO4). Baryte is generally white or colorless, and is the main source of the element barium. The baryte group consists of baryte, celestine (strontium sulfate), anglesite (lead sulfate), and anhydrite (calcium sulfate). Baryte and celestine form a solid solution (Ba,Sr)SO4. Names and history The radiating form, sometimes referred to as Bologna Stone, attained some notoriety among alchemists for specimens found in the 17th century near Bologna by Vincenzo Casciarolo. These became phosphorescent upon being calcined. Carl Scheele determined that baryte contained a new element in 1774, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England. The American Petroleum Institute specification API 13/ISO 13500, which governs baryte for drilling purposes, does not refer to any specific mineral, but rather a material that meets that specification. In practice, however, this is usually the mineral baryte. The term "primary barytes" refers to the first marketable product, which includes crude baryte (run of mine) and the products of simple beneficiation methods, such as washing, jigging, heavy media separation, tabling, and flotation. Most crude baryte requires some upgrading to minimum purity or density. Baryte that is used as an aggregate in a "heavy" cement is crushed and screened to a uniform size. Most baryte is ground to a small, uniform size before it is used as a filler or extender, an addition to industrial products, in the production of barium chemicals or as a weighting agent in petroleum well drilling mud. Name The name baryte is derived from the Ancient Greek βαρύς (barýs), 'heavy'. The American spelling is barite. The International Mineralogical Association initially adopted "barite" as the official spelling, but recommended adopting the older "baryte" spelling later. This move was controversial and was notably ignored by American mineralogists. Other names have been used for baryte, including barytine, barytite, barytes, heavy spar, tiff, and blanc fixe. Mineral associations and locations Baryte occurs in many depositional environments, and is deposited through many processes including biogenic, hydrothermal, and evaporation, among others. Baryte commonly occurs in lead-zinc veins in limestones, in hot spring deposits, and with hematite ore. It is often associated with the minerals anglesite and celestine. It has also been identified in meteorites. Baryte has been found at locations in Australia, Brazil, Nigeria, Canada, Chile, China, India, Pakistan, Germany, Greece, Guatemala, Iran, Ireland (where it was mined on Benbulben), Liberia, Mexico, Morocco, Peru, Romania (Baia Sprie), Turkey, South Africa (Barberton Mountain Land), Thailand, United Kingdom (Cornwall, Cumbria, Dartmoor/Devon, Derbyshire, Durham, Shropshire, Perthshire, Argyllshire, and Surrey) and in the US from Cheshire, Connecticut, De Kalb, New York, and Fort Wallace, New Mexico. It is mined in Arkansas, Connecticut, Virginia, North Carolina, Georgia, Tennessee, Kentucky, Nevada, and Missouri. The global production of baryte in 2019 was estimated to be around 9.5 million metric tons, down from 9.8 million metric tons in 2012. 
The major barytes producers (in thousand tonnes, data for 2017) are as follows: China (3,600), India (1,600), Morocco (1,000), Mexico (400), United States (330), Iran (280), Turkey (250), Russia (210), Kazakhstan (160), Thailand (130) and Laos (120). The main users of barytes in 2017 were (in million tonnes) US (2.35), China (1.60), Middle East (1.55), the European Union and Norway (0.60), Russia and CIS (0.5), South America (0.35), Africa (0.25), and Canada (0.20). 70% of barytes was destined for oil and gas well drilling muds. 15% for barium chemicals, 14% for filler applications in automotive, construction, and paint industries, and 1% other applications. Natural baryte formed under hydrothermal conditions may be associated with quartz or silica. In hydrothermal vents, the baryte-silica mineralisation can also be accompanied by precious metals. Information about the mineral resource base of baryte ores is presented in some scientific articles. Uses In oil and gas drilling Worldwide, 69–77% of baryte is used as a weighting agent for drilling fluids in oil and gas exploration to suppress high formation pressures and prevent blowouts. As a well is drilled, the bit passes through various formations, each with different characteristics. The deeper the hole, the more baryte is needed as a percentage of the total mud mix. An additional benefit of baryte is that it is non-magnetic and thus does not interfere with magnetic measurements taken in the borehole, either during logging-while-drilling or in separate drill hole logging. Baryte used for drilling petroleum wells can be black, blue, brown or gray depending on the ore body. The baryte is finely ground so that at least 97% of the material, by weight, can pass through a 200-mesh (75 μm) screen, and no more than 30%, by weight, can be less than 6 μm diameter. The ground baryte also must be dense enough so that its specific gravity is 4.2 or greater, soft enough to not damage the bearings of a tricone drill bit, chemically inert, and containing no more than 250 milligrams per kilogram of soluble alkaline salts. In August 2010, the American Petroleum Institute published specifications to modify the 4.2 drilling grade standards for baryte to include 4.1 SG materials. In oxygen and sulfur isotopic analysis In the deep ocean, away from continental sources of sediment, pelagic baryte precipitates and forms a significant amount of the sediments. Since baryte has oxygen, systematics in the δ18O of these sediments have been used to help constrain paleotemperatures for oceanic crust. The variations in sulfur isotopes (34S/32S) are being examined in evaporite minerals containing sulfur (e.g. baryte) and carbonate associated sulfates (CAS) to determine past seawater sulfur concentrations which can help identify specific depositional periods such as anoxic or oxic conditions. The use of sulfur isotope reconstruction is often paired with oxygen when a molecule contains both elements. Geochronological dating Dating the baryte in hydrothermal vents has been one of the major methods to determine their ages. Common methods to date hydrothermal baryte include radiometric dating and electron spin resonance dating. 
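The drilling-grade limits quoted above (at least 97% by weight passing a 200-mesh / 75 μm screen, no more than 30% below 6 μm, specific gravity at least 4.2 — or 4.1 under the 2010 amendment — and at most 250 mg/kg of soluble alkaline salts) can be collected into a simple screening check. The function below is only a sketch of such a check, not part of any API standard.

```python
def meets_drilling_grade(pct_passing_75um, pct_below_6um, specific_gravity,
                         soluble_alkaline_salts_mg_per_kg, min_sg=4.2):
    """Check a ground baryte sample against the drilling-grade limits quoted above.

    Set min_sg=4.1 to reflect the August 2010 API amendment mentioned in the text.
    """
    return (pct_passing_75um >= 97.0
            and pct_below_6um <= 30.0
            and specific_gravity >= min_sg
            and soluble_alkaline_salts_mg_per_kg <= 250.0)

if __name__ == "__main__":
    print(meets_drilling_grade(98.2, 22.0, 4.25, 180.0))        # True
    print(meets_drilling_grade(98.2, 22.0, 4.15, 180.0))        # False at the 4.2 SG limit
    print(meets_drilling_grade(98.2, 22.0, 4.15, 180.0, 4.1))   # True under the 4.1 SG amendment
```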
Other uses Baryte is used in added-value applications which include filler in paint and plastics, sound reduction in engine compartments, coat of automobile finishes for smoothness and corrosion resistance, friction products for automobiles and trucks, radiation shielding concrete, glass ceramics, and medical applications (for example, a barium meal before a contrast CT scan). Baryte is supplied in a variety of forms and the price depends on the amount of processing; filler applications commanding higher prices following intense physical processing by grinding and micronising, and there are further premiums for whiteness and brightness and color. It is also used to produce other barium chemicals, notably barium carbonate which is used for the manufacture of LED glass for television and computer screens (historically in cathode-ray tubes); and for dielectrics. Historically, baryte was used for the production of barium hydroxide for sugar refining, and as a white pigment for textiles, paper, and paint. Although baryte contains the toxic alkaline earth metal barium, it is not detrimental for human health, animals, plants and the environment because barium sulfate is extremely insoluble in water. It is also sometimes used as a gemstone. See also Hokutolite Rose rock References Further reading Barium minerals Sulfate minerals Evaporite Gemstones Industrial minerals Luminescent minerals Orthorhombic minerals Baryte group Minerals in space group 62
Baryte
[ "Physics", "Chemistry" ]
1,794
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
59,444
https://en.wikipedia.org/wiki/Energy%20level
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond with the principal quantum numbers ( = 1, 2, 3, 4, ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N, ...). Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it. Explanation Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. 
A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy. History The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. Atoms Intrinsic energy levels In the formulas for energy of electrons at various levels given below in an atom, the zero point for energy is set when the electron in question has completely left the atom; i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom with any smaller value of n, the electron's energy is lower and is considered negative. Orbital state energy level: atom/ion with nucleus + one electron Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by: En = −hcR∞Z²/n² (typically between 1 eV and 10³ eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is the Planck constant, and c is the speed of light. (A numerical sketch of this formula is given below.) For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n. This equation is obtained by combining the Rydberg formula for any hydrogen-like element (shown below) with E = hν = hc/λ, assuming that the principal quantum number n above equals n1 in the Rydberg formula and that n2 = ∞ (the principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data. An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants. Electron–electron interactions in atoms If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number. In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the molecule affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. 
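The hydrogen-like level formula quoted above, En = −hcR∞Z²/n², is easy to evaluate numerically. The sketch below uses standard values of the constants; the particular ions and levels chosen are only examples.

```python
H = 6.62607015e-34        # Planck constant, J·s
C = 2.99792458e8          # speed of light, m/s
RYD = 1.0973731568e7      # Rydberg constant R_inf, 1/m
EV = 1.602176634e-19      # joules per electronvolt

def level_energy_ev(Z, n):
    """Energy of level n of a hydrogen-like ion of atomic number Z, in eV."""
    return -H * C * RYD * Z**2 / n**2 / EV

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"H   n={n}: {level_energy_ev(1, n):8.3f} eV")   # about -13.6, -3.4, -1.5 eV
    print(f"He+ n=1: {level_energy_ev(2, 1):8.3f} eV")          # about -54.4 eV
    # Energy of the n=2 -> n=1 transition in hydrogen (Lyman-alpha photon):
    print(f"Lyman-alpha: {level_energy_ev(1, 2) - level_energy_ev(1, 1):.3f} eV")
```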
For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule. Fine structure splitting Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10−3 eV. Hyperfine structure This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a typical change in the energy levels by a typical order of magnitude of 10−4 eV. Energy levels due to external fields Zeeman effect There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by U = −μL · B, with μL = −(e/2me) L. The magnetic moment arising from the electron spin must also be taken into account: due to relativistic effects (Dirac equation), there is a magnetic moment, μS, arising from the electron spin, S, with μS = −gS (e/2me) S, where gS is the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ = μL + μS. The interaction energy therefore becomes U = −μ · B. Stark effect Molecules Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. In polyatomic molecules, different vibrational and rotational energy levels are also involved. Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that: E = Eelectronic + Evibrational + Erotational + Enuclear + Etranslational, where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. Energy level diagrams There are various types of energy level diagrams for bonds between atoms in a molecule. Examples Molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams. Energy level transitions Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. 
Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no more effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon equal to the energy difference. A photon's energy is equal to the Planck constant (h) times its frequency (ν) and thus is proportional to its frequency, or inversely to its wavelength (λ): ΔE = hν = hc/λ, since c, the speed of light, equals λν. Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n. A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition (see the numerical sketch below). In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. 
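The relation ΔE = hν = hc/λ above places a transition energy in the electromagnetic spectrum. In the sketch below the region boundaries are approximate conventional values chosen only for illustration, and the example energies are typical orders of magnitude rather than values from the article.

```python
H = 6.62607015e-34    # Planck constant, J·s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_wavelength_nm(delta_e_ev):
    """Wavelength (nm) of a photon matching a transition energy, λ = hc/ΔE."""
    return H * C / (delta_e_ev * EV) * 1e9

def spectral_region(wavelength_nm):
    # Approximate, conventional boundaries -- assumptions for illustration only.
    if wavelength_nm < 10:
        return "X-ray"
    if wavelength_nm < 380:
        return "ultraviolet"
    if wavelength_nm < 750:
        return "visible"
    if wavelength_nm < 1e6:
        return "infrared"
    return "microwave or longer"

if __name__ == "__main__":
    # eV: core-level, Lyman-alpha, visible, typical vibrational, typical rotational
    for de in (1000.0, 10.2, 2.5, 0.2, 1e-4):
        lam = photon_wavelength_nm(de)
        print(f"ΔE = {de:9.4f} eV  ->  λ ≈ {lam:12.1f} nm  ({spectral_region(lam)})")
```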
Higher temperature causes fluid atoms and molecules to move faster increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow. An electron further from the nucleus has higher potential energy than an electron closer to the nucleus, thus it becomes less bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus. Crystalline materials Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal. See also Perturbation theory (quantum mechanics) Atomic clock Computational chemistry References Chemical properties Atomic physics Molecular physics Quantum chemistry Theoretical chemistry Computational chemistry Spectroscopy pl:Powłoka elektronowa
Energy level
[ "Physics", "Chemistry" ]
3,181
[ "Molecular physics", "Spectrum (physical sciences)", "Quantum chemistry", "Instrumental analysis", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", "Atomic physics", " molecular", "nan", "Atomic", "Spectroscopy", " and optical physics" ]
59,446
https://en.wikipedia.org/wiki/Celestine%20%28mineral%29
Celestine (the IMA-accepted name) or celestite is a mineral consisting of strontium sulfate (SrSO4). The mineral is named for its occasional delicate blue color. Celestine and the carbonate mineral strontianite are the principal sources of the element strontium, commonly used in fireworks and in various metal alloys. Etymology Celestine derives its name from the Latin word caelestis meaning celestial, which in turn is derived from the Latin word caelum meaning sky, air, weather, atmosphere and heaven. Occurrence Celestine occurs as crystals, and also in compact massive and fibrous forms. It is mostly found in sedimentary rocks, often associated with the minerals gypsum, anhydrite, and halite. On occasion in some localities, it may also be found with sulfur inclusions. The mineral is found worldwide, usually in small quantities. Pale blue crystal specimens are found in Madagascar. White and orange variants also occurred at Yate, Bristol, UK, where it was extracted for commercial purposes until April 1991. The skeletons of the protozoan Acantharea are made of celestine, unlike those of other radiolarians which are made of silica. In carbonate marine sediments, burial dissolution is a recognized mechanism of celestine precipitation. It is sometimes used as a gemstone. Geodes Celestine crystals are found in some geodes. The world's largest known geode, a celestine geode in diameter at its widest point, is located near the village of Put-in-Bay, Ohio, on South Bass Island in Lake Erie. The geode has been converted into a viewing cave, Crystal Cave, with the crystals which once composed the floor of the geode removed. The geode has celestine crystals as wide as across, estimated to weigh up to each. Celestine geodes are understood to form by replacement of alabaster nodules consisting of the calcium sulfates gypsum or anhydrite. Calcium sulfate is sparingly soluble, but strontium sulfate is mostly insoluble. Strontium-bearing solutions that come into contact with calcium sulfate nodules dissolve the calcium away, leaving a cavity. The strontium is immediately precipitated as celestine, with the crystals growing into the newly formed cavity. See also List of minerals Footnotes References External links Strontium minerals Sulfate minerals Orthorhombic minerals Minerals in space group 62 Luminescent minerals Evaporite Gemstones Baryte group Minerals described in 1798
Celestine (mineral)
[ "Physics", "Chemistry" ]
526
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
59,450
https://en.wikipedia.org/wiki/Strontianite
Strontianite (SrCO3) is an important raw material for the extraction of strontium. It is a rare carbonate mineral and one of only a few strontium minerals. It is a member of the aragonite group. Aragonite group members: aragonite (CaCO3), witherite (BaCO3), strontianite (SrCO3), cerussite (PbCO3) The ideal formula of strontianite is SrCO3, with molar mass 147.63 g, but calcium (Ca) can substitute for up to 27% of the strontium (Sr) cations, and barium (Ba) up to 3.3%. The mineral was named in 1791 for the locality, Strontian, Argyllshire, Scotland, where the element strontium had been discovered the previous year. Although good mineral specimens of strontianite are rare, strontium is a fairly common element, with abundance in the Earth's crust of 370 parts per million by weight, 87 parts per million by moles, much more common than copper with only 60 parts per million by weight, 19 by moles. Strontium is never found free in nature. The principal strontium ores are celestine SrSO4 and strontianite SrCO3. The main commercial process for strontium metal production is reduction of strontium oxide with aluminium. Unit cell Strontianite is an orthorhombic mineral, belonging to the most symmetrical class in this system, 2/m 2/m 2/m, whose general form is a rhombic dipyramid. The space group is Pmcn. There are four formula units per unit cell (Z = 4) and the unit cell parameters are a = 5.1 Å, b = 8.4 Å, c = 6.0 Å. Structure Strontianite is isostructural with aragonite. When the CO3 group is combined with large divalent cations with ionic radii greater than 1.0 Å, the radius ratios generally do not permit stable 6-fold coordination. For small cations the structure is rhombohedral, but for large cations it is orthorhombic. This is the aragonite structure type with space group Pmcn. In this structure the CO3 groups lie perpendicular to the c axis, in two structural planes, with the CO3 triangular groups of one plane pointing in opposite directions to those of the other. These layers are separated by layers of cations. The CO3 group is slightly non-planar; the carbon atom lies 0.007 Å out of the plane of the oxygen atoms. The groups are tilted such that the angle between a plane drawn through the oxygen atoms and a plane parallel to the a-b unit cell plane is 2°40’. Crystal form Strontianite occurs in several different habits. Crystals are short prismatic parallel to the c axis and often acicular. Calcium-rich varieties often show steep pyramidal forms. Crystals may be pseudo hexagonal due to equal development of different forms. Prism faces are striated horizontally. The mineral also occurs as columnar to fibrous, granular or rounded masses. Optical properties Strontianite is colourless, white, gray, light yellow, green or brown, colourless in transmitted light. It may be longitudinally zoned. It is transparent to translucent, with a vitreous (glassy) lustre, resinous on broken surfaces, and a white streak. It is a biaxial(−) mineral. The direction perpendicular to the plane containing the two optic axes is called the optical direction Y. In strontianite Y is parallel to the b crystal axis. The optical direction Z lies in the plane containing the two optic axes and bisects the acute angle between them. In strontianite Z is parallel to the a crystal axis. The third direction X, perpendicular both to Y and to Z, is parallel to the c crystal axis. 
The refractive indices are close to nα = 1.52, nβ = 1.66, nγ = 1.67, with different sources quoting slightly different values: nα = 1.520, nβ = 1.667, nγ = 1.669; nα = 1.516–1.520, nβ = 1.664–1.667, nγ = 1.666–1.668; nα = 1.517, nβ = 1.663, nγ = 1.667 (synthetic material). The maximum birefringence δ is 0.15 and the measured value of 2V is 7°, calculated 12° to 8°. If the colour of the incident light is changed, then the refractive indices are modified, and the value of 2V changes. This is known as dispersion of the optic axes. For strontianite the effect is weak, with 2V larger for violet light than for red light, r < v. Luminescence Strontianite is almost always fluorescent. It fluoresces bright yellowish white under shortwave, mediumwave and longwave ultraviolet radiation. If the luminescence persists after the ultraviolet source is switched off, the sample is said to be phosphorescent. Most strontianite phosphoresces a strong, medium duration, yellowish white after exposure to all three wavelengths. It is also fluorescent and phosphorescent in X-rays and electron beams. All materials will glow red hot if they are heated to a high enough temperature (provided they do not decompose first); some materials become luminescent at much lower temperatures, and this is known as thermoluminescence. Strontianite is sometimes thermoluminescent. Physical properties Cleavage is nearly perfect parallel to one set of prism faces, {110}, and poor on {021}. Traces of cleavage have been observed on {010}. Twinning is very common, with twin plane {110}. The twins are usually contact twins; in a contact twin the two individuals appear to be reflections of each other in the twin plane. Penetration twins of strontianite are rarer; penetration twins are made up of interpenetrating individuals that are related to each other by rotation about a twin axis. Repeated twins are made up of three or more individuals twinned according to the same law. If all the twin planes are parallel then the twin is polysynthetic, otherwise it is cyclic. In strontianite repeated twinning forms cyclic twins with three or four individuals, or polysynthetic twins. The mineral is brittle, and breaks with a subconchoidal to uneven fracture. It is quite soft, with a Mohs hardness of about 3½, between calcite and fluorite. The specific gravity of the pure endmember with no calcium substituting for strontium is 3.78, but most samples contain some calcium, which is lighter than strontium, giving a lower specific gravity, in the range 3.74 to 3.78 (see the short density calculation below). Substitutions of the heavier ions barium and/or lead increase the specific gravity, although such substitutions are never very abundant. Strontianite is soluble in dilute hydrochloric acid HCl and it is not radioactive. Environment and associations Strontianite is an uncommon low-temperature hydrothermal mineral formed in veins in limestone, marl, and chalk, and in geodes and concretions. It occurs rarely in hydrothermal metallic veins but is common in carbonatites. It most likely crystallises at or near 100 °C. Its occurrence in open vugs and veins suggests crystallisation at very low pressures, probably at most equal to the hydrostatic pressure of the ground water. Under appropriate conditions it alters to celestine SrSO4, and it is itself found as an alteration from celestine. These two minerals are often found in association, together with baryte, calcite, harmotome and sulfur. Occurrences Type locality The type locality is Strontian, North West Highlands (Argyllshire), Scotland, UK. 
The type material occurred in veins in gneiss. Other UK localities include Brownley Hill Mine (Bloomsberry Horse Level), Nenthead, Alston Moor District, North Pennines, North and Western Region (Cumberland), Cumbria, England, associated with a suite of primary minerals (bournonite, millerite and ullmannite) which are not common in other Mississippi Valley-type deposits. Canada The Francon quarry, Montréal, Québec. Strontianite is very common at the Francon Quarry, in a great variety of habits. It is a late stage mineral, sometimes found as multiple generations. It is found as translucent to opaque, white to pale yellow or beige generally smooth surfaced spheroids, hemispheres and compact spherical and botryoidal aggregates to 10 cm in diameter, and as spheres consisting of numerous radiating acicular crystals, up to 1 cm across. Also as tufts, parallel bundles, and sheaf-like clusters of fibrous to acicular crystals, and as white, finely granular porcelaneous and waxy globular aggregates. Transparent, pale pink, columnar to tabular sixling twins up to 1 cm in diameter have been found, and aggregates of stacked stellate sixling twins consisting of transparent, pale yellow tabular crystals. Another Canadian occurrence is at Nepean, Ontario, in vein deposits in limestone. Germany Commercially important deposits occur in marls in Westphalia, and it is also found with zeolites at Oberschaffhausen, Bötzingen, Kaiserstuhl, Baden-Württemberg. India In Trichy (Tiruchirappalli; Tiruchi), Tiruchirapalli District, Tamil Nadu, it occurs with celestine SrSO4, gypsum and phosphate nodules in clay. Mexico It occurs in the Sierra Mojada District, with celestine in a lead-silver deposit. Russia It occurs in the Kirovskii apatite mine, Kukisvumchorr Mt, Khibiny Massif, Kola Peninsula, Murmanskaja Oblast', Northern Region, in late hydrothermal assemblages in cavities in pegmatites, associated with kukharenkoite-(La), microcline, albite, calcite, nenadkevichite, hilairite, catapleiite, donnayite-(Y), synchysite-(Ce), pyrite and others. It also occurs at Yukspor Mountain, Khibiny Massif, Kola Peninsula, Murmanskaja Oblast', Northern Region, in an aegirine-natrolite-microcline vein in foyaite, associated with aegirine, anatase, ancylite-(Ce), barylite, catapleiite, cerite-(Ce), cerite-(La), chabazite-(Ca), edingtonite, fluorapatite, galena, ilmenite, microcline, natrolite, sphalerite and vanadinite. At the same locality it was found in alkaline pegmatite veins associated with clinobarylite, natrolite, aegirine, microcline, catapleiite, fluorapatite, titanite, fluorite, galena, sphalerite, annite, astrophyllite, lorenzenite, labuntsovite-Mn, kuzmenkoite-Mn, cerite-(Ce), edingtonite, ilmenite and calcite. United States In the Gulf coast of Louisiana and Texas, strontianite occurs with celestine in calcite cap rock of salt domes. At the Minerva Number 1 Mine (Ozark-Mahoning Number 1 Mine) Ozark-Mahoning Group, Cave-in-Rock, Illinois, in the Kentucky Fluorspar District, Hardin County, strontianite occurs as white, brown or rarely pink tufts and bowties of acicular crystals with slightly curved terminations. In the Silurian Lockport Group, Central and Western New York strontianite is observed in cavities in eastern Lockport, where it occurs as small white radiating sprays of acicular crystals. In Schoharie County, New York, it occurs in geodes and veins with celestine and calcite in limestone, and in Mifflin County, Pennsylvania, it occurs with aragonite, again in limestone. 
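The unit-cell data and molar mass given in the Unit cell section above (a = 5.1 Å, b = 8.4 Å, c = 6.0 Å, Z = 4, M = 147.63 g/mol) fix the theoretical density, which can be compared with the specific gravity of 3.74–3.78 quoted under Physical properties. The short calculation below is only a consistency check; the rounded cell parameters limit its precision.

```python
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def xray_density(a_ang, b_ang, c_ang, z, molar_mass_g):
    """Theoretical density (g/cm^3) of an orthorhombic crystal from its unit cell."""
    volume_cm3 = (a_ang * 1e-8) * (b_ang * 1e-8) * (c_ang * 1e-8)  # Å -> cm
    return z * molar_mass_g / (N_A * volume_cm3)

if __name__ == "__main__":
    rho = xray_density(5.1, 8.4, 6.0, z=4, molar_mass_g=147.63)
    print(f"calculated density ≈ {rho:.2f} g/cm^3")
    # ≈ 3.8 g/cm^3, close to the measured specific gravity range of 3.74–3.78.
```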
See also Strontian process References External links JMol Lochaber Strontium minerals Carbonate minerals Aragonite group Orthorhombic minerals Minerals in space group 62 Luminescent minerals Geology of Scotland Minerals described in 1791
Strontianite
[ "Chemistry" ]
2,681
[ "Luminescence", "Luminescent minerals" ]
59,456
https://en.wikipedia.org/wiki/Hydrate
In chemistry, a hydrate is a substance that contains water or its constituent elements. The chemical state of the water varies widely between different classes of hydrates, some of which were so labeled before their chemical structure was understood. Chemical nature Inorganic chemistry Hydrates are inorganic salts "containing water molecules combined in a definite ratio as an integral part of the crystal" that are either bound to a metal center or that have crystallized with the metal complex. Such hydrates are also said to contain water of crystallization or water of hydration. If the water is heavy water in which the constituent hydrogen is the isotope deuterium, then the term deuterate may be used in place of hydrate. A colorful example is cobalt(II) chloride, which turns from blue to red upon hydration, and can therefore be used as a water indicator. The notation "hydrated compound⋅nH2O", where n is the number of water molecules per formula unit of the salt, is commonly used to show that a salt is hydrated. The n is usually a low integer, though it is possible for fractional values to occur. For example, in a monohydrate n = 1, and in a hexahydrate n = 6. Numerical prefixes, mostly of Greek origin, include hemi- (1/2), mono- (1), sesqui- (3/2), di- (2), tri- (3), tetra- (4), penta- (5), hexa- (6), hepta- (7), octa- (8), nona- (9) and deca- (10). A hydrate that has lost water is referred to as an anhydride; the remaining water, if any exists, can only be removed with very strong heating. A substance that does not contain any water is referred to as anhydrous. Some anhydrous compounds are hydrated so easily that they are said to be hygroscopic and are used as drying agents or desiccants. Organic chemistry In organic chemistry, a hydrate is a compound formed by hydration, i.e. "Addition of water or of the elements of water (i.e. H and OH) to a molecular entity". For example: ethanol, C2H5OH, is the product of the hydration reaction of ethene, C2H4, formed by the addition of H to one C and OH to the other C, and so can be considered as the hydrate of ethene. A molecule of water may be eliminated, for example, by the action of sulfuric acid. Another example is chloral hydrate, CCl3CH(OH)2, which can be formed by reaction of water with chloral, CCl3CHO. Many organic molecules, as well as inorganic molecules, form crystals that incorporate water into the crystalline structure without chemical alteration of the organic molecule (water of crystallization). The sugar trehalose, for example, exists both in an anhydrous form (melting point 203 °C) and as a dihydrate (melting point 97 °C). Protein crystals commonly have as much as 50% water content. Molecules are also labeled as hydrates for historical reasons not covered above. Glucose, C6H12O6, was originally thought of as C6(H2O)6 and described as a carbohydrate. Hydrate formation is common for active ingredients. Many manufacturing processes provide an opportunity for hydrates to form, and the state of hydration can be changed with environmental humidity and time. The state of hydration of an active pharmaceutical ingredient can significantly affect the solubility and dissolution rate and therefore its bioavailability. Clathrate hydrates Clathrate hydrates (also known as gas hydrates, gas clathrates, etc.) are water ice with gas molecules trapped within; they are a form of clathrate. An important example is methane hydrate (also known as gas hydrate, methane clathrate, etc.). Nonpolar molecules such as methane can form clathrate hydrates with water, especially under high pressure. 
Although there is no hydrogen bonding between water and guest molecules when methane is the guest molecule of the clathrate, guest–host hydrogen bonding often forms when the guest is a larger organic molecule such as tetrahydrofuran. In such cases the guest–host hydrogen bonds result in the formation of L-type Bjerrum defects in the clathrate lattice. Stability The stability of hydrates is generally determined by the nature of the compounds, their temperature, and the relative humidity (if they are exposed to air). See also Efflorescence Hygroscopy Mineral hydration Water of crystallization Hemihydrate Hydrous oxide References Hydrates
Hydrate
[ "Chemistry" ]
892
[ "Hydrates" ]
59,465
https://en.wikipedia.org/wiki/Jeffery%20Amherst%2C%201st%20Baron%20Amherst
Field Marshal Jeffery Amherst, 1st Baron Amherst (29 January 1717 – 3 August 1797) was a British Army officer and Commander-in-Chief of the Forces in the British Army. Amherst is credited as the architect of Britain's successful campaign to conquer the territory of New France during the Seven Years' War. Under his command, British forces captured the cities of Louisbourg, Quebec City and Montreal, as well as several major fortresses. He was also the first British governor general in the territories that eventually became Canada. Numerous places and streets are named after him, in both Canada and the United States. Amherst's legacy is controversial due to his expressed desire to spread smallpox among the disaffected tribes of Indians during Pontiac's War, which has led to a reconsideration of how he is commemorated. In 2019, the city of Montreal removed his name from a street, renaming it Rue Atateken, from the Kanien'kéha Mohawk language. Early life The son of Jeffrey Amherst (d. 1750), a Kentish lawyer, and Elizabeth Amherst (née Kerrill), Jeffery Amherst was born in Sevenoaks, England, on 29 January 1717. At an early age, he became a page to the Duke of Dorset. Amherst became an ensign in the Grenadier Guards in 1735. Amherst served in the War of the Austrian Succession, becoming an aide to General John Ligonier and participating in the Battle of Dettingen in June 1743 and the Battle of Fontenoy in May 1745. Promoted to lieutenant colonel on 25 December 1745, he also saw action at the Battle of Rocoux in October 1746. He then became an aide to the Duke of Cumberland, the commander of the British forces, and saw further action at the Battle of Lauffeld in July 1747. Seven Years' War Germany In February 1756, Amherst was appointed commissar to the Hessian forces that had been assembled to defend Hanover as part of the Army of Observation. As it appeared likely that a French invasion attempt against Britain itself was imminent, Amherst was ordered in April to arrange the transportation of thousands of the Germans to southern England to bolster Britain's defences. He was made colonel of the 15th Regiment of Foot on 12 June 1756. By 1757, as the immediate danger to Britain had passed, the troops were moved back to Hanover to join a growing army under the Duke of Cumberland, and Amherst fought with the Hessians under Cumberland's command at the Battle of Hastenbeck in July 1757. The Allied defeat there forced the army into a steady retreat northwards to Stade near the North Sea coast. Amherst was left dispirited by the retreat and by the Convention of Klosterzeven, by which Hanover agreed to withdraw from the war. He began to prepare to disband the Hessian troops under his command, only to receive word that the Convention had been repudiated and the Allied force was being reformed. French and Indian War Amherst gained fame during the Seven Years' War, particularly in the North American campaign known in the United States as the French and Indian War, when he led the British attack on Louisbourg on Cape Breton Island in June 1758. In the wake of this action, he was appointed commander-in-chief of the British army in North America and colonel-in-chief of the 60th (Royal American) Regiment in September 1758. Amherst then led an army against French troops on Lake Champlain, where he captured Fort Ticonderoga in July 1759, while another army under William Johnson took Niagara, also in July 1759, and James Wolfe besieged and eventually captured Quebec with a third army in September 1759. 
Amherst served as the nominal Crown Governor of Virginia from 12 September 1759. From July 1760, Amherst led an army down the Saint Lawrence River from Fort Oswego, joined with Brigadier Murray from Quebec and Brigadier Haviland from Ile-aux-Noix in a three-way pincer, and captured Montreal, ending French rule in North America on 8 September. He infuriated the French commanders by refusing them the honours of war; the Chevalier de Lévis burned the colours rather than surrendering them, to highlight his differences with Vaudreuil for later political advantage back in France. The British settlers were relieved and proclaimed a day of thanksgiving. Boston newspapers recount how the occasion was celebrated with a parade, a grand dinner in Faneuil Hall, music, bonfires, and firing of cannon. The Rev. Thomas Foxcroft of the First Church in Boston offered public thanks as well. In recognition of this victory, Amherst was appointed Governor-General of British North America in September 1760 and promoted to major-general on 29 November 1760. He was appointed Knight of the Order of the Bath on 11 April 1761. From his base at New York, Amherst oversaw the dispatch of troops under Monckton and Haviland to take part in British expeditions in the West Indies that led to the British capture of Dominica in 1761 and Martinique and Cuba in 1762. Pontiac's War The uprising of many Native American tribes in the Ohio Valley and Great Lakes region, commonly referred to as Pontiac's War after one of its most notable leaders, began in early 1763. From 1753, when the French first invaded the territory, to February 1763, when peace was formally declared between the English and French, the Six Nations and tenant tribes always maintained that both the French and the British must remain east of the Allegheny Mountains. After the British failed to keep their word to withdraw from the Ohio and Allegheny valleys, a loose confederation of Native American tribes including the Delawares, the Shawnees, the Senecas, the Mingoes, the Mohicans, the Miamis, the Ottawas and the Wyandots, who were opposed to British post-war occupation of the region, banded together in an effort to drive the British out of their territory. One of the most infamous and well-documented issues during Pontiac's War was the use of biological warfare against Native Americans and Amherst's role in supporting it. Colonel Henry Bouquet, the commander of Fort Pitt, ordered smallpox-infested blankets to be given to Native Americans when a group of them laid siege to the fortification in June 1763. During a parley in the midst of the siege on 24 June 1763, Captain Simeon Ecuyer gave representatives of the besieging Delawares two blankets and a handkerchief enclosed in small metal boxes that had been exposed to smallpox, in an attempt to spread the disease to the Natives in order to end the siege. William Trent, the trader turned militia commander who had come up with the plan, sent an invoice to the British colonial authorities in North America indicating that the purpose of giving the blankets was "to Convey the Smallpox to the Indians." The invoice was approved by Thomas Gage, then serving as Commander-in-Chief, North America. Reporting on parleys with Delaware chiefs on 24 June, Trent wrote: '[We] gave them two Blankets and an Handkerchief out of the Small Pox Hospital. I hope it will have the desired effect.' The military hospital records confirm that two blankets and handkerchiefs were 'taken from people in the Hospital to Convey the Smallpox to the Indians.' 
The fort commander paid for these items, which he certified 'were had for the uses above mentioned.' A reported outbreak that began the spring before left as many as one hundred Native Americans dead in Ohio Country from 1763 to 1764. It is not clear, however, whether the smallpox was a result of the Fort Pitt incident or whether the virus was already present among the Delaware people, as outbreaks happened on their own every dozen or so years, and the delegates who were met again later had seemingly not contracted smallpox. A month later, the use of smallpox blankets was discussed by Amherst himself in letters to Bouquet. Amherst, having learned that smallpox had broken out among the garrison at Fort Pitt, and after learning of the loss of his forts at Venango, Le Boeuf and Presqu'Isle, wrote to Colonel Bouquet: Could it not be contrived to send the small pox among the disaffected tribes of Indians? We must on this occasion use every stratagem in our power to reduce them. Bouquet, who was already marching to relieve Fort Pitt from the siege, agreed with this suggestion in a postscript when he responded to Amherst just days later on 13 July 1763: P.S. I will try to inocculate the Indians by means of Blankets that may fall in their hands, taking care however not to get the disease myself. As it is pity to oppose good men against them, I wish we could make use of the Spaniard's Method, and hunt them with English Dogs. Supported by Rangers, and some Light Horse, who would I think effectively extirpate or remove that Vermine. In response, also in a postscript, Amherst replied: P.S. You will Do well to try to Innoculate the Indians by means of Blankets, as well as to try Every other method that can serve to Extirpate this Execrable Race. I should be very glad your Scheme for Hunting them Down by Dogs could take Effect, but England is at too great a Distance to think of that at present. Amherst was summoned home, ostensibly so that he could be consulted on future military plans in North America, and was replaced pro tem as Commander-in-Chief, North America, by Thomas Gage. Amherst expected to be praised for his conquest of Canada; however, once in London, he was instead asked to account for the recent Native American rebellion. He was forced to defend his conduct, and faced complaints made by William Johnson and George Croghan, who lobbied the Board of Trade for his removal and permanent replacement by Gage. He was also severely criticised by military subordinates on both sides of the Atlantic. Nevertheless, Amherst was promoted to lieutenant-general on 26 March 1765, and became colonel of the 3rd Regiment of Foot in November 1768. On 22 October 1772, Amherst was appointed Lieutenant-General of the Ordnance, and he soon gained the confidence of George III, who had initially hoped the position would go to a member of the Royal Family. On 6 November 1772, he became a member of the Privy Council. Commander-in-Chief American Revolutionary War Amherst was raised to the peerage on 14 May 1776, as Baron Amherst, of Holmesdale in the County of Kent. On 24 March 1778 he was promoted to full general and, in April 1778, he became Commander-in-Chief of the Forces, which gave him a seat in the Cabinet. 
In 1778, when the British commander in North America, William Howe, requested to be relieved, Amherst was considered as a replacement by the government. However, his insistence that it would require 75,000 troops to fully defeat the rebellion was not acceptable, and Henry Clinton was instead chosen to take over from Howe in America. Following the British setback at Saratoga, Amherst successfully argued for a limited war in North America, keeping footholds along the coast, defending Canada, East and West Florida, and the West Indies while putting more effort into the war at sea. On 7 November 1778 the King and Queen visited Amherst at his home, Montreal Park, in Kent, and on 24 April 1779 he became colonel of the 2nd Troop of Horse Grenadier Guards. A long-standing French plan had been an invasion of Great Britain, which they hoped would lead to a swift end to the war if successful. In 1779 Spain entered the war on the side of France, and the increasingly depleted state of British home forces made an invasion more appealing; Amherst organised Britain's land defences in anticipation of the invasion, which never materialised. Gordon Riots In June 1780, Amherst oversaw the British army as they suppressed the anti-Catholic Gordon Riots in London. After the outbreak of rioting, Amherst deployed the small London garrison of Horse and Foot Guards as well as he could, but was hindered by the reluctance of the civil magistrates to authorise decisive action against the rioters. Line troops and militia were brought in from surrounding counties, swelling the forces at Amherst's disposal to over 15,000, many of whom were quartered in tents in Hyde Park, and a form of martial law was declared, giving the troops the authority to fire on crowds if the Riot Act had first been read. Although order was eventually restored, Amherst was personally alarmed by the failure of the authorities to suppress the riots. In the wake of the Gordon Riots, Amherst was forced to resign as Commander-in-Chief in February 1782 and was replaced by Henry Conway. On 23 March 1782 he became captain and colonel of the 2nd Troop of Horse Guards. Later life French Revolutionary Wars On 8 July 1788, he became colonel of the 2nd Regiment of Life Guards and on 30 August 1788 he was created Baron Amherst (this time with the territorial designation of Montreal in the County of Kent) with a special provision that would allow this title to pass to his nephew (as Amherst was childless, the Holmesdale title became extinct upon his death). With the advent of the French Revolutionary Wars, Amherst was recalled as Commander-in-Chief of the Forces in January 1793; however, he is generally criticised for allowing the armed forces to slide into acute decline, a direct cause of the failure of the early campaigns in the Low Countries. Pitt the Younger said of him "his age, and perhaps his natural temper, are little suited to the activity and the energy which the present moment calls for". Horace Walpole called him "that log of wood whose stupidity and incapacity are past belief". "He allowed innumerable abuses to grow up in the army… He kept his command, though almost in his dotage, with a tenacity that cannot be too much censured". Family and death In 1753 he married Jane Dalison (1723–1765). Following her death, he married Elizabeth Cary (1740–1830), daughter of Lieutenant General George Cary (1712–1792), on 26 March 1767; she later became Lady Amherst of Holmesdale. There were no children by either marriage. 
He retired from that post in February 1795, to be replaced by the Duke of York, and was promoted to the rank of field marshal on 30 July 1796. He retired to his home at Montreal Park and died on 3 August 1797. He was buried in the Parish Church at Sevenoaks. Legacy Several places are named for him: Amherst Island, Ontario, Amherstburg, Ontario (location of General Amherst High School), Amherst, Massachusetts (location of the University of Massachusetts Amherst, Hampshire College and Amherst College), Amherst, New Hampshire, Amherst, Nova Scotia, Amherst, New York and Amherst County, Virginia. Amherst's desire to exterminate the indigenous people is now viewed as a dark stain on his legacy and various agencies, municipalities and institutions have reconsidered the use of the name "Amherst". "The Un-Canadians", a 2007 article in The Beaver, includes Amherst in a list of people in the history of Canada who are considered contemptible by the authors, because he "supported plans of distributing smallpox-infested blankets to First Nations people". In 2008, Mi'kmaq spiritual leader John Joe Sark called the name of Fort Amherst Park of Prince Edward Island a "terrible blotch on Canada", and said: "To have a place named after General Amherst would be like having a city in Jerusalem named after Adolf Hitler...it's disgusting." Sark raised his concerns again in a 29 January 2016 letter to the Canadian government. Mi'kmaq historian Daniel N. Paul, who referred to Amherst as motivated by white supremacist beliefs, also supports a name change, saying: "in the future I don't think there should ever be anything named after people who committed what can be described as crimes against humanity." In February 2016, a spokesperson for Parks Canada said it would review the matter after a proper complaint is filed; "Should there be a formal request from the public to change the name of the National Historic Site, Parks Canada would engage with the Historic Sites and Monuments Board of Canada for its recommendation." An online petition was launched by Sark to satisfy this formal request requirement on 20 February 2016. On 16 February 2018, the site was renamed Skmaqn–Port-la-Joye–Fort Amherst, adding a Mi'kmaq word alongside the French and English titles. In 2009, Montreal City Councillor Nicolas Montmorency officially asked that Rue Amherst be renamed: "it is totally unacceptable that a man who made comments supporting the extermination of Native Americans to be honoured in this way". On 13 September 2017, the city of Montreal decided that the street bearing his name would be renamed. On 21 June 2019, the street was officially renamed Rue Atateken, atateken being a Kanien'kehá word describing "those with whom one shares values," according to Kanehsatake historian Hilda Nicholas. Similarly, Rue Amherst in Gatineau was renamed Rue Wìgwàs (Anishinaabemowin for white birch) in 2023. In 2016, Amherst College dropped its "Lord Jeffery" mascot at the instigation of the students. It also renamed the Lord Jeffery Inn, a campus hotel owned by the college, to the Inn on Boltwood in early 2019. See also List of governors general of Canada Turtleheart Explanatory notes References Citations General bibliography Amherst, Jeffery (1931). 
The journal of Jeffery Amherst, recording the military career of General Amherst in America from 1758 to 1763 (Webster, John Clarence, ed) Toronto: The Ryerson Press; Chicago: University of Chicago Press Middleton, Richard, Pontiac's War: Its Causes, Course and Consequences New York and London, Routledge, 2007 Whitworth, Rex (February 1959). "Field-Marshal Lord Amherst: A Military Enigma" . History Today 9#2 pp. 132–137. External links Jeffery Amherst Collection at the Amherst College Archives & Special Collections Jeffery Amherst papers, William L. Clements Library, University of Michigan. Prof. Kevin Sweeney on Jeffery Amherst in America Historical Biographies: Jeffrey Amherst Amherst and Smallpox Amherst in the Haldimand Papers Amherst and Smallpox Blankets – Excerpts from actual letters in which Lord Jeffery Amherst approves smallpox plan (dated 16 July 1763) and discusses other methods of killing Native Americans with Colonel Henry Bouquet. Jeffrey Amherst and Smallpox Blankets – Extensive discussion and documentation of Amherst's involvement in warfare against Native Americans and the smallpox blanket tactics (University of Massachusetts Amherst) 1759 From the Warpath to the Plains of Abraham ( Virtual Exhibition) National Battlefields Commission (Plains of Abraham) 1717 births 1797 deaths Amherst County, Virginia Amherst, Nova Scotia Barons in the Peerage of Great Britain People related to biological warfare British Army personnel of the American Revolutionary War British Army personnel of the French and Indian War British Army personnel of the Seven Years' War British Army personnel of the War of the Austrian Succession British field marshals British Life Guards officers British people of Pontiac's War Buffs (Royal East Kent Regiment) officers Colonial governors of Virginia East Yorkshire Regiment officers Governors of British North America Governors of the Province of Quebec (1763–1791) Knights Companion of the Order of the Bath Peers of Great Britain created by George III People from Sevenoaks Military personnel from Kent Governors of Guernsey (1500–1835)
Jeffery Amherst, 1st Baron Amherst
[ "Biology" ]
3,947
[ "People related to biological warfare", "Biological warfare" ]
59,469
https://en.wikipedia.org/wiki/Linear%20cryptanalysis
In cryptography, linear cryptanalysis is a general form of cryptanalysis based on finding affine approximations to the action of a cipher. Attacks have been developed for block ciphers and stream ciphers. Linear cryptanalysis is one of the two most widely used attacks on block ciphers, the other being differential cryptanalysis. The discovery is attributed to Mitsuru Matsui, who first applied the technique to the FEAL cipher (Matsui and Yamagishi, 1992). Subsequently, Matsui published an attack on the Data Encryption Standard (DES), eventually leading to the first experimental cryptanalysis of the cipher reported in the open community (Matsui, 1993; 1994). The attack on DES is not generally practical, requiring 2⁴⁷ known plaintexts. A variety of refinements to the attack have been suggested, including using multiple linear approximations or incorporating non-linear expressions, leading to a generalized partitioning cryptanalysis. Evidence of security against linear cryptanalysis is usually expected of new cipher designs. Overview There are two parts to linear cryptanalysis. The first is to construct linear equations relating plaintext, ciphertext and key bits that have a high bias; that is, whose probabilities of holding (over the space of all possible values of their variables) are as close as possible to 0 or 1. The second is to use these linear equations in conjunction with known plaintext-ciphertext pairs to derive key bits. Constructing linear equations For the purposes of linear cryptanalysis, a linear equation expresses the equality of two expressions which consist of binary variables combined with the exclusive-or (XOR) operation. For example, the following equation, from a hypothetical cipher, states that the XOR sum of the first and third plaintext bits (as in a block cipher's block) and the first ciphertext bit is equal to the second bit of the key: P1 ⊕ P3 ⊕ C1 = K2. In an ideal cipher, any linear equation relating plaintext, ciphertext and key bits would hold with probability 1/2. Since the equations dealt with in linear cryptanalysis will vary in probability, they are more accurately referred to as linear approximations. The procedure for constructing approximations is different for each cipher. In the most basic type of block cipher, a substitution–permutation network, analysis is concentrated primarily on the S-boxes, the only nonlinear part of the cipher (i.e. the operation of an S-box cannot be encoded in a linear equation). For small enough S-boxes, it is possible to enumerate every possible linear equation relating the S-box's input and output bits, calculate their biases and choose the best ones. Linear approximations for S-boxes then must be combined with the cipher's other actions, such as permutation and key mixing, to arrive at linear approximations for the entire cipher. The piling-up lemma is a useful tool for this combination step. There are also techniques for iteratively improving linear approximations (Matsui 1994). Deriving key bits Having obtained a linear approximation of the form (XOR of some plaintext bits) ⊕ (XOR of some ciphertext bits) = (XOR of some key bits), we can then apply a straightforward algorithm (Matsui's Algorithm 2), using known plaintext-ciphertext pairs, to guess at the values of the key bits involved in the approximation. For each set of values of the key bits on the right-hand side (referred to as a partial key), count how many times the approximation holds true over all the known plaintext-ciphertext pairs; call this count T. 
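A minimal Python sketch may make these two steps concrete. Everything in it is an illustrative assumption rather than published attack code: the 4-bit S-box is an arbitrary example, and the approx_holds callback stands in for the cipher-specific partial decryption that a real attack would perform for each key guess. The rule used to pick among the resulting counts is the one explained in the paragraph that follows.

from itertools import product

def parity(x: int) -> int:
    """Return the XOR (parity) of the bits of x."""
    return bin(x).count("1") & 1

# Step 1: enumerate every linear approximation of a small S-box and measure its bias.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]   # arbitrary illustrative 4-bit S-box

def sbox_biases(sbox):
    """Map every non-trivial (input_mask, output_mask) pair to its signed bias."""
    size = len(sbox)
    biases = {}
    for in_mask, out_mask in product(range(1, size), repeat=2):
        holds = sum(parity(in_mask & x) == parity(out_mask & sbox[x])
                    for x in range(size))
        biases[(in_mask, out_mask)] = holds / size - 0.5
    return biases

best_masks, best_bias = max(sbox_biases(SBOX).items(), key=lambda kv: abs(kv[1]))
print("strongest S-box approximation:", best_masks, "signed bias:", best_bias)

# Step 2: the counting step of Matsui's Algorithm 2, in generic form.
# approx_holds(p, c, partial_key) is a hypothetical, cipher-specific predicate that
# tests whether the chosen approximation is satisfied for one plaintext/ciphertext
# pair under a guessed partial key (a real attack would partially decrypt here).
def counts_per_partial_key(pairs, candidate_keys, approx_holds):
    """Return {partial_key: T}, where T counts the pairs satisfying the approximation."""
    return {k: sum(approx_holds(p, c, k) for p, c in pairs) for k in candidate_keys}

def likeliest_partial_key(counts, num_pairs):
    """Select the guess whose count deviates most from num_pairs / 2."""
    return max(counts, key=lambda k: abs(counts[k] - num_pairs / 2))

Run on its own, the sketch prints the strongest single-S-box approximation; in a real attack the counting functions would be driven by a large set of known plaintext-ciphertext pairs and by the approximation chosen in step 1.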
The partial key whose T has the greatest absolute difference from half the number of plaintext-ciphertext pairs is designated as the most likely set of values for those key bits. This is because it is assumed that the correct partial key will cause the approximation to hold with a high bias. The magnitude of the bias is significant here, as opposed to the magnitude of the probability itself. This procedure can be repeated with other linear approximations, obtaining guesses at values of key bits, until the number of unknown key bits is low enough that they can be attacked with brute force. See also Piling-up lemma Differential cryptanalysis References External links Linear Cryptanalysis of DES A Tutorial on Linear and Differential Cryptanalysis Linear Cryptanalysis Demo A tutorial on linear (and differential) cryptanalysis of block ciphers "Improving the Time Complexity of Matsui's Linear Cryptanalysis", improves the complexity thanks to the Fast Fourier Transform Cryptographic attacks
Linear cryptanalysis
[ "Technology" ]
900
[ "Cryptographic attacks", "Computer security exploits" ]
59,473
https://en.wikipedia.org/wiki/Chesapeake%20Bay
The Chesapeake Bay ( ) is the largest estuary in the United States. The bay is located in the Mid-Atlantic region and is primarily separated from the Atlantic Ocean by the Delmarva Peninsula, including parts of the Eastern Shore of Maryland, the Eastern Shore of Virginia, and the state of Delaware. The mouth of the bay at its southern point is located between Cape Henry and Cape Charles. With its northern portion in Maryland and the southern part in Virginia, the Chesapeake Bay is a very important feature for the ecology and economy of those two states, as well as others surrounding within its watershed. More than 150 major rivers and streams flow into the bay's drainage basin, which covers parts of six states (New York, Pennsylvania, Delaware, Maryland, Virginia, and West Virginia) and all of Washington, D.C. The bay is approximately long from its northern headwaters in the Susquehanna River to its outlet in the Atlantic Ocean. It is wide at its narrowest (between Kent County's Plum Point near Newtown in the east and the Harford County western shore near Romney Creek) and at its widest (just south of the mouth of the Potomac River which divides Maryland from Virginia). Total shoreline including tributaries is , circumnavigating a surface area of . Average depth is , reaching a maximum of . The bay is spanned twice, in Maryland by the Chesapeake Bay Bridge from Sandy Point (near Annapolis) to Kent Island and in Virginia by the Chesapeake Bay Bridge–Tunnel connecting Virginia Beach to Cape Charles. Known for both its beauty and bounty, the bay has become "emptier", with fewer crabs, oysters and watermen (fishermen) since the mid-20th century. Nutrient pollution and urban runoff have been identified as major components of impaired water quality in the bay stressing ecosystems and compounding the decline of shellfish due to overharvesting. Restoration efforts that began in the 1990s have continued into the 21st century and show potential for growth of the native oyster population. The health of the Chesapeake Bay improved in 2015, marking three years of gains over a four-year period. Slight improvements in water quality were observed in 2021, compared to indicators measured in 2020. The bay is experiencing other environmental concerns, including climate change which is causing sea level rise that erodes coastal areas and infrastructure and changes to the marine ecosystem. Etymology The word is an Algonquian word referring to a village 'at a big river'. It is the seventh-oldest surviving English placename in the United States, first applied as Chesepiook by explorers heading north from the Roanoke Colony into a Chesapeake tributary in 1585 or 1586. The name may also refer to the Chesapeake people or the Chesepian, a Native American tribe who inhabited the area now known as South Hampton Roads in the U.S. state of Virginia. They occupied an area that is now the Norfolk, Portsmouth, Chesapeake, and Virginia Beach areas. In 2005, Algonquian linguist Blair Rudes "helped to dispel one of the area's most widely held beliefs: that 'Chesapeake' means something like 'great shellfish bay'. It does not, Rudes said. The name might have actually meant something like 'great water', or it might have just referred to a village location at the bay's mouth." Physical geography Geology and formation The Chesapeake Bay is an estuary to the North Atlantic, lying between the Delmarva Peninsula to the east and the North American mainland to the west. 
It is the ria, or drowned valley, of the Susquehanna River, meaning that it was the alluvial plain where the river flowed when the sea level was lower. It is not a fjord, because the Laurentide Ice Sheet never reached as far south as the northernmost point on the bay. North of Baltimore, the western shore borders the hilly Piedmont region of Maryland; south of the city the bay lies within the state's low-lying coastal plain, with sedimentary cliffs to the west, and flat islands, winding creeks and marshes to the east. The large rivers entering the bay from the west have broad mouths and are extensions of the main ria for miles up the course of each river. The bay's geology, its present form, and its very location were created by a bolide impact event at the end of the Eocene (about 35.5 million years ago), forming the Chesapeake Bay impact crater and much later the Susquehanna River valley. The bay was formed starting about 10,000 years ago when rising sea levels at the end of the last ice age flooded the Susquehanna River valley. Parts of the bay, especially the Calvert County, Maryland, coastline, are lined by cliffs composed of deposits from receding waters millions of years ago. These cliffs, generally known as Calvert Cliffs, are famous for their fossils, especially fossilized shark teeth, which are commonly found washed up on the beaches next to the cliffs. Scientists' Cliffs is a beach community in Calvert County named for the desire to create a retreat for scientists when the community was founded in 1935. Hydrology Much of the bay is shallow. At the point where the Susquehanna River flows into the bay, the average depth is , although this soon diminishes to an average of southeast of the city of Havre de Grace, Maryland, to about just north of Annapolis. On average, the depth of the bay is , including tributaries; over 24 percent of the bay is less than deep. Because the bay is an estuary, it has fresh water, salt water and brackish water. Brackish water has three salinity zones: oligohaline, mesohaline, and polyhaline. The freshwater zone runs from the mouth of the Susquehanna River to north Baltimore. The oligohaline zone has very little salt. Salinity varies from 0.5 ppt (parts per thousand) to 10 ppt, and freshwater species can survive there. The north end of the oligohaline zone is north Baltimore and the south end is the Chesapeake Bay Bridge. The mesohaline zone has a medium amount of salt and runs from the Bay Bridge to the mouth of the Rappahannock River. Salinity there ranges from 1.07% to 1.8%. The polyhaline zone is the saltiest zone, and some of the water can be as salty as sea water. It runs from the mouth of the Rappahannock River to the mouth of the bay. The salinity ranges from 1.87% to 3.6%. (3.6% is as salty as the ocean.) The climate of the area surrounding the bay is primarily humid subtropical, with hot, very humid summers and cold to mild winters. Only the area around the mouth of the Susquehanna River is continental in nature, and the mouth of the Susquehanna River and the Susquehanna flats often freeze in winter. It is rare for the surface of the bay to freeze in winter, something that happened most recently in the winter of 1976–77. The Chesapeake Bay is the end point of over 150 rivers and streams. 
The largest rivers flowing directly into the bay, in order of discharge, are the Susquehanna, Potomac, James, Rappahannock, York, Patuxent and Choptank Rivers. For more information on Chesapeake Bay rivers, see the List of Chesapeake Bay rivers. Flora and fauna The Chesapeake Bay is home to numerous fauna that either migrate to the bay at some point during the year or live there year-round. There are over 300 species of fish and numerous shellfish and crab species. Some of these include the Atlantic menhaden, striped bass, American eel, eastern oyster, Atlantic horseshoe crab, and the blue crab. Birds include ospreys, great blue herons, bald eagles, and peregrine falcons, the last two of which were threatened by DDT; their numbers plummeted but have risen in recent years. The piping plover is a near threatened species that inhabits the wetlands. Larger fish such as Atlantic sturgeon, varieties of sharks, and stingrays visit the Chesapeake Bay. The waters of the Chesapeake Bay have been regarded as one of the most important nursery areas for sharks along the East Coast. Megafauna such as bull sharks, tiger sharks, scalloped hammerhead sharks, basking sharks and manta rays are also known to visit. Smaller species of sharks and stingrays that are known to be regular to occasional residents in the bay include the smooth dogfish, spiny dogfish, cownose ray, and bonnethead. Bottlenose dolphins are known to live in the bay seasonally or year-round. There have been unconfirmed sightings of humpback whales in recent years. Endangered North Atlantic right whales, as well as fin, minke and sei whales, have also been sighted within and in the vicinity of the bay. A male manatee visited the bay several times between 1994 and 2011, even though the area is north of the species' normal range. The manatee, recognizable due to distinct markings on its body, was nicknamed "Chessie" after a legendary sea monster that was allegedly sighted in the bay during the 20th century. The same manatee has been spotted as far north as Rhode Island, and was the first manatee known to travel so far north. Other manatees are occasionally seen in the bay and its tributaries, which contain sea grasses that are part of the manatee's diet. Loggerhead turtles are known to visit the bay. The Chesapeake Bay is also home to a diverse flora, both land and aquatic. Common submerged aquatic vegetation includes eelgrass and widgeon grass. A report in 2011 suggested that information on underwater grasses would be released, because "submerged grasses provide food and habitat for a number of species, adding oxygen to the water and improving water clarity." Other vegetation that makes its home in other parts of the bay includes wild rice, various trees such as the red maple, loblolly pine and bald cypress, and spartina grass and phragmites. Invasive plants have taken a significant foothold in the bay. Plants such as Phragmites, purple loosestrife and Japanese stiltgrass have established high levels of permanency in Chesapeake wetlands. Additionally, plants such as Brazilian waterweed, native to South America, have spread to most continents with the help of aquarium owners, who often dump the contents of their aquariums into nearby lakes and streams. It is highly invasive and has the potential to flourish in the low-salinity tidal waters of the Chesapeake Bay. Dense stands of Brazilian waterweed can restrict water movement, trap sediment and affect water quality. 
Various local K-12 schools in the Maryland and Virginia region often have programs that cultivate native bay grasses and plant them in the bay. History Pre-Columbian It is presumed that the Chesapeake Bay region was inhabited by Paleoindians 11,000 years ago. For thousands of years, Native American societies lived in villages of wooden longhouses close to water bodies where they fished and farmed the land. Agricultural products included beans, corn, tobacco, and squash. Villages often lasted between 10 and 20 years before being abandoned as local resources such as firewood ran out or the soil became depleted. To produce enough food, labor was divided, with men hunting while the women supervised the village's farming. All village members took part in the harvesting of fish and shellfish from the local bodies of water. As time went on, communities around the Chesapeake Bay formed confederations such as the Powhatan, the Piscataway, and the Nanticoke. Each of these confederations consisted of a collection of smaller tribes falling under the leadership of a central chief. European exploration and settlement In 1524, the Italian explorer Giovanni da Verrazzano (1485–1528), in the service of the French crown (and for whom the entrance to New York Bay, the Verrazzano Narrows, and the 20th-century suspension bridge spanning it are named), sailed past the Chesapeake but did not enter the bay. Spanish explorer Lucas Vásquez de Ayllón sent an expedition out from Hispaniola in 1525 that reached the mouths of the Chesapeake and Delaware Bays. It may have been the first European expedition to explore parts of the Chesapeake Bay, which the Spaniards called "Bahía de Santa María" ("Bay of St. Mary") or "Bahía de Madre de Dios" ("Bay of the Mother of God"). De Ayllón established a short-lived Spanish mission settlement, San Miguel de Gualdape, in 1526 along the Atlantic coast. Many scholars doubt the assertion that it was as far north as the Chesapeake; most place it on Sapelo Island in present-day Georgia. In 1573, Pedro Menéndez de Márquez, the governor of Spanish Florida, conducted further exploration of the Chesapeake. In 1570, Spanish Jesuits established the short-lived Ajacan Mission on one of the Chesapeake tributaries in present-day Virginia. The arrival of English colonists under Sir Walter Raleigh and Humphrey Gilbert in the late 16th century to found a colony, later settled at Roanoke Island (off the present-day coast of North Carolina) for the Virginia Company, marked the first time that the English approached the gates to the Chesapeake Bay between the capes of Cape Charles and Cape Henry. Three decades later, in 1607, Europeans again entered the bay. Captain John Smith of England explored and mapped the bay between 1607 and 1609, resulting in the publication of "A Map of Virginia" back in the British Isles in 1612. Smith wrote in his journal: "Heaven and earth have never agreed better to frame a place for man's habitation." The Captain John Smith Chesapeake National Historic Trail, the first designated "all-water" National Historic Trail in the US, was established in 2006 by the National Park Service. The trail follows the route of Smith's historic 17th-century voyage. Because of economic hardships and civil strife in the "Mother Land", there was a mass migration of southern English Cavaliers and their servants to the Chesapeake Bay region between 1640 and 1675, to both of the new colonies, the Province of Virginia and the Province of Maryland. 
American Revolution to the present The Chesapeake Bay was the site of the Battle of the Chesapeake (also known as the "Battle of the Capes", Cape Charles and Cape Henry) in 1781, during which the French fleet defeated the Royal Navy in the decisive naval battle of the American Revolutionary War. The French victory enabled General George Washington and his French allies under the Comte de Rochambeau to march down from New York and bottle up a British army under Lord Cornwallis, which had come up from the North and South Carolinas, at the siege of Yorktown in Yorktown, Virginia. Their marching route ran from Newport, Rhode Island, through Connecticut, New York State, Pennsylvania, New Jersey and Delaware to the "Head of Elk" by the Susquehanna River, with the armies then moving along the shores and also partially sailing down the bay to Virginia. The route is also the subject of a designated National Historic Trail, the Washington–Rochambeau Revolutionary Route. The bay would again see conflict during the War of 1812. During 1813, from their base on Tangier Island, British naval forces under the command of Admiral George Cockburn raided several towns on the shores of the Chesapeake, treating the bay as if it were a "British Lake". The Chesapeake Bay Flotilla, a fleet of shallow-draft armed barges under the command of U.S. Navy Commodore Joshua Barney, was assembled to stall British shore raids and attacks. After months of harassment by Barney, the British landed on the west side of the Patuxent at Benedict, Maryland, the Chesapeake Flotilla was scuttled, and the British trekked overland to rout the U.S. Army at Bladensburg and burn the U.S. Capitol in August 1814. A few days later, in a "pincer attack", they also sailed up the Potomac River to attack Fort Washington below the National Capital and raided the nearby port town of Alexandria, Virginia. There were so-called "Oyster Wars" in the late 19th and early 20th centuries. Until the mid-20th century, oyster harvesting rivaled the crab industry among Chesapeake watermen, a dwindling breed whose skipjacks and other workboats were supplanted by recreational craft in the latter part of the century. In the 1960s, the Calvert Cliffs Nuclear Power Plant on the historic Calvert Cliffs in Calvert County on the Western Shore of Maryland began using water from the bay to cool its reactor. Navigation The Chesapeake Bay forms a link in the Intracoastal Waterway, the chain of bays, sounds and inlets between the off-shore barrier islands and the coastal mainland along the Atlantic coast, connecting the Chesapeake and Delaware Canal (which links the bay to the Delaware River to the north) with the Albemarle and Chesapeake Canal (which links the bay, to the south, via the Elizabeth River past the cities of Norfolk and Portsmouth, to the Albemarle Sound and Pamlico Sound in North Carolina and further to the Sea Islands of Georgia). A busy shipping channel (dredged by the U.S. Army Corps of Engineers since the 1850s) runs the length of the bay and is an important transit route for large vessels entering or leaving the Port of Baltimore, and, further north through the Chesapeake and Delaware Canal, the ports of Wilmington and Philadelphia on the Delaware River. During the latter half of the 19th century and the first half of the 20th century, the bay was plied by passenger steamships and packet boat lines connecting the various cities on it, notably the Baltimore Steam Packet Company ("Old Bay Line"). In the later 20th century, a series of road crossings was built. 
One, the Chesapeake Bay Bridge (also known as the Governor William Preston Lane Jr. Memorial Bridge) between the state capital of Annapolis, Maryland, and Matapeake on the Eastern Shore, crossing Kent Island, was constructed in 1949–1952. A second, parallel span was added in 1973. The Chesapeake Bay Bridge–Tunnel, connecting Virginia's Eastern Shore with its mainland (at the metropolitan areas of Virginia Beach, Norfolk, Portsmouth, and Chesapeake), is approximately long; it has trestle bridges as well as two stretches of tunnels that allow unimpeded shipping; the bridge is supported by four man-made islands. The Chesapeake Bay Bridge–Tunnel was opened with two lanes in 1964 and widened to four lanes in 1999. Tides Tides in the Chesapeake Bay exhibit an interesting and unique behavior due to the nature of the topography (both horizontal and vertical shape), wind-driven circulation, and how the bay interacts with oceanic tides. Research into the peculiar behavior of tides both at the northern and southern extents of the bay began in the late 1970s. One study noted sea level fluctuations at periods of 5 days, driven by sea level changes at the bay's mouth on the Atlantic coast and local lateral winds, and 2.5 days, caused by resonant oscillations driven by local longitudinal winds, while another study later found that the geometry of the bay permits a resonant period of 1.46 days. A good example of how the different Chesapeake Bay sites experience different tides can be seen in the tidal predictions published by the National Oceanic and Atmospheric Administration (NOAA). At the Chesapeake Bay Bridge–Tunnel (CBBT) site, which lies at the southernmost point of the bay where it meets the Atlantic Ocean near Norfolk, Virginia, and the capes of Charles and Henry, there is a distinct semi-diurnal tide throughout the lunar month, with small amplitude modulations during spring (new/full moon) vs. neap (one/three quarter moon) tidal periods. The main forcing of the CBBT tides is the typical semi-diurnal ocean tide that the East Coast of the United States experiences. Baltimore, in the northern portion of the bay, experiences a noticeable modulation during spring vs. neap tides, giving its tides a mixed character. Spring tides, when the sun-earth-moon system forms a line, cause the largest tidal amplitudes during lunar monthly tidal variations. In contrast, neap tides, when the sun-earth-moon system forms a right angle, are muted, and in a semi-diurnal tidal system (such as that seen at the CBBT site) this can be seen as the smallest tidal range. Two interesting points that arise from comparing these two sites at opposite ends of the bay are their tidal characteristics – semi-diurnal tide for CBBT and mixed tide for Baltimore (due to resonance in the bay) – and the differences in amplitude (due to dissipation in the bay). Economy Fishing industry The bay is well known for its seafood, especially blue crabs, clams, and oysters. In the middle of the 20th century, the bay supported 9,000 full-time watermen, according to one account. Today, the body of water is less productive than it used to be because of runoff from urban areas (mostly on the Western Shore) and farms (especially on the Eastern Shore and in the Susquehanna River watershed), over-harvesting, and invasion of foreign species. 
The plentiful oyster harvests led to the development of the skipjack (such as the Helen Virginia), the state boat of Maryland, which is the only remaining working boat type in the United States still under sail power. Other characteristic bay-area workboats include sail-powered boats such as the log canoe, the pungy, the bugeye, and the motorized Chesapeake Bay deadrise, the state boat of Virginia. In addition to harvesting wild oysters, oyster farming is a growing industry in the bay. Oyster aquaculture is passive in that the bay provides all the natural oyster food needed, making it an environmentally friendly practice in contrast to other kinds of fish farming. Oyster farms provide jobs as well as a natural effort for filtering excess nutrients from the water in an effort to reduce the effects of eutrophication pollution (too much algae). The Chesapeake Bay Program promotes oyster restoration projects to reduce the amount of nitrogen compounds entering the bay. The bay is famous for its rockfish, a regional name for striped bass. Once on the verge of extinction, rockfish have made a significant comeback because of legislative action that put a moratorium on rockfishing, which allowed the species to re-populate. Rockfish can now be fished in strictly controlled and limited quantities. Other popular recreational fisheries in the Chesapeake Bay include shad, cobia, croaker, and redfish, winter flounder, and summer flounder. Recently, non-native blue catfish have proliferated in tributaries like the James River and may be moving to other areas of the bay. A commercial fishery exists for menhaden, too oily for human consumption but instead used for bait, fish oil, and livestock feed. Tourism and recreation The Chesapeake Bay is a main feature for tourists who visit Maryland and Virginia each year. Fishing, crabbing, swimming, boating, kayaking, and sailing are extremely popular activities enjoyed on the waters of the Chesapeake Bay. As a result, tourism has a notable impact on Maryland's economy. One report suggested that Annapolis was an appealing spot for families, water sports and boating. Commentator Terry Smith spoke about the bay's beauty: One account suggested how the Chesapeake attracts people: The Chesapeake Bay plays an extremely important role in Maryland, Virginia, and Pennsylvania's economies, in addition to the ecosystem. The nature-based recreation of wildlife, boating, and ecotourism are dependent on enforcement of the Clean Water Act (CWA), which regulates pollutant discharges and supports related pollution control programs. In 2006, "roughly eight million wildlife watchers spent $636 million, $960 million, and $1.4 billion in Maryland, Virginia, and Pennsylvania" according to the Chesapeake Bay Foundation. Cuisine In colonial times, simple cooking techniques were used to create one pot meals like ham and potato casserole, clam chowder, or stews with common ingredients like oysters, chicken or venison. When John Smith landed in Chesapeake in 1608, he wrote: "The fish were so thick, we attempted to catch them with frying pans". Common regional ingredients in the local cuisine of Chesapeake included terrapins, smoked hams, blue crab, shellfish, local fish, game meats and various species of waterfowl. Blue crab continues to be an especially popular regional specialty. 
Environmental issues Pollution In the 1970s, the Chesapeake Bay was found to contain one of the planet's first identified marine dead zones, where waters were so depleted of oxygen that they were unable to support life, resulting in massive fish kills. In 2010 the bay's dead zones were estimated to kill 75,000 tons of bottom-dwelling clams and worms each year, weakening the base of the estuary's food chain and robbing the blue crab in particular of a primary food source. Crabs are sometimes observed to amass on shore to escape pockets of oxygen-poor water, a behavior known as a "crab jubilee". Hypoxia results in part from large algal blooms, which are nourished by the runoff of residential, farm and industrial waste throughout the watershed. A 2010 report criticized Amish farmers in Pennsylvania for raising cows with inadequate controls on the manure that they generate. Farms in Lancaster County, Pennsylvania generate large quantities of manure that washes into tributaries of the bay. The pollution entering the bay has multiple components that contribute to algal blooms, principally the nutrients phosphorus and nitrogen. The algae prevents sunlight from reaching the bottom of the bay while alive and deoxygenates the bay's water when it dies and rots. Soil erosion and runoff of sediment into the bay, exacerbated by devegetation, construction and the prevalence of pavement in urban and suburban areas, also block vital sunlight. The resulting loss of aquatic vegetation has depleted the habitat for much of the bay's animal life. Beds of eelgrass, the dominant variety in the southern Chesapeake Bay, have shrunk by more than half there since the early 1970s. Overharvesting, pollution, sedimentation and disease have turned much of the bay's bottom into a muddy wasteland. The principal sources of nutrient pollution in the bay are surface runoff from farms, as well as runoff from urban and suburban areas. About half of the nutrient pollutant loads in the bay are generated by manure and poultry litter. Extensive use of lawn fertilizers and air pollution from motor vehicles and power plants are also significant nutrient sources. One particularly harmful source of toxicity is Pfiesteria piscicida, which can affect both fish and humans. Pfiesteria caused a small regional panic in the late 1990s when a series of large blooms started killing large numbers of fish while giving swimmers mysterious rashes; nutrient runoff from chicken farms was blamed for the growth. Depletion of oysters While the bay's salinity is ideal for oysters and the oyster fishery was at one time the bay's most commercially viable, the population has in the last fifty years been devastated. Maryland once had roughly of oyster reefs. In 2008 there were about . It has been estimated that in pre-colonial times, oysters could filter the entirety of the bay in about 3.3 days; by 1988 this time had increased to 325 days. The harvest's gross value decreased 88% from 1982 to 2007. One report suggested the bay had fewer oysters in 2008 than 25 years earlier. The primary problem is overharvesting. Lax government regulations allow anyone with a license to remove oysters from state-owned beds, and although limits are set, they are not strongly enforced. The overharvesting of oysters has made it difficult for them to reproduce, which requires close proximity to one another. A second cause for the oyster depletion is that the drastic increase in human population caused a sharp increase in pollution flowing into the bay. 
The bay's oyster industry has also suffered from two diseases: MSX and Dermo. The depletion of oysters has had a particularly harmful effect on the quality of the bay. Oysters serve as natural water filters, and their decline has further reduced the water quality of the bay. Water that was once clear for meters is now so turbid that a wader may lose sight of his feet while his knees are still dry. Institutional responses to pollution problems Concern about the increasing discoveries of bay pollution problems, and of the institutional challenges of organizing bay restoration programs over a large geographical area, led to Congress directing the US Environmental Protection Agency (EPA) to take a greater role in studying the scientific and technical aspects of the problems beginning in the late 1970s. The agency conducted its research over a seven-year period and published a major report in 1983. The report stated that the bay was an "ecosystem in decline" and cited numerous instances of declines in the populations of oysters, crabs, freshwater fish and other wildlife. The growing concerns about pollution also prompted the legislatures of Maryland and Virginia to establish the Chesapeake Bay Commission, an advisory body, in 1980. The commission consults with the state legislatures and executive agencies, as well as Congress, about environmental, economic and social issues related to the bay. As an initial follow-up to the EPA report, the Chesapeake Bay Commission and EPA developed the Chesapeake Bay Agreement in 1983. The agreement was signed by the governors of Maryland, Virginia and Pennsylvania; the Mayor of the District of Columbia; and the EPA Administrator. The parties agreed to: Creation of an "Executive Council" consisting of cabinet-level appointees from each state and Washington, D.C., and the EPA Regional Administrator The council's creation of an implementation committee to coordinate technical issues and development management plans for bay restoration The establishment of the Chesapeake Bay Program as a liaison office for all of the participating organizations. The program's office, based in Annapolis, is partially funded by EPA and staffed by experts from the member states, EPA and other federal agencies, and academic institutions. Concurrent with the 1983 agreement EPA began providing matching grants to the bay states for research and restoration projects. In 1987 the parties agreed to set a goal of reducing the amount of nutrients that enter the bay by 40 percent by 2000. In 1992, the bay program partners agreed to continue the 40 percent reduction goal beyond 2000 and to attack nutrients at their source: upstream, in the bay tributaries. Restoration efforts Efforts of federal, state and local governments, working in partnership through the Chesapeake Bay Program along with the Chesapeake Bay Foundation and other nonprofit environmental groups, to restore or at least maintain the current water quality, have had mixed results. One particular obstacle to cleaning up the bay is that much of the polluting substances are discharged far upstream in states far removed from the bay: New York and Pennsylvania. Despite the State of Maryland spending over $100 million to restore the bay, conditions have continued to grow worse. In the mid-20th century, the bay supported over 6,000 oystermen. As of 2008, there were fewer than 500. 
In June 2000, the Chesapeake Bay Program adopted Chesapeake 2000, an agreement adopted by the member jurisdictions, intended to guide restoration activities throughout the Chesapeake Bay watershed through 2010. One component of this agreement was a series of upgrades to sewage treatment plants throughout the watershed. In 2016 EPA stated that the upgrades "have resulted in steep reductions in nitrogen and phosphorus pollution... despite increases in human population and wastewater volume." EPA published a series of scientific documents on water quality criteria for the bay between 2004 and 2010. The criteria documents, which describe specific pollutants and their effects on aquatic species, are used by the states to develop water quality standards (WQS) for individual water bodies. Delaware, Maryland, Virginia, and the District of Columbia adopted WQS for various Chesapeake Bay tributaries in the mid-2000s, referencing the EPA criteria documents, as well as their own extensive data gathering and modeling efforts. Restoration efforts that began in the 1990s have continued into the 21st century and show potential for growth of the native oyster population. Efforts to repopulate the bay using oyster hatcheries have been carried out by a group called the Oyster Recovery Partnership, with some success. In 2011 the group placed 6 million oysters on of the Trent Hall sanctuary. Scientists from the Virginia Institute of Marine Science at the College of William & Mary claim that experimental reefs created in 2004 now house 180 million native oysters, Crassostrea virginica, which is far fewer than the billions that once existed. Regulatory actions In 2009 the Chesapeake Bay Foundation (CBF) filed suit against EPA for its failure to finalize a total maximum daily load (TMDL) ruling for the bay, pursuant to the Clean Water Act. The TMDL would restrict water pollution from farms, land development, power plants and sewage treatment plants. EPA, which had been working with the states on various components of the TMDL since the 1980s (e.g. water quality criteria, standards for individual tributaries, improvements in data gathering and modeling techniques), agreed to settle the lawsuit and issued its TMDL for nitrogen, phosphorus and sediment pollution on December 29, 2010. This was the largest, most complex TMDL document that EPA had issued to date. The TMDL was challenged in litigation by the agriculture and construction industries, but EPA's document was upheld by the courts. In 2020 the CBF filed another lawsuit against EPA for its failure to require the states of New York and Pennsylvania to comply with their TMDL goals and reduce pollution in the bay. CBF and EPA reached a settlement on the lawsuit in 2023. EPA agreed to increase its efforts to reduce farm and stormwater runoff pollution in Pennsylvania, including compliance and enforcement actions. EPA's 2010 TMDL document requires all states in the bay watershed region to develop detailed implementation plans for pollutant reduction. The states have been developing their plans for years, in many cases building upon restoration projects that they had initiated before EPA's TMDL was finalized. These plans are long and complex, involving regular consultation with many stakeholders (i.e. governments, industry, agriculture, citizen groups). 
The plans include multiple milestone goals for project initiation or continued progress in water quality, through the use of pollution control upgrades (such as at sewage treatment plants) and more widespread utilization of various best management practices (BMPs). The BMPs are designed for specific sites to control pollution from nonpoint sources, principally agriculture, land development and urban runoff. For example, a farmer may install vegetated stream buffers along a stream bank to reduce runoff of sediment, nutrients and other pollutants. A land developer may install stormwater management facilities such as infiltration basins or constructed wetlands during the construction of housing or commercial buildings. In 2011 both Maryland and Virginia enacted laws to reduce the effects of lawn fertilizer use, by restricting nitrogen and phosphorus content. The Virginia law also banned deicers containing urea, nitrogen or phosphorus. Installation of stormwater management facilities is already a requirement for most new construction projects in the bay region, under various state and local government requirements. These facilities reduce erosion and keep sediment and other pollutants from entering tributaries and the bay. However retrofitting such facilities into existing developed areas is often expensive due to high land costs, or difficult to install among existing structures. As a result, the extent of such retrofit projects in the bay region has been limited. Water quality improvements In 2010 bay health improved slightly in terms of the overall health of its ecosystem, earning a rating of 31 out of 100, up from a 28 rating in 2008. An estimate in 2006 from a "blue ribbon panel" said cleanup costs would be $15 billion. Compounding the problem is that 100,000 new residents move to the area each year. A 2008 Washington Post report suggested that government administrators had overstated progress on cleanup efforts as a way to "preserve the flow of federal and state money to the project." In January 2011, there were reports that millions of fish had died, but officials suggested it was probably the result of extremely cold weather. The health of the Chesapeake Bay improved in 2015, marking three years of gains over a four-year period, according to a 2016 report by the University of Maryland Center for Environmental Science (UMCES). In 2021 scientists at the UMCES reported slight improvements in bay water quality compared to levels measured in 2020. The greatest improvements were seen in the lower bay areas, while the Patapsco River and Back River (Maryland) regions showed minimal improvement. Positive indicators included decreased nitrogen levels and increases in dissolved oxygen. The CBF reported that as of 2022 pollution control efforts in the bay have continued to show mixed results, with no improvement in levels of toxic contaminants, nitrogen and dissolved oxygen, and a small decrease in water clarity compared to 2020 levels (measured as Secchi depth). Oyster and rockfish populations in the bay have improved, but blue crab populations have continued to decline. In the 2023 annual report the University of Maryland Center for Environmental Science rated the Chesapeake Bay's overall health a C-plus or 55%, its highest score since 2002. Climate change The Chesapeake Bay is already experiencing the effects of climate change. Key among these is sea level rise: water levels in the bay have already risen one foot, with a predicted increase of 1.3 to 5.2 feet in the next 100 years. 
This has related environmental effects, causing changes in marine ecosystems, destruction of coastal marshes and wetlands, and intrusion of saltwater into otherwise brackish parts of the bay. Sea level rise also compounds the effects of extreme weather on the bay, making coastal flooding during such events more extreme and increasing runoff from upstream in the watershed. With increases in flooding events and sea level rise, the 11,600 miles of coastline, which include significant historic buildings and modern infrastructure, will be at risk of erosion. Islands such as Holland Island have disappeared due to the rising sea levels. Beyond sea level rise, other changes in the marine ecosystem due to climate change, such as ocean acidification and temperature increases, will put increasing pressure on marine life. Projected effects include decreasing dissolved oxygen, more acidic waters making it harder for shellfish to maintain shells, and changes in the seasonal cycles important for breeding and other lifecycle activities. Seasonal shifts and warmer temperatures also mean that there is a greater likelihood that pathogens will remain active in the ecosystem. Climate change may worsen hypoxia. However, compared to the current effects of nutrient pollution and algal blooms, climate change's effect on hypoxia is relatively small. Warmer waters can hold less dissolved oxygen. Therefore, as the bay warms, there may be a longer duration of hypoxia each summer season in the deep central channel of the bay. However, when the two drivers are compared, reducing nutrient pollution would increase oxygen concentrations far more dramatically than a leveling-off of climate change would. Climate change adaptation and mitigation programs in Maryland and Virginia often include significant measures addressing communities around the Chesapeake Bay. Key infrastructure in Virginia, such as the port of Norfolk, and major agriculture and fishing industries of the Eastern Shore of Maryland will be directly impacted by the changes in the bay. Eutrophication and hypoxia European settlers around the Chesapeake Bay in the late 17th and early 18th centuries brought with them industrial agricultural techniques. Land clearance and deep plowing of farmland increased sediment and nutrient loading into the bay, which continued to increase as the area developed (Cooper & Brush 1991). The sediment record of the Chesapeake Bay shows a major increase in nutrient levels, suggesting limited availability of oxygen, starting between the 17th and 18th centuries (Cooper & Brush 1991; Zimmerman & Canuel 2002). The overloading of nutrients into the bay only continued to increase throughout the modern era. Recently deposited sediments in the Chesapeake Bay contain 2–3 times more organic carbon, and record 4–20 times greater nitrogen and phosphorus inputs into the bay, than those of the pre-colonial era (Cornwell et al. 1996; Fisher et al. 2006). Nutrient runoff from land sources causes a large increase in available nutrients in the water. Algae present in the bay take up those nutrients and rapidly reproduce in algal blooms. As algae sink to the bottom, they are decomposed, consuming oxygen (Long et al. 2014). In the Chesapeake Bay, seasonal stratification typically occurs between spring and early fall (Officer et al. 1984; Cerco & Noel 2007). 
More sunlight, higher temperatures, and fewer storms and weaker winds during the summer produce a strongly stratified water column, with a pycnocline typically 10 meters below the surface (Cerco & Noel 2007; Seliger et al. 1985). Because oxygen consumed by the decomposition of organic matter in bottom waters cannot be replenished across the strong pycnocline, dissolved oxygen levels reach near zero by mid-June and can remain that low until October (Officer et al. 1984). Organisms living in bottom waters may have some tolerance to hypoxia, but when events exceed their tolerance, ecologically and commercially important crabs, oysters, and mesoplankton become unhealthy and may experience die-offs (Kirby & Miller 2005; Officer et al. 1984; Kimmel et al. 2012). Scientific research Researchers work in the Chesapeake Bay to collect information about water quality, plant and animal abundances, shoreline erosion, tides, waves, and harmful algal blooms. For example, the Virginia Institute of Marine Science monitors the abundance of submerged aquatic vegetation in the shallow areas of the Chesapeake Bay each summer. Many organizations run continuous monitoring programs. Monitoring programs set out instruments at fixed stations on buoys, moorings, and docks throughout the bay to record variables such as temperature, salinity, chlorophyll-a concentration, dissolved oxygen, and turbidity over time. Organizations actively collecting data in the Chesapeake Bay include, but are not limited to: Chesapeake Bay National Estuarine Research Reserve in Maryland Chesapeake Bay National Estuarine Research Reserve in Virginia Chesapeake Bay Program Hampton Roads Sanitation District Maryland Department of Natural Resources NASA GSFC Ocean Biology group NASA GSFC Applied Sciences group (Water Resources and Human Health areas) NOAA Chesapeake Bay Office Old Dominion University's Earth and Ocean Sciences Department Smithsonian Environmental Research Center United States Geological Survey University of Maryland Center for Environmental Science Virginia Department of Environmental Quality Virginia Department of Health Virginia Institute of Marine Science Virginia Marine Resources Commission Underwater archaeology Underwater archaeology is a subfield of archaeology that focuses on the exploration of submerged archaeological sites in seas, rivers, and other bodies of water. In 1988, the Maryland Maritime Archeology Program (MMAP) was established with the goal of managing and exploring the various underwater archaeological sites that line the Chesapeake Bay. This was in response to the Abandoned Shipwreck Act passed in 1987, which gave ownership of historically significant shipwrecks to those states with proper management programs. Water makes up 25% of the State of Maryland, and over 550 submerged archaeological sites have been located across the Chesapeake Bay and its surrounding watersheds. Ranging from 12,000-year-old, precolonial native settlements to shipwrecks from as recent as World War II, the MMAP researches thousands of years' worth of history in these archaeological sites. Susan Langley has been Maryland's State Underwater Archaeologist, one of only nine state-appointed underwater archaeologists in the United States, since assuming the role in 1995. Before Langley was hired, only 1% of the underwater archaeological sites in the bay area had been examined. 
Over the next 10 years, Langley made significant improvements to the MMAP's marine technology, allowing her and her team to explore 34% of the underwater archaeological sites by 2004. Location and research processes The Chesapeake Bay watershed has been heavily impacted by natural forces such as erosion, tides, and a history of hurricanes and other storms. Along with environmental factors, the bay has been negatively impacted by humans since settlement in the 17th century, which brought problems such as pollution, construction, environmental destruction, and, more recently, poultry farms. All of these circumstances have made it increasingly difficult for the MMAP to identify potential underwater archaeological sites. As sea levels rise and historically significant areas are submerged and covered in sediment, the MMAP relies on various pieces of equipment to locate these man-made anomalies while also ensuring that the material being examined is kept intact. Using marine magnetometers (which detect iron or its absence) and side-scan sonar (which detects objects on the sea floor), along with precise global positioning systems, Langley and the MMAP have been much more successful in locating submerged archaeological sites. After locating a site, Langley and her team follow a strict process in order to preserve the site and its contents, allowing more accurate and thorough research to be conducted. Because the remains at nearly every site have been submerged in saltwater, sometimes for centuries, the shipwrecks and other materials are fragile, and careful precautions must be taken when working with them. Taking photos and videos, creating maps, and constructing models are all a part of the process of preserving remains. Susan Langley herself notes, "If you have only ten percent of a ship’s hull, you can reconstruct the ship. Construction techniques can tell us about the people who built the vessels, artifacts can tell us about the people who profited from the ship’s trade, and eco-facts—evidence of insect infestation and organic remains, such as seeds, that are preserved in anaerobic, muddy environments—can tell us about the climate and season when a ship sank." Still, the MMAP makes it a point to publish its data and information once a site is officially identified; however, the details of the location are left out to deter would-be looters, who have plagued marine archaeologists for decades. Significant sites Altogether, more than 1,800 ship and boat wrecks are scattered across the bottom of the Chesapeake Bay and its surrounding waterways. Dozens of precolonial-era canoes and artifacts have been extracted from the bay, helping to portray a better picture of the lives of Native Americans (e.g., Powhatan, Pamunkey, Nansemond). In 1974, scallop fishermen dredged up the skull of a prehistoric mastodon, which through carbon dating was found to be 22,000 years old. Along with the skull, a carved blade was also discovered in the same area. Unable to accurately carbon date the stone tool, archaeologists looked at similar styles of blade carving in order to gauge when it was made. The technique was similar to that of Solutrean tools crafted in Europe between 22,000 and 17,000 years ago, and it was concluded that the stone tool must be at least 14,000 years old. The Solutrean hypothesis challenges the prevailing theory of the first inhabitants of North America, under which anthropologists commonly accept that the Clovis people were the first to settle the region, around 13,000 years ago. 
There is some controversy surrounding these findings; many anthropologists have disputed them, claiming that the environment and setting make properly identifying the origins of these artifacts nearly impossible. The Chesapeake Bay Flotilla was constructed from shallow-draft barges and ships to blockade the British during the War of 1812. After the flotilla held strong for some months, the British eventually dispersed it, and dozens of its vessels were burnt and sunk. Starting in 1978, numerous expeditions were launched in hopes of discovering what was left of the Chesapeake Bay Flotilla. Since then, hundreds of artifacts have been extracted from the submerged ships, including weapons, personal items, and many other objects. Underwater archaeologists have also been successful in constructing accurate models and maps of the wreckage on the sea floor. In October 1774, a British merchant ship arrived at the port of Annapolis loaded with tea disguised as linens and garments. The tea was hidden by the British to avoid conflict with the colonists, as the recently imposed tea tax had created hostility and uncertainty among the colonies. The vessel, named Peggy Stewart, attempted to collect the tax on the purchased tea from the colonists. The colonists refused to pay the tax, and after a few days of public meetings they decided to burn Peggy Stewart and its contents. The British ship was sunk in what became known as the 'Annapolis Tea Party' and has since become an important site for underwater archaeologists in the Chesapeake Bay. In 1949, after the Nazis' defeat in World War II, the United States seized the German submarine U-1105, which had been built with sonar-evading rubber sheathing, for study purposes. It was sunk the same year in the Potomac River off the Chesapeake Bay following a high-explosives test conducted by the U.S. Navy, and has since been a popular site for underwater archaeologists. Maryland has conducted the majority of underwater archaeology research around the Chesapeake Bay; however, Virginia's Department of Historic Resources has had a State Underwater Archaeologist since the 1970s. In 1982, the Virginia Department of Historic Resources, along with its first State Underwater Archaeologist, John Broadwater, led an expedition to explore and research a sunken fleet of Revolution-era warships. In September 1781, during the Revolutionary War, the British intentionally sank more than a dozen ships in the York River, near the mouth of the Chesapeake Bay. A fleet of British ships led by Lord Charles Cornwallis had been pushed back towards the rivers of the Chesapeake; in a desperate attempt to avoid surrendering, Cornwallis began burning and sinking his own vessels in the hope of stalling the incoming French and American ships. Cornwallis was eventually forced to surrender on October 19, and the ships, along with their contents, were left at the bottom of the York River. One of the British ships, called Betsy, has been explored more than any other; over 5,000 artifacts were removed from Betsy during the original expedition in 1982, including weapons, personal objects, and some valuable metals. Broadwater and his team's findings were the subject of a 20-page article in National Geographic magazine. Virginia has recently been granted funding for further research on these sunken vessels, and expeditions are currently underway with the goal of fully exploring the destroyed British fleet. 
Following the publicity surrounding these sunken ships, however, many divers have taken it upon themselves to explore the wreckage for 'treasure'. Publications There are several magazines and publications that cover topics directly related to the Chesapeake Bay and life and tourism within the bay region: The Bay Journal provides environmental news for the Chesapeake Bay watershed region. Bay Weekly is the Chesapeake Bay region's independent newspaper. The Capital, a newspaper based in Annapolis, reports news pertaining to the Western Shore of Maryland and the Annapolis area. Chesapeake Bay Magazine and PropTalk focus on powerboating in the bay, and SpinSheet focuses on sailing. What's Up Magazine is a free monthly publication with special issues focused on Annapolis and the Eastern Shore. Cultural depictions In literature Beautiful Swimmers: Watermen, Crabs and the Chesapeake Bay (1976), a Pulitzer Prize-winning non-fiction book by William W. Warner about the Chesapeake Bay, blue crabs, and watermen. Chesapeake (1978), a novel by author James A. Michener. Chesapeake Requiem: A Year with the Watermen of Vanishing Tangier Island (2018), by Earl Swift, a New York Times bestselling nonfiction book about the crabbing community of Chesapeake Bay. Dicey's Song (1983) and the rest of Cynthia Voigt's Tillerman series are set in Crisfield on the Chesapeake Bay. Jacob Have I Loved (1980) by Katherine Paterson, winner of the 1981 Newbery Medal. This is a novel about the relationship between two sisters in a waterman family who grow up on an island in the bay. Patriot Games (1987), in which protagonist Jack Ryan lives on the fictional Peregrine Cliffs overlooking the Chesapeake Bay, and Without Remorse (1993), in which protagonist John Kelly (later known as John Clark when he goes to work for the CIA) lives on a boat and an island in the bay, both by Tom Clancy. Red Kayak (2004) by Priscilla Cummings portrays class conflict between watermen and wealthy newcomers. Sabbatical: A Romance (1982), centered on a yacht race through the bay, and The Tidewater Tales (1987), detailing a married couple telling stories to each other as they cruise the bay, both novels by John Barth. The Oyster Wars of Chesapeake Bay (1997) by John Wennersten, on the Oyster Wars in the decades following the Civil War. In film The Bay, a 2012 found footage-style eco-horror movie about an outbreak caused by deadly pollution from chicken factory farm run-off and by mutant isopods and aquatic parasites able to infect humans. Expedition Chesapeake, A Journey of Discovery, a 2019 film starring Jeff Corwin, created by the Whitaker Center for Science and the Arts. In TV In Chesapeake Shores, the O'Brien family lives in a small town on the bay, not far from Baltimore. In MeatEater by Steven Rinella, Season 8, Episodes 3–4 ("Ghosts of the Chesapeake") feature the Chesapeake Bay's eastern shore. Other media Singer and songwriter Tom Wisner recorded several albums, often about the Chesapeake Bay. The Boston Globe wrote that Wisner "always tried to capture the voice of the water and the sky, of the rocks and the trees, of the fish and the birds, of the gods of nature he believed still watched over it all." He was known as the "Bard of the Chesapeake Bay." The Chesapeake Bay is referenced in the hit musical Hamilton, in the song "Yorktown (The World Turned Upside Down)." It describes the famous Battle of Yorktown, the last major battle of the Revolutionary War. 
When describing the US army's plan of attack, Hamilton sings: "When we finally drive the British away, Lafayette is there waiting in Chesapeake Bay!" See also Chesapeake Bay Interpretive Buoy System Chesapeake Bay Retriever Chesapeake Climate Action Network Chesepian Chessie (sea monster) Coastal and Estuarine Research Federation Great Ireland List of islands in Maryland (with the islands in the bay) National Estuarine Research Reserve Old Bay Seasoning Notes References Further reading Cleaves, E.T. et al. (2006). Quaternary geologic map of the Chesapeake Bay 4º x 6º quadrangle, United States [Miscellaneous Investigations Series; Map I-1420 (NJ-18)]. Reston, VA: U.S. Department of the Interior, U.S. Geological Survey. Crawford, S. (2012). Terrapin Bay Fishing. Chesapeake Bay Tides and Currents. Meyers, Debra and Perreault, Melanie (eds.) (2014). Order and Civility in the Early Modern Chesapeake. Lanham, MD: Rowman and Littlefield. Phillips, S.W., ed. (2007). Synthesis of U.S. Geological Survey science for the Chesapeake Bay ecosystem and implications for environmental management [U.S. Geological Survey Circular 1316]. Reston, VA: U.S. Department of the Interior, U.S. Geological Survey. Thomas, William G., III. "The Chesapeake Bay." Southern Spaces, April 16, 2004. Warner, William W. (1976). Beautiful Swimmers, about the history, ecology and anthropology of the Chesapeake Bay. Cerco, C. F., Noel, M. R. (2007). Can oyster restoration reverse cultural eutrophication in Chesapeake Bay? Estuaries and Coasts, 30(2): 331–343. Cooper, S. R., Brush, G. S. (1991). Long-term history of Chesapeake Bay anoxia. Science, 254(5034): 992–996. Cornwell, J. C., Conley, D. J., Owens, M., Stevenson, J. C. (1996). A sediment chronology of the eutrophication of Chesapeake Bay. Estuaries, 19(2B): 486–499. Fisher, T. R., Hagy III, J. D., Boynton, W. R., Williams, M. R. (2006). Cultural eutrophication of the Choptank and Patuxent estuaries of Chesapeake Bay. Limnology and Oceanography, 51(1, part 2): 435–447. Kimmel, D. G., Boynton, W. R., Roman, M. R. (2012). Long-term decline in the calanoid copepod Acartia tonsa in central Chesapeake Bay, USA: An indirect effect of eutrophication? Estuarine, Coastal, and Shelf Science, 101: 76–85. Kirby, M. X., Miller, H. M. (2005). Response of a benthic suspension feeder (Crassostrea virginica Gmelin) to three centuries of anthropogenic eutrophication in Chesapeake Bay. Estuarine, Coastal, and Shelf Science, 62: 679–689. Long, W. C., Seitz, R. D., Brylawski, B. J., Lipcius, R. N. (2014). Individual, population, and ecosystem effects of hypoxia on a dominant benthic bivalve in Chesapeake Bay. Ecological Monographs, 84(2): 303–327. Officer, C. B., Biggs, R. B., Taft, J. L., Cronin, L. E., Tyler, M. A., Boynton, W. R. (1984). Chesapeake Bay anoxia: origin, development, and significance. Science, 223(4631): 22–27. Seliger, H. H., Boggs, J. A., Biggley, W. H. (1985). Catastrophic anoxia in the Chesapeake Bay in 1984. Science, 228(4695): 70–73. Sturdivant, S. K., Diaz, R., Llanso, R., Dauer, D. (2014). Relationship between hypoxia and macrobenthic production in Chesapeake Bay. Estuaries and Coasts, 37(5). doi:10.1007/s12237-013-9763-4. Zimmerman, A. R., Canuel, E. A. (2002). Sediment geochemical records of eutrophication in the mesohaline Chesapeake Bay. Limnology and Oceanography, 47(4): 1084–1093. External links Chesapeake Bay History & Culture, U.S. 
National Park Service Chesapeake Bay Program University of Maryland Center for Environmental Science Research and science application activities emphasizing Chesapeake Bay and its watershed. Maryland Department of Natural Resources Eyes on the Bay Real-time and historical Chesapeake Bay water quality and satellite data. Environment of the Mid-Atlantic states Estuaries of Maryland Estuaries of Virginia Intracoastal Waterway Marine ecoregions Ramsar sites in the United States Estuaries of the United States Eutrophication
Chesapeake Bay
[ "Chemistry", "Environmental_science" ]
12,141
[ "Eutrophication", "Environmental chemistry", "Water pollution" ]
59,497
https://en.wikipedia.org/wiki/Solubility
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property depends on many other variables, such as the physical form of the two substances and the manner and intensity of mixing. The concept and measure of solubility are extremely important in many sciences besides chemistry, such as geology, biology, physics, and oceanography, as well as in engineering, medicine, agriculture, and even in non-technical activities like painting, cleaning, cooking, and brewing. Most chemical reactions of scientific, industrial, or practical interest only happen after the reagents have been dissolved in a suitable solvent. Water is by far the most common such solvent. The term "soluble" is sometimes used for materials that can form colloidal suspensions of very fine solid particles in a liquid. The quantitative solubility of such substances is generally not well-defined, however. Quantification of solubility The solubility of a specific solute in a specific solvent is generally expressed as the concentration of a saturated solution of the two. Any of the several ways of expressing concentration of solutions can be used, such as the mass, volume, or amount in moles of the solute for a specific mass, volume, or mole amount of the solvent or of the solution. Per quantity of solvent In particular, chemical handbooks often express the solubility as grams of solute per 100 millilitres of solvent (g/(100 mL), often written as g/100 ml), or as grams of solute per decilitre of solvent (g/dL); or, less commonly, as grams of solute per litre of solvent (g/L). 
The quantity of solvent can instead be expressed in mass, as grams of solute per 100 grams of solvent (g/(100 g), often written as g/100 g), or as grams of solute per kilogram of solvent (g/kg). The number may be expressed as a percentage in this case, and the abbreviation "w/w" may be used to indicate "weight per weight". (The values in g/L and g/kg are similar for water, but that may not be the case for other solvents.) Alternatively, the solubility of a solute can be expressed in moles instead of mass. For example, if the quantity of solvent is given in kilograms, the value is the molality of the solution (mol/kg). Per quantity of solution The solubility of a substance in a liquid may also be expressed as the quantity of solute per quantity of solution, rather than of solvent. For example, following the common practice in titration, it may be expressed as moles of solute per litre of solution (mol/L), the molarity of the latter. In more specialized contexts the solubility may be given by the mole fraction (moles of solute per total moles of solute plus solvent) or by the mass fraction at equilibrium (mass of solute per mass of solute plus solvent). Both are dimensionless numbers between 0 and 1 which may be expressed as percentages (%). Liquid and gaseous solutes For solutions of liquids or gases in liquids, the quantities of both substances may be given volume rather than mass or mole amount; such as litre of solute per litre of solvent, or litre of solute per litre of solution. The value may be given as a percentage, and the abbreviation "v/v" for "volume per volume" may be used to indicate this choice. Conversion of solubility values Conversion between these various ways of measuring solubility may not be trivial, since it may require knowing the density of the solution — which is often not measured, and cannot be predicted. While the total mass is conserved by dissolution, the final volume may be different from both the volume of the solvent and the sum of the two volumes. Moreover, many solids (such as acids and salts) will dissociate in non-trivial ways when dissolved; conversely, the solvent may form coordination complexes with the molecules or ions of the solute. In those cases, the sum of the moles of molecules of solute and solvent is not really the total moles of independent particles solution. To sidestep that problem, the solubility per mole of solution is usually computed and quoted as if the solute does not dissociate or form complexes—that is, by pretending that the mole amount of solution is the sum of the mole amounts of the two substances. Qualifiers used to describe extent of solubility The extent of solubility ranges widely, from infinitely soluble (without limit, i.e. miscible) such as ethanol in water, to essentially insoluble, such as titanium dioxide in water. A number of other descriptive terms are also used to qualify the extent of solubility for a given application. For example, U.S. Pharmacopoeia gives the following terms, according to the mass msv of solvent required to dissolve one unit of mass msu of solute: (The solubilities of the examples are approximate, for water at 20–25 °C.) The thresholds to describe something as insoluble, or similar terms, may depend on the application. For example, one source states that substances are described as "insoluble" when their solubility is less than 0.1 g per 100 mL of solvent. 
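To make the unit conversions described above concrete, the following minimal Python sketch (an illustration added here, not part of the original article) converts a solubility quoted as grams of solute per 100 g of solvent into molality and mass fraction, and, if the solution density is known, into molarity; the sodium chloride figures in the example are approximate assumed values.

```python
def solubility_conversions(g_per_100g_solvent, molar_mass_g_mol, solution_density_g_ml=None):
    """Convert a solubility given as g solute / 100 g solvent into other common measures."""
    grams_solute = g_per_100g_solvent          # per 100 g of solvent
    grams_solvent = 100.0
    moles_solute = grams_solute / molar_mass_g_mol

    # Molality: moles of solute per kilogram of solvent
    molality = moles_solute / (grams_solvent / 1000.0)

    # Mass fraction: mass of solute per total mass of solution
    mass_fraction = grams_solute / (grams_solute + grams_solvent)

    result = {"molality_mol_per_kg": molality, "mass_fraction": mass_fraction}

    # Molarity needs the solution density, which must be measured, not predicted
    if solution_density_g_ml is not None:
        solution_volume_l = (grams_solute + grams_solvent) / solution_density_g_ml / 1000.0
        result["molarity_mol_per_l"] = moles_solute / solution_volume_l
    return result


# Example: sodium chloride, roughly 36 g per 100 g of water near room temperature,
# with an assumed saturated-solution density of about 1.2 g/mL.
print(solubility_conversions(36.0, 58.44, solution_density_g_ml=1.2))
```

Note that the molarity branch makes explicit the point above: the conversion cannot be completed without a measured solution density.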
Molecular view Solubility occurs under dynamic equilibrium, which means that solubility results from the simultaneous and opposing processes of dissolution and phase joining (e.g. precipitation of solids). A stable state of the solubility equilibrium occurs when the rates of dissolution and re-joining are equal, meaning the relative amounts of dissolved and non-dissolved materials are equal. If the solvent is removed, all of the substance that had dissolved is recovered. The term solubility is also used in some fields where the solute is altered by solvolysis. For example, many metals and their oxides are said to be "soluble in hydrochloric acid", although in fact the aqueous acid irreversibly degrades the solid to give soluble products. Most ionic solids dissociate when dissolved in polar solvents. In those cases where the solute is not recovered upon evaporation of the solvent, the process is referred to as solvolysis. The thermodynamic concept of solubility does not apply straightforwardly to solvolysis. When a solute dissolves, it may form several species in the solution. For example, an aqueous solution of cobalt(II) chloride can afford , each of which interconverts. Factors affecting solubility Solubility is defined for specific phases. For example, the solubility of aragonite and calcite in water are expected to differ, even though they are both polymorphs of calcium carbonate and have the same chemical formula. The solubility of one substance in another is determined by the balance of intermolecular forces between the solvent and solute, and the entropy change that accompanies the solvation. Factors such as temperature and pressure will alter this balance, thus changing the solubility. Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions (ligands) in liquids. Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. To a lesser extent, solubility will depend on the ionic strength of solutions. The last two effects can be quantified using the equation for solubility equilibrium. For a solid that dissolves in a redox reaction, solubility is expected to depend on the potential (within the range of potentials under which the solid remains the thermodynamically stable phase). For example, solubility of gold in high-temperature water is observed to be almost an order of magnitude higher (i.e. about ten times higher) when the redox potential is controlled using a highly oxidizing Fe3O4-Fe2O3 redox buffer than with a moderately oxidizing Ni-NiO buffer. Solubility (metastable, at concentrations approaching saturation) also depends on the physical size of the crystal or droplet of solute (or, strictly speaking, on the specific surface area or molar surface area of the solute). For quantification, see the equation in the article on solubility equilibrium. For highly defective crystals, solubility may increase with the increasing degree of disorder. Both of these effects occur because of the dependence of solubility constant on the Gibbs energy of the crystal. The last two effects, although often difficult to measure, are of practical importance. For example, they provide the driving force for precipitate aging (the crystal size spontaneously increasing with time). Temperature The solubility of a given solute in a given solvent is function of temperature. 
Depending on the change in enthalpy (ΔH) of the dissolution reaction, i.e., on the endothermic (ΔH > 0) or exothermic (ΔH < 0) character of the dissolution reaction, the solubility of a given compound may increase or decrease with temperature. The van 't Hoff equation relates the change of solubility equilibrium constant (Ksp) to temperature change and to reaction enthalpy change. For most solids and liquids, their solubility increases with temperature because their dissolution reaction is endothermic (ΔH > 0). In liquid water at high temperatures (e.g., approaching the critical temperature), the solubility of ionic solutes tends to decrease due to the change of properties and structure of liquid water; the lower dielectric constant results in a less polar solvent and in a change of hydration energy affecting the ΔG of the dissolution reaction. Gaseous solutes exhibit more complex behavior with temperature. As the temperature is raised, gases usually become less soluble in water (exothermic dissolution reaction related to their hydration) (to a minimum, which is below 120 °C for most permanent gases), but more soluble in organic solvents (endothermic dissolution reaction related to their solvation). The chart shows solubility curves for some typical solid inorganic salts in liquid water (temperature is in degrees Celsius, i.e. kelvins minus 273.15). Many salts behave like barium nitrate and disodium hydrogen arsenate, and show a large increase in solubility with temperature (ΔH > 0). Some solutes (e.g. sodium chloride in water) exhibit solubility that is fairly independent of temperature (ΔH ≈ 0). A few, such as calcium sulfate (gypsum) and cerium(III) sulfate, become less soluble in water as temperature increases (ΔH < 0). This is also the case for calcium hydroxide (portlandite), whose solubility at 70 °C is about half of its value at 25 °C. The dissolution of calcium hydroxide in water is also an exothermic process (ΔH < 0). As dictated by the van 't Hoff equation and Le Chatelier's principle, low temperatures favor dissolution of Ca(OH)2. Portlandite solubility increases at low temperature. This temperature dependence is sometimes referred to as "retrograde" or "inverse" solubility. Occasionally, a more complex pattern is observed, as with sodium sulfate, where the less soluble decahydrate crystal (mirabilite) loses water of crystallization at 32 °C to form a more soluble anhydrous phase (thenardite) with a smaller change in Gibbs free energy (ΔG) in the dissolution reaction. The solubility of organic compounds nearly always increases with temperature. The technique of recrystallization, used for purification of solids, depends on a solute's different solubilities in hot and cold solvent. A few exceptions exist, such as certain cyclodextrins. Pressure For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as:

\left( \frac{\partial \ln x_i}{\partial P} \right)_T = -\frac{V_{i,\mathrm{aq}} - V_{i,\mathrm{cr}}}{RT}

where the index i iterates the components, x_i is the mole fraction of the i-th component in the solution, P is the pressure, the index T refers to constant temperature, V_{i,aq} is the partial molar volume of the i-th component in the solution, V_{i,cr} is the partial molar volume of the i-th component in the dissolving solid, and R is the universal gas constant. 
For example, precipitation fouling of oil fields and wells by calcium sulfate (whose solubility decreases with decreasing pressure) can result in decreased productivity with time. Solubility of gases Henry's law is used to quantify the solubility of gases in solvents. The solubility of a gas in a solvent is directly proportional to the partial pressure of that gas above the solvent. This relationship is similar to Raoult's law and can be written as:

p = k_{\mathrm{H}}\,c

where k_H is a temperature-dependent constant (for example, 769.2 L·atm/mol for dioxygen (O2) in water at 298 K), p is the partial pressure (in atm), and c is the concentration of the dissolved gas in the liquid (in mol/L). The solubility of gases is sometimes also quantified using the Bunsen solubility coefficient. In the presence of small bubbles, the solubility of the gas does not depend on the bubble radius in any other way than through the effect of the radius on pressure (i.e. the solubility of gas in the liquid in contact with small bubbles is increased due to pressure increase by Δp = 2γ/r; see Young–Laplace equation). Henry's law is valid for gases that do not undergo change of chemical speciation on dissolution. Sieverts' law shows a case when this assumption does not hold. The carbon dioxide solubility in seawater is also affected by temperature, pH of the solution, and by the carbonate buffer. The decrease of solubility of carbon dioxide in seawater when temperature increases is also an important positive feedback exacerbating past and future climate changes, as observed in ice cores from the Vostok site in Antarctica. At the geological time scale, because of the Milankovitch cycles, when the astronomical parameters of the Earth's orbit and its rotation axis progressively change and modify the solar irradiance at the Earth's surface, temperature starts to increase. When a deglaciation period is initiated, the progressive warming of the oceans releases CO2 into the atmosphere because of its lower solubility in warmer sea water. In turn, higher levels of CO2 in the atmosphere increase the greenhouse effect and carbon dioxide acts as an amplifier of the general warming. Polarity A popular aphorism used for predicting solubility is "like dissolves like", also expressed in Latin as "Similia similibus solventur". This statement indicates that a solute will dissolve best in a solvent that has a similar chemical structure to itself, based on favorable entropy of mixing. This view is simplistic, but it is a useful rule of thumb. The overall solvation capacity of a solvent depends primarily on its polarity. For example, a very polar (hydrophilic) solute such as urea is very soluble in highly polar water, less soluble in fairly polar methanol, and practically insoluble in non-polar solvents such as benzene. In contrast, a non-polar or lipophilic solute such as naphthalene is insoluble in water, fairly soluble in methanol, and highly soluble in non-polar benzene. In even simpler terms, a simple ionic compound (with positive and negative ions) such as sodium chloride (common salt) is easily soluble in a highly polar solvent (with some separation of positive (δ+) and negative (δ-) charges in the covalent molecule) such as water; thus the sea is salty because it has accumulated dissolved salts since early geological ages. The solubility is favored by entropy of mixing (ΔS) and depends on enthalpy of dissolution (ΔH) and the hydrophobic effect. 
The free energy of dissolution (Gibbs energy) depends on temperature and is given by the relationship: ΔG = ΔH – TΔS. Smaller ΔG means greater solubility. Chemists often exploit differences in solubilities to separate and purify compounds from reaction mixtures, using the technique of liquid-liquid extraction. This applies in vast areas of chemistry from drug synthesis to spent nuclear fuel reprocessing. Rate of dissolution Dissolution is not an instantaneous process. The rate of solubilization (in kg/s) is related to the solubility product and the surface area of the material. The speed at which a solid dissolves may depend on its crystallinity (or lack thereof, in the case of amorphous solids), its surface area (crystallite size), and the presence of polymorphism. Many practical systems illustrate this effect, for example in designing methods for controlled drug delivery. In some cases, solubility equilibria can take a long time to establish (hours, days, months, or many years, depending on the nature of the solute and other factors). The rate of dissolution can often be expressed by the Noyes–Whitney equation or the Nernst and Brunner equation of the form:

\frac{dm}{dt} = A \, \frac{D}{d} \left( C_{\mathrm{s}} - C_{\mathrm{b}} \right)

where:
m = mass of dissolved material
t = time
A = surface area of the interface between the dissolving substance and the solvent
D = diffusion coefficient
d = thickness of the boundary layer of the solvent at the surface of the dissolving substance
C_s = mass concentration of the substance on the surface
C_b = mass concentration of the substance in the bulk of the solvent
For dissolution limited by diffusion (or mass transfer if mixing is present), C_s is equal to the solubility of the substance. When the dissolution rate of a pure substance is normalized to the surface area of the solid (which usually changes with time during the dissolution process), then it is expressed in kg/(m²·s) and referred to as the "intrinsic dissolution rate". The intrinsic dissolution rate is defined by the United States Pharmacopeia. Dissolution rates vary by orders of magnitude between different systems. Typically, very low dissolution rates parallel low solubilities, and substances with high solubilities exhibit high dissolution rates, as suggested by the Noyes–Whitney equation. Theories of solubility Solubility product Solubility constants are used to describe saturated solutions of ionic compounds of relatively low solubility (see solubility equilibrium). The solubility constant is a special case of an equilibrium constant. Since it is a product of ion concentrations in equilibrium, it is also known as the solubility product. It describes the balance between dissolved ions from the salt and undissolved salt. The solubility constant is also "applicable" (i.e. useful) to precipitation, the reverse of the dissolving reaction. As with other equilibrium constants, temperature can affect the numerical value of the solubility constant. While the solubility constant is not as simple as solubility, the value of this constant is generally independent of the presence of other species in the solvent. Other theories The Flory–Huggins solution theory is a theoretical model describing the solubility of polymers. The Hansen solubility parameters and the Hildebrand solubility parameters are empirical methods for the prediction of solubility. It is also possible to predict solubility from other physical constants such as the enthalpy of fusion. 
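As a rough numerical illustration of the Noyes–Whitney relation given above, the following Python sketch integrates the rate equation for a fixed interfacial area; every parameter value is an assumption chosen only for demonstration, not data from the article.

```python
# Illustrative only: forward-Euler integration of the Noyes-Whitney dissolution rate
# dm/dt = A * (D / d) * (Cs - Cb), with all parameter values assumed for demonstration.

A  = 1.0e-4   # interfacial surface area, m^2 (assumed constant here)
D  = 1.0e-9   # diffusion coefficient, m^2/s
d  = 30.0e-6  # boundary-layer thickness, m
Cs = 10.0     # concentration at the solid surface (the solubility), kg/m^3
V  = 1.0e-3   # volume of solvent, m^3 (1 litre)

m_dissolved = 0.0   # kg dissolved so far
dt = 1.0            # time step, s

for step in range(3600):            # simulate one hour
    Cb = m_dissolved / V            # bulk concentration, kg/m^3
    rate = A * (D / d) * (Cs - Cb)  # kg/s; slows as the bulk approaches saturation
    m_dissolved += rate * dt

print(f"dissolved after 1 h: {m_dissolved * 1e6:.1f} mg")
```

The driving force (Cs − Cb) shrinks as the bulk concentration rises, which is why dissolution slows as a solution approaches saturation.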
The octanol-water partition coefficient, usually expressed as its logarithm (Log P), is a measure of differential solubility of a compound in a hydrophobic solvent (1-octanol) and a hydrophilic solvent (water). The logarithm of these two values enables compounds to be ranked in terms of hydrophilicity (or hydrophobicity). The energy change associated with dissolving is usually given per mole of solute as the enthalpy of solution. Applications Solubility is of fundamental importance in a large number of scientific disciplines and practical applications, ranging from ore processing and nuclear reprocessing to the use of medicines, and the transport of pollutants. Solubility is often said to be one of the "characteristic properties of a substance", which means that solubility is commonly used to describe the substance, to indicate a substance's polarity, to help to distinguish it from other substances, and as a guide to applications of the substance. For example, indigo is described as "insoluble in water, alcohol, or ether but soluble in chloroform, nitrobenzene, or concentrated sulfuric acid". Solubility of a substance is useful when separating mixtures. For example, a mixture of salt (sodium chloride) and silica may be separated by dissolving the salt in water, and filtering off the undissolved silica. The synthesis of chemical compounds, by the milligram in a laboratory, or by the ton in industry, both make use of the relative solubilities of the desired product, as well as unreacted starting materials, byproducts, and side products to achieve separation. Another example of this is the synthesis of benzoic acid from phenylmagnesium bromide and dry ice. Benzoic acid is more soluble in an organic solvent such as dichloromethane or diethyl ether, and when shaken with this organic solvent in a separatory funnel, will preferentially dissolve in the organic layer. The other reaction products, including the magnesium bromide, will remain in the aqueous layer, clearly showing that separation based on solubility is achieved. This process, known as liquid–liquid extraction, is an important technique in synthetic chemistry. Recycling is used to ensure maximum extraction. Differential solubility In flowing systems, differences in solubility often determine the dissolution-precipitation driven transport of species. This happens when different parts of the system experience different conditions. Even slightly different conditions can result in significant effects, given sufficient time. For example, relatively low solubility compounds are found to be soluble in more extreme environments, resulting in geochemical and geological effects of the activity of hydrothermal fluids in the Earth's crust. These are often the source of high quality economic mineral deposits and precious or semi-precious gems. In the same way, compounds with low solubility will dissolve over extended time (geological time), resulting in significant effects such as extensive cave systems or Karstic land surfaces. Solubility of ionic compounds in water Some ionic compounds (salts) dissolve in water, which arises because of the attraction between positive and negative charges (see: solvation). For example, the salt's positive ions (e.g. Ag+) attract the partially negative oxygen atom in . Likewise, the salt's negative ions (e.g. Cl−) attract the partially positive hydrogens in . Note: the oxygen atom is partially negative because it is more electronegative than hydrogen, and vice versa (see: chemical polarity). 
However, there is a limit to how much salt can be dissolved in a given volume of water. This concentration is the solubility and related to the solubility product, Ksp. This equilibrium constant depends on the type of salt ( vs. , for example), temperature, and the common ion effect. One can calculate the amount of that will dissolve in 1 liter of pure water as follows: Ksp = [Ag+] × [Cl−] / M2 (definition of solubility product; M = mol/L) Ksp = 1.8 × 10−10 (from a table of solubility products) [Ag+] = [Cl−], in the absence of other silver or chloride salts, so [Ag+]2 = 1.8 × 10−10 M2 [Ag+] = 1.34 × 10−5 mol/L The result: 1 liter of water can dissolve 1.34 × 10−5 moles of at room temperature. Compared with other salts, is poorly soluble in water. For instance, table salt () has a much higher Ksp = 36 and is, therefore, more soluble. The following table gives an overview of solubility rules for various ionic compounds. Solubility of organic compounds The principle outlined above under polarity, that like dissolves like, is the usual guide to solubility with organic systems. For example, petroleum jelly will dissolve in gasoline because both petroleum jelly and gasoline are non-polar hydrocarbons. It will not, on the other hand, dissolve in ethyl alcohol or water, since the polarity of these solvents is too high. Sugar will not dissolve in gasoline, since sugar is too polar in comparison with gasoline. A mixture of gasoline and sugar can therefore be separated by filtration or extraction with water. Solid solution This term is often used in the field of metallurgy to refer to the extent that an alloying element will dissolve into the base metal without forming a separate phase. The solvus or solubility line (or curve) is the line (or lines) on a phase diagram that give the limits of solute addition. That is, the lines show the maximum amount of a component that can be added to another component and still be in solid solution. In the solid's crystalline structure, the 'solute' element can either take the place of the matrix within the lattice (a substitutional position; for example, chromium in iron) or take a place in a space between the lattice points (an interstitial position; for example, carbon in iron). In microelectronic fabrication, solid solubility refers to the maximum concentration of impurities one can place into the substrate. In solid compounds (as opposed to elements), the solubility of a solute element can also depend on the phases separating out in equilibrium. For example, amount of Sn soluble in the ZnSb phase can depend significantly on whether the phases separating out in equilibrium are (Zn4Sb3+Sn(L)) or (ZnSnSb2+Sn(L)). Besides these, the ZnSb compound with Sn as a solute can separate out into other combinations of phases after the solubility limit is reached depending on the initial chemical composition during synthesis. Each combination produces a different solubility of Sn in ZnSb. Hence solubility studies in compounds, concluded upon the first instance of observing secondary phases separating out might underestimate solubility. While the maximum number of phases separating out at once in equilibrium can be determined by the Gibb's phase rule, for chemical compounds there is no limit on the number of such phase separating combinations itself. Hence, establishing the "maximum solubility" in solid compounds experimentally can be difficult, requiring equilibration of many samples. 
If the dominant crystallographic defect (mostly interstitial or substitutional point defects) involved in the solid solution can be chemically intuited beforehand, then using some simple thermodynamic guidelines can considerably reduce the number of samples required to establish maximum solubility. Incongruent dissolution Many substances dissolve congruently (i.e. the composition of the solid and the dissolved solute stoichiometrically match). However, some substances may dissolve incongruently, whereby the composition of the solute in solution does not match that of the solid. This solubilization is accompanied by alteration of the "primary solid" and possibly formation of a secondary solid phase. However, in general, some primary solid also remains and a complex solubility equilibrium is established. For example, dissolution of albite may result in formation of gibbsite. In this case, the solubility of albite is expected to depend on the solid-to-solvent ratio. This kind of solubility is of great importance in geology, where it results in formation of metamorphic rocks. In principle, both congruent and incongruent dissolution can lead to the formation of secondary solid phases in equilibrium. So, in the field of materials science, the solubility for both cases is described more generally on chemical composition phase diagrams. Solubility prediction Solubility is a property of interest in many aspects of science, including but not limited to: environmental predictions, biochemistry, pharmacy, drug design, agrochemical design, and protein-ligand binding. Aqueous solubility is of fundamental interest owing to the vital biological and transportation functions played by water. In addition to this clear scientific interest in water solubility and solvent effects, accurate predictions of solubility are important industrially. The ability to accurately predict a molecule's solubility represents potentially large financial savings in many chemical product development processes, such as pharmaceuticals. In the pharmaceutical industry, solubility predictions form part of the early-stage lead optimisation process of drug candidates. Solubility remains a concern all the way to formulation. A number of methods have been applied to such predictions, including quantitative structure–activity relationships (QSAR), quantitative structure–property relationships (QSPR) and data mining. These models provide efficient predictions of solubility and represent the current standard. The drawback of such models is that they can lack physical insight. A method founded in physical theory, capable of achieving similar levels of accuracy at a sensible cost, would be a powerful tool scientifically and industrially. Methods founded in physical theory tend to use thermodynamic cycles, a concept from classical thermodynamics. The two common thermodynamic cycles used involve either the calculation of the free energy of sublimation (solid to gas without going through a liquid state) and the free energy of solvating a gaseous molecule (gas to solution), or the free energy of fusion (solid to a molten phase) and the free energy of mixing (molten to solution). These two processes are typically represented as thermodynamic cycle diagrams. These cycles have been used for attempts at first-principles predictions (solving using the fundamental physical equations) using physically motivated solvent models, to create parametric equations and QSPR models, and combinations of the two. 
The use of these cycles enables the calculation of the solvation free energy indirectly via either gas (in the sublimation cycle) or a melt (fusion cycle). This is helpful as calculating the free energy of solvation directly is extremely difficult. The free energy of solvation can be converted to a solubility value using various formulae, the most general case being shown below, where the numerator is the free energy of solvation, R is the gas constant and T is the temperature in kelvins. Well known fitted equations for solubility prediction are the general solubility equations. These equations stem from the work of Yalkowsky et al. The original formula is given first, followed by a revised formula which takes a different assumption of complete miscibility in octanol. These equations are founded on the principles of the fusion cycle. See also Notes References External links Chemical properties Physical properties Solutions Underwater diving physics
Solubility
[ "Physics", "Chemistry" ]
6,979
[ "Physical phenomena", "Applied and interdisciplinary physics", "Underwater diving physics", "Homogeneous chemical mixtures", "nan", "Solutions", "Physical properties" ]
59,503
https://en.wikipedia.org/wiki/Bioaccumulation
Bioaccumulation is the gradual accumulation of substances, such as pesticides or other chemicals, in an organism. Bioaccumulation occurs when an organism absorbs a substance faster than it can be lost or eliminated by catabolism and excretion. Thus, the longer the biological half-life of a toxic substance, the greater the risk of chronic poisoning, even if environmental levels of the toxin are not very high. Bioaccumulation, for example in fish, can be predicted by models. Hypotheses for molecular size cutoff criteria for use as bioaccumulation potential indicators are not supported by data. Biotransformation can strongly modify bioaccumulation of chemicals in an organism. Toxicity induced by metals is associated with bioaccumulation and biomagnification. Storage or uptake of a metal faster than it is metabolized and excreted leads to the accumulation of that metal. The presence of various chemicals and harmful substances in the environment can be analyzed and assessed with proper knowledge of bioaccumulation, helping with chemical control and usage. An organism can take up chemicals by breathing, absorbing through the skin, or swallowing. When the concentration of a chemical is higher within the organism compared to its surroundings (air or water), it is referred to as bioconcentration. Biomagnification is another process related to bioaccumulation, in which the concentration of the chemical or metal increases as it moves up from one trophic level to another. Naturally, the process of bioaccumulation is necessary for an organism to grow and develop; however, the accumulation of harmful substances can also occur. Examples Terrestrial examples An example of poisoning in the workplace can be seen from the phrase "mad as a hatter" (18th and 19th century England). Mercury was used in stiffening the felt that was used to make hats. This forms organic species such as methylmercury, which is lipid-soluble (fat-soluble), and tends to accumulate in the brain, resulting in mercury poisoning. Other lipid-soluble poisons include tetraethyllead compounds (the lead in leaded petrol), and DDT. These compounds are stored in the body fat, and when the fatty tissues are used for energy, the compounds are released and cause acute poisoning. Strontium-90, part of the fallout from atomic bombs, is chemically similar enough to calcium that it is taken up in forming bones, where its radiation can cause damage for a long time. Some animal species use bioaccumulation as a mode of defense: by consuming toxic plants or animal prey, an animal may accumulate the toxin, which then presents a deterrent to a potential predator. One example is the tobacco hornworm, which concentrates nicotine to a toxic level in its body as it consumes tobacco plants. Poisoning of small consumers can be passed along the food chain to affect the consumers later in the chain. Other compounds that are not normally considered toxic can be accumulated to toxic levels in organisms. The classic example is vitamin A, which becomes concentrated in livers of carnivores, e.g. polar bears: as pure carnivores that feed on other carnivores (seals), they accumulate extremely large amounts of vitamin A in their livers. It was known by the native peoples of the Arctic that the livers of carnivores should not be eaten, but Arctic explorers have suffered hypervitaminosis A from eating the livers of bears, and there has been at least one example of similar poisoning of Antarctic explorers eating husky dog livers. 
One notable example of this is the expedition of Sir Douglas Mawson, whose exploration companion died from eating the liver of one of their dogs. Aquatic examples Coastal fish (such as the smooth toadfish) and seabirds (such as the Atlantic puffin) are often monitored for heavy metal bioaccumulation. Methylmercury gets into freshwater systems through industrial emissions and rain. As its concentration increases up the food web, it can reach dangerous levels for both fish and the humans who rely on fish as a food source. Fish are typically assessed for bioaccumulation when they have been exposed to chemicals that are in their aqueous phases. Commonly tested fish species include the common carp, rainbow trout, and bluegill sunfish. Generally, fish are exposed to bioconcentration and bioaccumulation of organic chemicals in the environment through lipid layer uptake of water-borne chemicals. In other cases, the fish are exposed through ingestion/digestion of substances or organisms in the aquatic environment which contain the harmful chemicals. Naturally produced toxins can also bioaccumulate. The marine algal blooms known as "red tides" can result in local filter-feeding organisms such as mussels and oysters becoming toxic; coral reef fish can be responsible for the poisoning known as ciguatera when they accumulate a toxin called ciguatoxin from reef algae. In some eutrophic aquatic systems, biodilution can occur. This is a decrease in a contaminant with an increase in trophic level, due to higher concentrations of algae and bacteria diluting the concentration of the pollutant. Wetland acidification can raise the chemical or metal concentrations, which leads to an increased bioavailability in marine plants and freshwater biota. Plants situated there, which include both rooted and submerged plants, can be influenced by the bioavailability of metals. Studies of turtles as model species Bioaccumulation in turtles occurs when synthetic organic contaminants (e.g., PFAS), heavy metals, or high levels of trace elements enter a single organism, potentially affecting its health. Although there are ongoing studies of bioaccumulation in turtles, factors like pollution, climate change, and shifting landscapes can affect the amounts of these toxins in the ecosystem. The most common elements studied in turtles are mercury, cadmium, arsenic, and selenium. Heavy metals are released into rivers, streams, lakes, oceans, and other aquatic environments, and the plants that live in these environments will absorb the metals. Since the levels of trace elements are high in aquatic ecosystems, turtles will naturally consume various trace elements across aquatic environments by eating plants and sediments. Once these substances enter the bloodstream and muscle tissue, they will increase in concentration and will become toxic to the turtles, perhaps causing metabolic, endocrine system, and reproductive failure. Some marine turtles are used as experimental subjects to analyze bioaccumulation because of their shoreline habitats, which facilitate the collection of blood samples and other data. The turtle species are very diverse and contribute greatly to biodiversity, so many researchers find it valuable to collect data from various species. Freshwater turtles are another model species for investigating bioaccumulation. Due to their relatively limited home range, freshwater turtles can be associated with a particular catchment and its chemical contaminant profile. 
Developmental effects of turtles Toxic concentrations in turtle eggs may damage the developmental process of the turtle. For example, in the Australian freshwater short-neck turtle (Emydura macquarii macquarii), environmental PFAS concentrations were bioaccumulated by the mother and then offloaded into her eggs, which impacted developmental metabolic processes and fat stores. Furthermore, there is evidence that PFAS impacted the gut microbiome in exposed turtles. Toxic levels of heavy metals were observed to decrease egg-hatching rates in the Amazon River turtle, Podocnemis expansa. In these eggs, the heavy metals reduce the fat content and change how water is filtered throughout the embryo; this can affect the survival rate of the egg. See also Biomagnification (magnification of toxins with increasing trophic level) Chelation therapy Drug accumulation ratio Environmental impact of pesticides International POPs Elimination Network Persistent organic pollutants Phytoremediation (removal of pollutants by bioaccumulation in plants) References External links Bioaccumulation & Biomagnification Biomagnification graphic Biomagnification Definition Page Criteria used by the PBT Profiler Bioaccumulation & Biotransformation Biodegradable waste management Biodegradation Ecotoxicology Food chains Pollution Species
Bioaccumulation
[ "Chemistry" ]
1,691
[ "Biodegradation", "Biodegradable waste management" ]
59,521
https://en.wikipedia.org/wiki/Preadditive%20category
In mathematics, specifically in category theory, a preadditive category is another name for an Ab-category, i.e., a category that is enriched over the category of abelian groups, Ab. That is, an Ab-category C is a category such that every hom-set Hom(A,B) in C has the structure of an abelian group, and composition of morphisms is bilinear, in the sense that composition of morphisms distributes over the group operation. In formulas: and where + is the group operation. Some authors have used the term additive category for preadditive categories, but this page reserves this term for certain special preadditive categories (see below). Examples The most obvious example of a preadditive category is the category Ab itself. More precisely, Ab is a closed monoidal category. Note that commutativity is crucial here; it ensures that the sum of two group homomorphisms is again a homomorphism. In contrast, the category of all groups is not closed. See Medial category. Other common examples: The category of (left) modules over a ring R, in particular: the category of vector spaces over a field K. The algebra of matrices over a ring, thought of as a category as described in the article Additive category. Any ring, thought of as a category with only one object, is a preadditive category. Here composition of morphisms is just ring multiplication and the unique hom-set is the underlying abelian group. These will give you an idea of what to think of; for more examples, follow the links to below. Elementary properties Because every hom-set Hom(A,B) is an abelian group, it has a zero element 0. This is the zero morphism from A to B. Because composition of morphisms is bilinear, the composition of a zero morphism and any other morphism (on either side) must be another zero morphism. If you think of composition as analogous to multiplication, then this says that multiplication by zero always results in a product of zero, which is a familiar intuition. Extending this analogy, the fact that composition is bilinear in general becomes the distributivity of multiplication over addition. Focusing on a single object A in a preadditive category, these facts say that the endomorphism hom-set Hom(A,A) is a ring, if we define multiplication in the ring to be composition. This ring is the endomorphism ring of A. Conversely, every ring (with identity) is the endomorphism ring of some object in some preadditive category. Indeed, given a ring R, we can define a preadditive category R to have a single object A, let Hom(A,A) be R, and let composition be ring multiplication. Since R is an abelian group and multiplication in a ring is bilinear (distributive), this makes R a preadditive category. Category theorists will often think of the ring R and the category R as two different representations of the same thing, so that a particularly perverse category theorist might define a ring as a preadditive category with exactly one object (in the same way that a monoid can be viewed as a category with only one object—and forgetting the additive structure of the ring gives us a monoid). In this way, preadditive categories can be seen as a generalisation of rings. Many concepts from ring theory, such as ideals, Jacobson radicals, and factor rings can be generalized in a straightforward manner to this setting. When attempting to write down these generalizations, one should think of the morphisms in the preadditive category as the "elements" of the "generalized ring". 
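For reference, the two distributivity identities alluded to in the passage above ("In formulas: ... where + is the group operation") are the standard statement of bilinearity of composition; written out for morphisms f, f′ : A → B and g, g′ : B → C they read:

```latex
g \circ (f + f') = (g \circ f) + (g \circ f'),
\qquad
(g + g') \circ f = (g \circ f) + (g' \circ f).
```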
Additive functors If and are preadditive categories, then a functor is additive if it too is enriched over the category . That is, is additive if and only if, given any objects and of , the function is a group homomorphism. Most functors studied between preadditive categories are additive. For a simple example, if the rings and are represented by the one-object preadditive categories and , then a ring homomorphism from to is represented by an additive functor from to , and conversely. If and are categories and is preadditive, then the functor category is also preadditive, because natural transformations can be added in a natural way. If is preadditive too, then the category of additive functors and all natural transformations between them is also preadditive. The latter example leads to a generalization of modules over rings: If is a preadditive category, then is called the module category over . When is the one-object preadditive category corresponding to the ring , this reduces to the ordinary category of (left) -modules. Again, virtually all concepts from the theory of modules can be generalised to this setting. -linear categories More generally, one can consider a category enriched over the monoidal category of modules over a commutative ring , called an -linear category. In other words, each hom-set in has the structure of an -module, and composition of morphisms is -bilinear. When considering functors between two -linear categories, one often restricts to those that are -linear, so those that induce -linear maps on each hom-set. Biproducts Any finite product in a preadditive category must also be a coproduct, and conversely. In fact, finite products and coproducts in preadditive categories can be characterised by the following biproduct condition: The object B is a biproduct of the objects A1, ..., An if and only if there are projection morphisms pj: B → Aj and injection morphisms ij: Aj → B, such that (i1∘p1) + ··· + (in∘pn) is the identity morphism of B, pj∘ij is the identity morphism of Aj, and pj∘ik is the zero morphism from Ak to Aj whenever j and k are distinct. This biproduct is often written A1 ⊕ ··· ⊕ An, borrowing the notation for the direct sum. This is because the biproduct in well known preadditive categories like Ab is the direct sum. However, although infinite direct sums make sense in some categories, like Ab, infinite biproducts do not make sense (see ). The biproduct condition in the case n = 0 simplifies drastically; B is a nullary biproduct if and only if the identity morphism of B is the zero morphism from B to itself, or equivalently if the hom-set Hom(B,B) is the trivial ring. Note that because a nullary biproduct will be both terminal (a nullary product) and initial (a nullary coproduct), it will in fact be a zero object. Indeed, the term "zero object" originated in the study of preadditive categories like Ab, where the zero object is the zero group. A preadditive category in which every biproduct exists (including a zero object) is called additive. Further facts about biproducts that are mainly useful in the context of additive categories may be found under that subject. Kernels and cokernels Because the hom-sets in a preadditive category have zero morphisms, the notion of kernel and cokernel make sense. That is, if f: A → B is a morphism in a preadditive category, then the kernel of f is the equaliser of f and the zero morphism from A to B, while the cokernel of f is the coequaliser of f and this zero morphism. 
Unlike with products and coproducts, the kernel and cokernel of f are generally not equal in a preadditive category. When specializing to the preadditive categories of abelian groups or modules over a ring, this notion of kernel coincides with the ordinary notion of a kernel of a homomorphism, if one identifies the ordinary kernel K of f: A → B with its embedding K → A. However, in a general preadditive category there may exist morphisms without kernels and/or cokernels. There is a convenient relationship between the kernel and cokernel and the abelian group structure on the hom-sets. Given parallel morphisms f and g, the equaliser of f and g is just the kernel of g − f, if either exists, and the analogous fact is true for coequalisers. The alternative term "difference kernel" for binary equalisers derives from this fact. A preadditive category in which all biproducts, kernels, and cokernels exist is called pre-abelian. Further facts about kernels and cokernels in preadditive categories that are mainly useful in the context of pre-abelian categories may be found under that subject. Special cases Most of these special cases of preadditive categories have all been mentioned above, but they're gathered here for reference. A ring is a preadditive category with exactly one object. An additive category is a preadditive category with all finite biproducts. A pre-abelian category is an additive category with all kernels and cokernels. An abelian category is a pre-abelian category such that every monomorphism and epimorphism is normal. The preadditive categories most commonly studied are in fact abelian categories; for example, Ab is an abelian category. References Nicolae Popescu; 1973; Abelian Categories with Applications to Rings and Modules; Academic Press, Inc.; out of print Charles Weibel; 1994; An introduction to homological algebra; Cambridge Univ. Press Additive categories
Preadditive category
[ "Mathematics" ]
2,113
[ "Mathematical structures", "Category theory", "Additive categories" ]
59,524
https://en.wikipedia.org/wiki/Next-Generation%20Secure%20Computing%20Base
The Next-Generation Secure Computing Base (NGSCB; codenamed Palladium and also known as Trusted Windows) is a software architecture designed by Microsoft which claimed to provide users of the Windows operating system with better privacy, security, and system integrity. NGSCB was the result of years of research and development within Microsoft to create a secure computing solution that equaled the security of closed platforms such as set-top boxes while simultaneously preserving the backward compatibility, flexibility, and openness of the Windows operating system. Microsoft's primary stated objective with NGSCB was to "protect software from software." Part of the Trustworthy Computing initiative when unveiled in 2002, NGSCB was to be integrated with Windows Vista, then known as "Longhorn." NGSCB relied on hardware designed by the Trusted Computing Group to produce a parallel operation environment hosted by a new hypervisor (referred to as a sort of kernel in documentation) called the "Nexus" that existed alongside Windows and provided new applications with features such as hardware-based process isolation, data encryption based on integrity measurements, authentication of a local or remote machine or software configuration, and encrypted paths for user authentication and graphics output. NGSCB would facilitate the creation and distribution of digital rights management (DRM) policies pertaining to the use of information. NGSCB was subject to much controversy during its development, with critics contending that it would impose restrictions on users, enforce vendor lock-in, and undermine fair use rights and open-source software. It was first demonstrated by Microsoft at WinHEC 2003 before undergoing a revision in 2004 that would enable earlier applications to benefit from its functionality. Reports indicated in 2005 that Microsoft would change its plans with NGSCB so that it could ship Windows Vista by its self-imposed deadline year, 2006; instead, Microsoft would ship only part of the architecture, BitLocker, which can optionally use the Trusted Platform Module to validate the integrity of boot and system files prior to operating system startup. Development of NGSCB spanned approximately a decade before its cancellation, the lengthiest development period of a major feature intended for Windows Vista. NGSCB differed from technologies Microsoft billed as "pillars of Windows Vista"—Windows Presentation Foundation, Windows Communication Foundation, and WinFS—during its development in that it was not built with the .NET Framework and did not focus on managed code software development. NGSCB has yet to fully materialize; however, aspects of it are available in features such as BitLocker of Windows Vista, Measured Boot and UEFI of Windows 8, Certificate Attestation of Windows 8.1, Device Guard of Windows 10, and Device Encryption in Windows 11 Home editions, with TPM 2.0 mandatory for installation. History Early development Development of NGSCB began in 1997 after Peter Biddle conceived of new ways to protect content on personal computers. Biddle enlisted assistance from members of the Microsoft Research division, and other core contributors eventually included Blair Dillaway, Brian LaMacchia, Bryan Willman, Butler Lampson, John DeTreville, John Manferdelli, Marcus Peinado, and Paul England. 
Adam Barr, a former Microsoft employee who worked to secure the remote boot feature during development of Windows 2000, was approached by Biddle and colleagues during his tenure with an initiative tentatively known as "Trusted Windows," which aimed to protect DVD content from being copied. To this end, Lampson proposed the use of a hypervisor to execute a limited operating system dedicated to DVD playback alongside Windows 2000. Patents for a DRM operating system were later filed in 1999 by England, DeTreville and Lampson; Lampson noted that these patents were for NGSCB. Biddle and colleagues realized by 1999 that NGSCB was more applicable to privacy and security than content protection, and the project was formally given the green light by Microsoft in October 2001. During WinHEC 1999, Biddle discussed the intent to create a "trusted" architecture for Windows to leverage new hardware to promote confidence and security while preserving backward compatibility with previous software. On October 11, 1999, the Trusted Computing Platform Alliance, a consortium of various technology companies including Compaq, Hewlett-Packard, IBM, Intel, and Microsoft, was formed in an effort to promote personal computing confidence and security. The TCPA released detailed specifications for a trusted computing platform with focus on features such as code validation and encryption based on integrity measurements, hardware-based key storage, and machine authentication; these features required a new hardware component designed by the TCPA called the "Trusted Platform Module" (referred to as a "Security Support Component", "Security CoProcessor", or "Security Support Processor" in early NGSCB documentation). At WinHEC 2000, Microsoft released a technical presentation on the topics of protection of privacy, security, and intellectual property titled "Privacy, Security, and Content in Windows Platforms", which focused on turning Windows into a "platform of trust" for computer security, user content, and user privacy. Notable in the presentation is the contention that "there is no difference between privacy protection, computer security, and content protection"—"assurances of trust must be universally true". Microsoft reiterated these claims at WinHEC 2001. NGSCB was intended to protect all forms of content, unlike traditional rights management schemes, which focus only on the protection of audio tracks or movies rather than the users they have the potential to protect; this made it, in Biddle's words, "egalitarian". As "Palladium" Microsoft held its first design review for the NGSCB in April 2002, with approximately 37 companies under a non-disclosure agreement. NGSCB was publicly unveiled under its codename "Palladium" in a June 2002 article by Steven Levy for Newsweek that focused on its design, feature set, and origin. Levy briefly described potential features: access control, authentication, authorization, DRM, encryption, as well as protection from junk mail and malware, with example policies being email accessible only to an intended recipient and Microsoft Word documents readable for only a week after their creation; Microsoft later released a guide clarifying these assertions as hyperbolic; namely, that NGSCB would not intrinsically enforce content protection, or protect against junk mail or malware. Instead, it would provide a platform on which developers could build new solutions that did not yet exist, by isolating applications and storing secrets for them. 
Microsoft was not sure whether to "expose the feature in the Control Panel or present it as a separate utility," but NGSCB would be an opt-in solution—disabled by default. Microsoft PressPass later interviewed John Manferdelli, who restated and expanded on many of the key points discussed in the article by Newsweek. Manferdelli described it as an evolutionary platform for Windows in July, articulating how "'Palladium' will not require DRM, and DRM will not require 'Palladium'." Microsoft sought a group program manager in August to assist in leading the development of several Microsoft technologies including NGSCB. Paul Otellini announced Intel's support for NGSCB with a set of chipset, platform, and processor technologies codenamed "LaGrande" at Intel Developer Forum 2002, which would provide an NGSCB hardware foundation and preserve backward compatibility with previous software. As NGSCB NGSCB was known as "Palladium" until January 24, 2003, when Microsoft announced it had been renamed as "Next-Generation Secure Computing Base." Project manager Mario Juarez stated this name was chosen to avoid legal action from an unnamed company which had acquired the rights to the "Palladium" name, as well as to reflect Microsoft's commitment to NGSCB in the upcoming decade. Juarez acknowledged the previous name was controversial, but denied it was changed by Microsoft to dodge criticism. The Trusted Computing Platform Alliance was superseded by the Trusted Computing Group in April 2003. A principal goal of the new consortium was to produce a Trusted Platform Module (TPM) specification compatible with NGSCB; the previous specification, TPM 1.1, did not meet its requirements. TPM 1.2 was designed for compliance with NGSCB and introduced many features for such platforms. The first TPM 1.2 specification, Revision 62, was released in 2003. Biddle emphasized in June 2003 that hardware vendors and software developers were vital to NGSCB. Microsoft publicly demonstrated NGSCB for the first time at WinHEC 2003, where it protected data in memory from an attacker; prevented access to—and alerted the user of—an application that had been changed; and prevented a remote administration tool from capturing an instant messaging conversation. Despite Microsoft's desire to demonstrate NGSCB on hardware, software emulation was required, as few hardware components were available. Biddle reiterated that NGSCB was a set of evolutionary enhancements to Windows, basing this assessment on preserved backward compatibility and employed concepts in use before its development, but said the capabilities and scenarios it would enable would be revolutionary. Microsoft also revealed its multi-year roadmap for NGSCB, with the next major development milestone scheduled for the Professional Developers Conference, indicating that subsequent versions would ship concurrently with pre-release builds of Windows Vista; however, news reports suggested that NGSCB would not be integrated with Windows Vista when released, but would instead be made available as separate software for the operating system. Microsoft also announced details related to adoption and deployment of NGSCB at WinHEC 2003, stating that it would create a new value proposition for customers without significantly increasing the cost of computers; NGSCB adoption during the year of its introductory release was not anticipated and immediate support for servers was not expected. 
On the last day of the conference, Biddle said NGSCB needed to provide users with a way to differentiate between secured and unsecured windows—that a secure window should be "noticeably different" to help protect users from spoofing attacks; Nvidia was the earliest to announce this feature. WinHEC 2003 represented an important development milestone for NGSCB. Microsoft dedicated several hours to presentations and released many technical whitepapers, and companies including Atmel, Comodo Group, Fujitsu, and SafeNet produced preliminary hardware for the demonstration. Microsoft also demonstrated NGSCB at several U.S. campuses in California and in New York in June 2003. NGSCB was among the topics discussed during Microsoft's PDC 2003 with a pre-beta software development kit, known as the Developer Preview, being distributed to attendees. The Developer Preview was the first time that Microsoft made NGSCB code available to the developer community and was offered by the company as an educational opportunity for NGSCB software development. With this release, Microsoft stated that it was primarily focused on supporting business and enterprise applications and scenarios with the first version of the NGSCB scheduled to ship with Windows Vista, adding that it intended to address consumers with a subsequent version of the technology, but did not provide an estimated time of delivery for this version. At the conference, Jim Allchin said that Microsoft was continuing to work with hardware vendors so that they would be able to support the technology, and Bill Gates expected a new generation of central processing units (CPUs) to offer full support. Following PDC 2003, NGSCB was demonstrated again on prototype hardware during the annual RSA Security conference in November. Microsoft announced at WinHEC 2004 that it would revise NGSCB in response to feedback from customers and independent software vendors who did not desire to rewrite their existing programs in order to benefit from its functionality; the revision would also provide more direct support for Windows with protected environments for the operating system, its components, and applications, instead of it being an environment unto itself for new applications. The NGSCB secure input feature would also undergo a significant revision based on cost assessments, hardware requirements, and usability issues of the previous implementation. There were subsequent reports that Microsoft would cease developing NGSCB; Microsoft denied these reports and reaffirmed its commitment to delivery. Additional reports published later that year suggested that Microsoft would make further changes based on feedback from the industry. The absence of continual updates from Microsoft on NGSCB progress in 2005 caused industry insiders to speculate that NGSCB had been cancelled. At the Microsoft Management Summit event, Steve Ballmer said that the company would build on the security foundation it had started with the NGSCB to create a new set of virtualization technologies for Windows, which later became Hyper-V. Reports during WinHEC 2005 indicated Microsoft scaled back its plans for NGSCB, so that it could ship Windows Vista—which had already been beset by numerous delays and even a "development reset"—within a reasonable timeframe; instead of isolating components, NGSCB would offer "Secure Startup" ("BitLocker Drive Encryption") to encrypt disk volumes and validate both pre-boot firmware and operating system components. 
Microsoft intended to deliver other aspects of NGSCB later. Jim Allchin stated NGSCB would "marry hardware and software to gain better security", which was instrumental in the development of BitLocker. Architecture and technical details A complete Microsoft-based Trusted Computing-enabled system will consist not only of software components developed by Microsoft but also of hardware components developed by the Trusted Computing Group. The majority of features introduced by NGSCB are heavily reliant on specialized hardware and so will not operate on PCs predating 2004. In current Trusted Computing specifications, there are two hardware components: the Trusted Platform Module (TPM), which will provide secure storage of cryptographic keys and a secure cryptographic co-processor, and a curtained memory feature in the CPU. In NGSCB, there are two software components, the Nexus, a security kernel that is part of the Operating System that provides a secure environment (Nexus mode) for trusted code to run in, and Nexus Computing Agents (NCAs), trusted modules which run in Nexus mode within NGSCB-enabled applications. Secure storage and attestation At the time of manufacture, a cryptographic key is generated and stored within the TPM. This key is never transmitted to any other component, and the TPM is designed in such a way that it is extremely difficult to retrieve the stored key by reverse engineering or any other method, even to the owner. Applications can pass data encrypted with this key to be decrypted by the TPM, but the TPM will only do so under certain strict conditions. Specifically, decrypted data will only ever be passed to authenticated, trusted applications, and will only ever be stored in curtained memory, making it inaccessible to other applications and the Operating System. Although the TPM can only store a single cryptographic key securely, secure storage of arbitrary data is by extension possible by encrypting the data such that it may only be decrypted using the securely stored key. The TPM is also able to produce a cryptographic signature based on its hidden key. This signature may be verified by the user or by any third party, and so can therefore be used to provide remote attestation that the computer is in a secure state. Curtained memory NGSCB also relies on a curtained memory feature provided by the CPU. Data within curtained memory can only be accessed by the application to which it belongs, and not by any other application or the Operating System. The attestation features of the TPM can be used to confirm to a trusted application that it is genuinely running in curtained memory; it is therefore very difficult for anyone, including the owner, to trick a trusted application into running outside of curtained memory. This in turn makes reverse engineering of a trusted application extremely difficult. Applications NGSCB-enabled applications are to be split into two distinct parts, the NCA, a trusted module with access to a limited Application Programming Interface (API), and an untrusted portion, which has access to the full Windows API. Any code which deals with NGSCB functions must be located within the NCA. The reason for this split is that the Windows API has developed over many years and is as a result extremely complex and difficult to audit for security bugs. To maximize security, trusted code is required to use a smaller, carefully audited API. Where security is not paramount, the full API is available. 
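As a rough, self-contained illustration of the sealed-storage idea described above — a secret bound to a measurement of the loaded software and released only when the measurement matches — the following toy sketch uses ordinary hashing in place of real TPM hardware; none of the names correspond to the actual NGSCB or TPM APIs.

```python
import hashlib
import hmac
import os

def measure(components: list[bytes]) -> bytes:
    """Hash-chain the loaded components, in the spirit of extending a PCR."""
    state = b"\x00" * 32
    for blob in components:
        state = hashlib.sha256(state + hashlib.sha256(blob).digest()).digest()
    return state

def seal(secret: bytes, measurement: bytes, device_key: bytes) -> tuple[bytes, bytes]:
    """Bind a secret to a measurement using a key that never leaves the 'chip'."""
    keystream = hashlib.sha256(device_key + measurement).digest()
    ciphertext = bytes(a ^ b for a, b in zip(secret, keystream))
    tag = hmac.new(device_key, measurement + ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

def unseal(ciphertext: bytes, tag: bytes, measurement: bytes, device_key: bytes) -> bytes:
    """Release the secret only if the current measurement matches the sealed one."""
    expected = hmac.new(device_key, measurement + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("software configuration changed; refusing to unseal")
    keystream = hashlib.sha256(device_key + measurement).digest()
    return bytes(a ^ b for a, b in zip(ciphertext, keystream))

device_key = os.urandom(32)                    # stands in for the key kept inside the TPM
trusted = measure([b"nexus", b"trusted-app"])
blob, tag = seal(b"volume encryption key", trusted, device_key)

print(unseal(blob, tag, trusted, device_key))  # same measurement: secret is released
try:
    unseal(blob, tag, measure([b"nexus", b"tampered-app"]), device_key)
except PermissionError as exc:
    print("unseal refused:", exc)              # changed configuration: secret stays sealed
```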
Uses and scenarios NGSCB enables new categories of applications and scenarios. Examples of uses cited by Microsoft include decentralized access control policies; digital rights management services for consumers, content providers, and enterprises; protected instant messaging conversations and online transactions; and more secure forms of machine health compliance, network authentication, and remote access. NGSCB-secured virtual private network access was one of the earliest scenarios envisaged by Microsoft. NGSCB can also strengthen software update mechanisms such as those belonging to antivirus software or Windows Update. An early NGSCB privacy scenario conceived of by Microsoft is the "wine purchase scenario," where a user can safely conduct a transaction with an online merchant without divulging personally identifiable information during the transaction. With the release of the NGSCB Developer Preview during PDC 2003, Microsoft emphasized the following enterprise applications and scenarios: document signing, secured data viewing, secured instant messaging, and secured plug-ins for emailing. WinHEC 2004 scenarios During WinHEC 2004, Microsoft revealed two features based on its revision of NGSCB, Cornerstone and Code Integrity Rooting: Cornerstone would protect a user's login and authentication information by securely transmitting it to NGSCB-protected Windows components for validation, finalizing the user authentication process by releasing access to the SYSKEY if validation was successful. It was intended to protect data on laptops that had been lost or stolen to prevent hackers or thieves from accessing it even if they had performed a software-based attack or booted into an alternative operating system. Code Integrity Rooting would validate boot and system files prior to the startup of Microsoft Windows. If validation of these components failed, the SYSKEY would not be released. BitLocker is the combination of these features; "Cornerstone" was the codename of BitLocker, and BitLocker validates pre-boot firmware and operating system components before boot, which protects SYSKEY from unauthorized access; an unsuccessful validation prohibits access to a protected system. Reception Reaction to NGSCB after its unveiling by Newsweek was largely negative. While its security features were praised, critics contended that NGSCB could be used to impose restrictions on users; lock-out competing software vendors; and undermine fair use rights and open source software such as Linux. Microsoft's characterization of NGSCB as a security technology was subject to criticism as its origin focused on DRM. NGSCB's announcement occurred only a few years after Microsoft was accused of anti-competitive practices during the United States v. Microsoft Corporation antitrust case, a detail which called the company's intentions for the technology into question—NGSCB was regarded as an effort by the company to maintain its dominance in the personal computing industry. The notion of a "Trusted Windows" architecture—one that implied Windows itself was untrustworthy—would also be a source of contention within the company itself. After NGSCB's unveiling, Microsoft drew frequent comparisons to Big Brother, an oppressive dictator of a totalitarian state in George Orwell's dystopian novel Nineteen Eighty-Four. The Electronic Privacy Information Center legislative counsel, Chris Hoofnagle, described Microsoft's characterization of the NGSCB as "Orwellian." 
Big Brother Awards bestowed Microsoft with an award because of NGSCB. Bill Gates addressed these comments at a homeland security conference by stating that NGSCB "can make our country more secure and prevent the nightmare vision of George Orwell at the same time." Steven Levy—the author who unveiled the existence of the NGSCB—claimed in a 2004 front-page article for Newsweek that NGSCB could eventually lead to an "information infrastructure that encourages censorship, surveillance, and suppression of the creative impulse where anonymity is outlawed and every penny spent is accounted for." However, Microsoft outlined a scenario enabled by NGSCB that allows a user to conduct a transaction without divulging personally identifiable information. Ross Anderson of Cambridge University was among the most vocal critics of NGSCB and of Trusted Computing. Anderson alleged that the technologies were designed to satisfy federal agency requirements; enable content providers and other third-parties to remotely monitor or delete data in users' machines; use certificate revocation lists to ensure that only content deemed "legitimate" could be copied; and use unique identifiers to revoke or validate files; he compared this to the attempts by the Soviet Union to "register and control all typewriters and fax machines." Anderson also claimed that the TPM could control the execution of applications on a user's machine and, because of this, bestowed to it a derisive "Fritz Chip" name in reference to United States Senator Ernest "Fritz" Hollings, who had recently proposed DRM legislation such as the Consumer Broadband and Digital Television Promotion Act for consumer electronic devices. Anderson's report was referenced extensively in the news media and appeared in publications such as BBC News, The New York Times, and The Register. David Safford of IBM Research stated that Anderson presented several technical errors within his report, namely that the proposed capabilities did not exist within any specification and that many were beyond the scope of trusted platform design. Anderson later alleged that BitLocker was designed to facilitate DRM and to lock out competing software on an encrypted system, and, in spite of his allegation that NGSCB was designed for federal agencies, advocated for Microsoft to add a backdoor to BitLocker. Similar sentiments were expressed by Richard Stallman, founder of the GNU Project and Free Software Foundation, who alleged that Trusted Computing technologies were designed to enforce DRM and to prevent users from running unlicensed software. In 2015, Stallman stated that "the TPM has proved a total failure" for DRM and that "there are reasons to think that it will not be feasible to use them for DRM." After the release of Anderson's report, Microsoft stated in an NGSCB FAQ that "enhancements to Windows under the NGSCB architecture have no mechanism for filtering content, nor do they provide a mechanism for proactively searching the Internet for 'illegal' content [...] Microsoft is firmly opposed to putting 'policing functions' into nexus-aware PCs and does not intend to do so" and that the idea was in direct opposition with the design goals set forth for NGSCB, which was "built on the premise that no policy will be imposed that is not approved by the user." 
Concerns about the NGSCB TPM were also raised in that it would use what are essentially unique machine identifiers, which drew comparisons to the Intel Pentium III processor serial number, a unique hardware identification number of the 1990s viewed as a risk to end-user privacy. NGSCB, however, mandates that disclosure or use of the keys provided by the TPM be based solely on user discretion; in contrast, Intel's Pentium III included a unique serial number that could potentially be revealed to any application. NGSCB, also unlike Intel's Pentium III, would provide optional features to allow users to indirectly identify themselves to external requestors. In response to concerns that NGSCB would take control away from users for the sake of content providers, Bill Gates stated that the latter should "provide their content in easily accessible forms or else it ends up encouraging piracy." Bryan Willman, Marcus Peinado, Paul England, and Peter Biddle—four NGSCB engineers—realized early during the development of NGSCB that DRM would ultimately fail in its efforts to prevent piracy. In 2002, the group released a paper titled "The Darknet and the Future of Content Distribution" that outlined how content protection mechanisms are demonstrably futile. The paper's premise circulated within Microsoft during the late 1990s and was a source of controversy within Microsoft; Biddle stated that the company almost terminated his employment as a result of the paper's release. A 2003 report published by Harvard University researchers suggested that NGSCB and similar technologies could facilitate the secure distribution of copyrighted content across peer-to-peer networks. Not all assessments were negative. Paul Thurrott praised NGSCB, stating that it was "Microsoft's Trustworthy Computing initiative made real" and that it would "form the basis of next-generation computer systems." Scott Bekker of Redmond Magazine stated that NGSCB was misunderstood because of its controversy and that it appeared to be a "promising, user-controlled defense against privacy intrusions and security violations." In February 2004, In-Stat/MDR, publisher of the Microprocessor Report, bestowed NGSCB with its Best Technology award. Malcom Crompton, Australian Privacy Commissioner, stated that "NGSCB has great privacy enhancing potential [...] Microsoft has recognised there is a privacy issue [...] we should all work with them, give them the benefit of the doubt and urge them to do the right thing." When Microsoft announced at WinHEC 2004 that it would be revising NGSCB so that previous applications would not have to be rewritten, Martin Reynolds of Gartner praised the company for this decision as it would create a "more sophisticated" version of NGSCB that would simplify development. David Wilson, writing for South China Morning Post, defended NGSCB by saying that "attacking the latest Microsoft monster is an international blood sport" and that "even if Microsoft had a new technology capable of ending Third World hunger and First World obesity, digital seers would still lambaste it because they view Bill Gates as a grey incarnation of Satan." Microsoft noted that negative reaction to NGSCB gradually waned after events such as the USENIX Annual Technical Conference in 2003, and several Fortune 500 companies also expressed interest in it. 
When it was reported in 2005 that Microsoft would scale back its plans and incorporate only BitLocker with Windows Vista, concerns pertaining to digital rights management, erosion of user rights, and vendor lock-in remained. In 2008, Biddle stated that negative perception was the most significant contributing factor responsible for the cessation of NGSCB's development. Vulnerability In a 2003 article, Dan Boneh and David Brumley indicated that projects like NGSCB may be vulnerable to timing attacks. See also Microsoft Pluton Secure Boot Trusted Execution Technology Trusted Computing Trusted Platform Module Intel Management Engine References External links Microsoft's NGSCB home page (Archived on 2006-07-05) Trusted Computing Group home page System Integrity Team blog — team blog for NGSCB technologies (Archived on 2008-10-21) Security WMI Providers Reference on MSDN, including BitLocker Drive Encryption and Trusted Platform Module (both components of NGSCB) TPM Base Services on MSDN Development Considerations for Nexus Computing Agents Cryptographic software Discontinued Windows components Disk encryption Microsoft criticisms and controversies Microsoft initiatives Microsoft Windows security technology Trusted computing Windows Vista
Next-Generation Secure Computing Base
[ "Mathematics", "Engineering" ]
5,595
[ "Cybersecurity engineering", "Cryptographic software", "Trusted computing", "Mathematical software" ]
59,529
https://en.wikipedia.org/wiki/Solubility%20equilibrium
Solubility equilibrium is a type of dynamic equilibrium that exists when a chemical compound in the solid state is in chemical equilibrium with a solution of that compound. The solid may dissolve unchanged, with dissociation, or with chemical reaction with another constituent of the solution, such as acid or alkali. Each solubility equilibrium is characterized by a temperature-dependent solubility product which functions like an equilibrium constant. Solubility equilibria are important in pharmaceutical, environmental and many other scenarios. Definitions A solubility equilibrium exists when a chemical compound in the solid state is in chemical equilibrium with a solution containing the compound. This type of equilibrium is an example of dynamic equilibrium in that some individual molecules migrate between the solid and solution phases such that the rates of dissolution and precipitation are equal to one another. When equilibrium is established and the solid has not all dissolved, the solution is said to be saturated. The concentration of the solute in a saturated solution is known as the solubility. Units of solubility may be molar (mol dm−3) or expressed as mass per unit volume, such as μg mL−1. Solubility is temperature dependent. A solution containing a higher concentration of solute than the solubility is said to be supersaturated. A supersaturated solution may be induced to come to equilibrium by the addition of a "seed" which may be a tiny crystal of the solute, or a tiny solid particle, which initiates precipitation. There are three main types of solubility equilibria. Simple dissolution. Dissolution with dissociation reaction. This is characteristic of salts. The equilibrium constant is known in this case as a solubility product. Dissolution with ionization reaction. This is characteristic of the dissolution of weak acids or weak bases in aqueous media of varying pH. In each case an equilibrium constant can be specified as a quotient of activities. This equilibrium constant is dimensionless as activity is a dimensionless quantity. However, use of activities is very inconvenient, so the equilibrium constant is usually divided by the quotient of activity coefficients, to become a quotient of concentrations. See Equilibrium chemistry#Equilibrium constant for details. Moreover, the activity of a solid is, by definition, equal to 1 so it is omitted from the defining expression. For a chemical equilibrium the solubility product, Ksp for the compound ApBq is defined as follows where [A] and [B] are the concentrations of A and B in a saturated solution. A solubility product has a similar functionality to an equilibrium constant though formally Ksp has the dimension of (concentration)p+q. Effects of conditions Temperature effect Solubility is sensitive to changes in temperature. For example, sugar is more soluble in hot water than cool water. It occurs because solubility products, like other types of equilibrium constants, are functions of temperature. In accordance with Le Chatelier's Principle, when the dissolution process is endothermic (heat is absorbed), solubility increases with rising temperature. This effect is the basis for the process of recrystallization, which can be used to purify a chemical compound. When dissolution is exothermic (heat is released) solubility decreases with rising temperature. Sodium sulfate shows increasing solubility with temperature below about 32.4 °C, but a decreasing solubility at higher temperature. 
This is because the solid phase is the decahydrate () below the transition temperature, but a different hydrate above that temperature. The dependence on temperature of solubility for an ideal solution (achieved for low solubility substances) is given by the following expression containing the enthalpy of melting, ΔmH, and the mole fraction of the solute at saturation: where is the partial molar enthalpy of the solute at infinite dilution and the enthalpy per mole of the pure crystal. This differential expression for a non-electrolyte can be integrated on a temperature interval to give: For nonideal solutions activity of the solute at saturation appears instead of mole fraction solubility in the derivative with respect to temperature: Common-ion effect The common-ion effect is the effect of decreased solubility of one salt when another salt that has an ion in common with it is also present. For example, the solubility of silver chloride, AgCl, is lowered when sodium chloride, a source of the common ion chloride, is added to a suspension of AgCl in water. The solubility, S, in the absence of a common ion can be calculated as follows. The concentrations [Ag+] and [Cl−] are equal because one mole of AgCl would dissociate into one mole of Ag+ and one mole of Cl−. Let the concentration of [Ag+(aq)] be denoted by x. Then Ksp for AgCl is equal to at 25 °C, so the solubility is . Now suppose that sodium chloride is also present, at a concentration of 0.01 mol dm−3 = 0.01 M. The solubility, ignoring any possible effect of the sodium ions, is now calculated by This is a quadratic equation in x, which is also equal to the solubility. In the case of silver chloride, x2 is very much smaller than 0.01 M x, so the first term can be ignored. Therefore a considerable reduction from . In gravimetric analysis for silver, the reduction in solubility due to the common ion effect is used to ensure "complete" precipitation of AgCl. Particle size effect The thermodynamic solubility constant is defined for large monocrystals. Solubility will increase with decreasing size of solute particle (or droplet) because of the additional surface energy. This effect is generally small unless particles become very small, typically smaller than 1 μm. The effect of the particle size on solubility constant can be quantified as follows: where *KA is the solubility constant for the solute particles with the molar surface area A, *KA→0 is the solubility constant for substance with molar surface area tending to zero (i.e., when the particles are large), γ is the surface tension of the solute particle in the solvent, Am is the molar surface area of the solute (in m2/mol), R is the universal gas constant, and T is the absolute temperature. Salt effects The salt effects (salting in and salting-out) refers to the fact that the presence of a salt which has no ion in common with the solute, has an effect on the ionic strength of the solution and hence on activity coefficients, so that the equilibrium constant, expressed as a concentration quotient, changes. Phase effect Equilibria are defined for specific crystal phases. Therefore, the solubility product is expected to be different depending on the phase of the solid. For example, aragonite and calcite will have different solubility products even though they have both the same chemical identity (calcium carbonate). Under any given conditions one phase will be thermodynamically more stable than the other; therefore, this phase will form when thermodynamic equilibrium is established. 
However, kinetic factors may favor the formation of the unfavorable precipitate (e.g. aragonite), which is then said to be in a metastable state. In pharmacology, the metastable state is sometimes referred to as the amorphous state. Amorphous drugs have higher solubility than their crystalline counterparts due to the absence of long-distance interactions inherent in the crystal lattice. Thus, it takes less energy to solvate the molecules in the amorphous phase. The effect of the amorphous phase on solubility is widely used to make drugs more soluble. Pressure effect For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as: where is the mole fraction of the -th component in the solution, is the pressure, is the absolute temperature, is the partial molar volume of the th component in the solution, is the partial molar volume of the th component in the dissolving solid, and is the universal gas constant. The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time. Quantitative aspects Simple dissolution Dissolution of an organic solid can be described as an equilibrium between the substance in its solid and dissolved forms. For example, when sucrose (table sugar) forms a saturated solution An equilibrium expression for this reaction can be written, as for any chemical reaction (products over reactants): where Ko is called the thermodynamic solubility constant. The braces indicate activity. The activity of a pure solid is, by definition, unity. Therefore The activity of a substance, A, in solution can be expressed as the product of the concentration, [A], and an activity coefficient, γ. When Ko is divided by γ, the solubility constant, Ks, is obtained. This is equivalent to defining the standard state as the saturated solution so that the activity coefficient is equal to one. The solubility constant is a true constant only if the activity coefficient is not affected by the presence of any other solutes that may be present. The unit of the solubility constant is the same as the unit of the concentration of the solute. For sucrose Ks = 1.971 mol dm−3 at 25 °C. This shows that the solubility of sucrose at 25 °C is nearly 2 mol dm−3 (540 g/L). Sucrose is unusual in that it does not easily form a supersaturated solution at higher concentrations, as do most other carbohydrates. Dissolution with dissociation Ionic compounds normally dissociate into their constituent ions when they dissolve in water. For example, for silver chloride: AgCl(s) ⇌ Ag+(aq) + Cl−(aq) The expression for the equilibrium constant for this reaction is: where is the thermodynamic equilibrium constant and braces indicate activity. The activity of a pure solid is, by definition, equal to one. When the solubility of the salt is very low, the activity coefficients of the ions in solution are nearly equal to one. 
By setting them to be actually equal to one, this expression reduces to the solubility product expression: Ksp = [Ag+][Cl−]. For 2:2 and 3:3 salts, such as CaSO4 and FePO4, the general expression for the solubility product is the same as for a 1:1 electrolyte, Ksp = [M][A] (electrical charges are omitted in general expressions, for simplicity of notation). With an unsymmetrical salt like Ca(OH)2 the solubility expression is given by Ksp = [Ca][OH]^2. Since the concentration of hydroxide ions is twice the concentration of calcium ions, this reduces to Ksp = 4[Ca]^3. In general, with the chemical equilibrium MpAq <=> pM + qA, the following table, showing the relationship between the solubility of a compound and the value of its solubility product, can be derived.

Salt | p | q | Solubility, S
AgCl; CaSO4; FePO4 | 1 | 1 | √Ksp
Na2SO4; Ca(OH)2 | 2; 1 | 1; 2 | (Ksp/4)^1/3
Na3PO4; FeCl3 | 3; 1 | 1; 3 | (Ksp/27)^1/4
Al2(SO4)3; Ca3(PO4)2 | 2; 3 | 3; 2 | (Ksp/108)^1/5
MpAq | p | q | (Ksp/(p^p q^q))^1/(p+q)

Solubility products are often expressed in logarithmic form. Thus, for calcium sulfate, with Ksp of about 5 × 10−5, log Ksp ≈ −4.3. The smaller the value of Ksp, or the more negative the log value, the lower the solubility. Some salts are not fully dissociated in solution. Examples include MgSO4, famously discovered by Manfred Eigen to be present in seawater as both an inner-sphere complex and an outer-sphere complex. The solubility of such salts is calculated by the method outlined in dissolution with reaction. Hydroxides The solubility product for the hydroxide of a metal ion, Mn+, is usually defined as follows: Ksp = [Mn+][OH−]^n. However, general-purpose computer programs are designed to use hydrogen ion concentrations, and therefore use an alternative definition. For hydroxides, solubility products are often given in a modified form, K*sp, using hydrogen ion concentration in place of hydroxide ion concentration. The two values are related by the self-ionization constant for water, Kw: K*sp = Ksp/Kw^n. For example, at ambient temperature, for calcium hydroxide, Ca(OH)2, lg Ksp is ca. −5 and lg K*sp ≈ −5 + 2 × 14 ≈ 23. Dissolution with reaction A typical reaction with dissolution involves a weak base, B, dissolving in an acidic aqueous solution. This reaction is very important for pharmaceutical products. Dissolution of weak acids in alkaline media is similarly important. The uncharged molecule usually has lower solubility than the ionic form, so solubility depends on pH and the acid dissociation constant of the solute. The term "intrinsic solubility" is used to describe the solubility of the un-ionized form in the absence of acid or alkali. Leaching of aluminium salts from rocks and soil by acid rain is another example of dissolution with reaction: alumino-silicates are bases which react with the acid to form soluble species, such as Al3+(aq). Formation of a chemical complex may also change solubility. A well-known example is the addition of a concentrated solution of ammonia to a suspension of silver chloride, in which dissolution is favoured by the formation of an ammine complex. When sufficient ammonia is added to a suspension of silver chloride, the solid dissolves. The addition of water softeners to washing powders to inhibit the formation of soap scum provides an example of practical importance. Experimental determination The determination of solubility is fraught with difficulties. First and foremost is the difficulty of establishing that the system is in equilibrium at the chosen temperature. This is because both precipitation and dissolution reactions may be extremely slow. If the process is very slow, solvent evaporation may be an issue. Supersaturation may occur.
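The rows of the table relating Ksp to solubility all follow from the single general formula S = (Ksp/(p^p q^q))^1/(p+q). A minimal Python sketch of that relationship (the Ksp figures passed in are placeholders, not measured values):

```python
def solubility_from_ksp(ksp: float, p: int, q: int) -> float:
    """Solubility S of a salt MpAq, in mol dm^-3, from its solubility product,
    using Ksp = (p*S)**p * (q*S)**q, i.e. S = (Ksp / (p**p * q**q))**(1/(p+q))."""
    return (ksp / (p**p * q**q)) ** (1.0 / (p + q))

# Examples matching the table rows (Ksp values are illustrative only):
print(solubility_from_ksp(1e-10, 1, 1))   # 1:1 salt,  S = sqrt(Ksp)        = 1.0e-05
print(solubility_from_ksp(1e-6, 1, 2))    # Ca(OH)2,   S = (Ksp/4)**(1/3)   ~ 6.3e-03
print(solubility_from_ksp(1e-32, 3, 2))   # Ca3(PO4)2, S = (Ksp/108)**(1/5) ~ 1.6e-07
```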
With very insoluble substances, the concentrations in solution are very low and difficult to determine. The methods used fall broadly into two categories, static and dynamic. Static methods In static methods a mixture is brought to equilibrium and the concentration of a species in the solution phase is determined by chemical analysis. This usually requires separation of the solid and solution phases. In order to do this, the equilibration and separation should be performed in a thermostatted room. Very low concentrations can be measured if a radioactive tracer is incorporated in the solid phase. A variation of the static method is to add a solution of the substance in a non-aqueous solvent, such as dimethyl sulfoxide, to an aqueous buffer mixture. Immediate precipitation may occur, giving a cloudy mixture. The solubility measured for such a mixture is known as "kinetic solubility". The cloudiness is due to the fact that the precipitate particles are very small, resulting in Tyndall scattering. In fact, the particles are so small that the particle size effect comes into play and kinetic solubility is often greater than equilibrium solubility. Over time the cloudiness will disappear as the size of the crystallites increases, and eventually equilibrium will be reached in a process known as precipitate ageing. Dynamic methods Solubility values of organic acids, bases, and ampholytes of pharmaceutical interest may be obtained by a process called "Chasing equilibrium solubility". In this procedure, a quantity of substance is first dissolved at a pH where it exists predominantly in its ionized form, and then a precipitate of the neutral (un-ionized) species is formed by changing the pH. Subsequently, the rate of change of pH due to precipitation or dissolution is monitored, and strong acid and base titrant are added to adjust the pH in order to discover the equilibrium conditions, when the two rates are equal. The advantage of this method is that it is relatively fast, as the quantity of precipitate formed is quite small. However, the performance of the method may be affected by the formation of supersaturated solutions. See also Solubility table: A table of solubilities of mostly inorganic salts at temperatures between 0 and 100 °C. Solvent models References External links Section 6.9: Solubilities of ionic salts. Includes a discussion of the thermodynamics of dissolution. IUPAC–NIST solubility database Solubility products of simple inorganic compounds Solvent activity along a saturation line and solubility Solubility challenge: Predict solubilities from a database of 100 molecules. The database, of mostly compounds of pharmaceutical interest, is available at One hundred molecules with solubilities (Text file, tab separated). A number of computer programs are available to do the calculations. They include: CHEMEQL: A comprehensive computer program for the calculation of thermodynamic equilibrium concentrations of species in homogeneous and heterogeneous systems. Many geochemical applications. JESS: All types of chemical equilibria can be modelled, including protonation, complex formation, redox, solubility and adsorption interactions. Includes an extensive database. MINEQL+: A chemical equilibrium modeling system for aqueous systems. Handles a wide range of pH, redox, solubility and sorption scenarios. PHREEQC: USGS software designed to perform a wide variety of low-temperature aqueous geochemical calculations, including reactive transport in one dimension.
MINTEQ: A chemical equilibrium model for the calculation of metal speciation, solubility equilibria etc. for natural waters. WinSGW: A Windows version of the SOLGASWATER computer program. Equilibrium chemistry Solutions
Solubility equilibrium
[ "Chemistry" ]
3,830
[ "Equilibrium chemistry", "Solutions", "Homogeneous chemical mixtures" ]
59,538
https://en.wikipedia.org/wiki/Monomorphism
In the context of abstract algebra or universal algebra, a monomorphism is an injective homomorphism. A monomorphism from to is often denoted with the notation . In the more general setting of category theory, a monomorphism (also called a monic morphism or a mono) is a left-cancellative morphism. That is, an arrow such that for all objects and all morphisms , Monomorphisms are a categorical generalization of injective functions (also called "one-to-one functions"); in some categories the notions coincide, but monomorphisms are more general, as in the examples below. In the setting of posets intersections are idempotent: the intersection of anything with itself is itself. Monomorphisms generalize this property to arbitrary categories. A morphism is a monomorphism if it is idempotent with respect to pullbacks. The categorical dual of a monomorphism is an epimorphism, that is, a monomorphism in a category C is an epimorphism in the dual category Cop. Every section is a monomorphism, and every retraction is an epimorphism. Relation to invertibility Left-invertible morphisms are necessarily monic: if l is a left inverse for f (meaning l is a morphism and ), then f is monic, as A left-invertible morphism is called a split mono or a section. However, a monomorphism need not be left-invertible. For example, in the category Group of all groups and group homomorphisms among them, if H is a subgroup of G then the inclusion is always a monomorphism; but f has a left inverse in the category if and only if H has a normal complement in G. A morphism is monic if and only if the induced map , defined by for all morphisms , is injective for all objects Z. Examples Every morphism in a concrete category whose underlying function is injective is a monomorphism; in other words, if morphisms are actually functions between sets, then any morphism which is a one-to-one function will necessarily be a monomorphism in the categorical sense. In the category of sets the converse also holds, so the monomorphisms are exactly the injective morphisms. The converse also holds in most naturally occurring categories of algebras because of the existence of a free object on one generator. In particular, it is true in the categories of all groups, of all rings, and in any abelian category. It is not true in general, however, that all monomorphisms must be injective in other categories; that is, there are settings in which the morphisms are functions between sets, but one can have a function that is not injective and yet is a monomorphism in the categorical sense. For example, in the category Div of divisible (abelian) groups and group homomorphisms between them there are monomorphisms that are not injective: consider, for example, the quotient map , where Q is the rationals under addition, Z the integers (also considered a group under addition), and Q/Z is the corresponding quotient group. This is not an injective map, as for example every integer is mapped to 0. Nevertheless, it is a monomorphism in this category. This follows from the implication , which we will now prove. If , where G is some divisible group, and , then . Now fix some . Without loss of generality, we may assume that (otherwise, choose −x instead). Then, letting , since G is a divisible group, there exists some such that , so . From this, and , it follows that Since , it follows that , and thus . This says that , as desired. To go from that implication to the fact that q is a monomorphism, assume that for some morphisms , where G is some divisible group. 
Then , where . (Since , and , it follows that ). From the implication just proved, . Hence q is a monomorphism, as claimed. Properties In a topos, every mono is an equalizer, and any map that is both monic and epic is an isomorphism. Every isomorphism is monic. Related concepts There are also useful concepts of regular monomorphism, extremal monomorphism, immediate monomorphism, strong monomorphism, and split monomorphism. A monomorphism is said to be regular if it is an equalizer of some pair of parallel morphisms. A monomorphism is said to be extremal if in each representation , where is an epimorphism, the morphism is automatically an isomorphism. A monomorphism is said to be immediate if in each representation , where is a monomorphism and is an epimorphism, the morphism is automatically an isomorphism. A monomorphism is said to be strong if for any epimorphism and any morphisms and such that , there exists a morphism such that and . A monomorphism is said to be split if there exists a morphism such that (in this case is called a left-sided inverse for ). Terminology The companion terms monomorphism and epimorphism were originally introduced by Nicolas Bourbaki; Bourbaki uses monomorphism as shorthand for an injective function. Early category theorists believed that the correct generalization of injectivity to the context of categories was the cancellation property given above. While this is not exactly true for monic maps, it is very close, so this has caused little trouble, unlike the case of epimorphisms. Saunders Mac Lane attempted to make a distinction between what he called monomorphisms, which were maps in a concrete category whose underlying maps of sets were injective, and monic maps, which are monomorphisms in the categorical sense of the word. This distinction never came into general use. Another name for monomorphism is extension, although this has other uses too. See also Embedding Nodal decomposition Subobject Notes References External links Morphisms Algebraic properties of elements
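In the category of finite sets, the left-cancellation property in the definition above can be verified by brute force, and it singles out exactly the injective maps. A minimal Python sketch (the sets, maps and set sizes are invented for the example):

```python
from itertools import product

def is_left_cancellable(f: dict, X: list, z_size: int = 2) -> bool:
    """Brute-force check of left-cancellation for f: X -> Y in finite Set,
    quantifying over all maps g1, g2 from a z_size-element set Z into X."""
    Z = range(z_size)
    maps_Z_to_X = list(product(X, repeat=z_size))   # a map g is a tuple with g(z) = g[z]
    for g1 in maps_Z_to_X:
        for g2 in maps_Z_to_X:
            if g1 != g2 and all(f[g1[z]] == f[g2[z]] for z in Z):
                return False    # f∘g1 = f∘g2 with g1 != g2: f is not monic
    return True

X = [0, 1, 2]
injective_f = {0: 'a', 1: 'b', 2: 'c'}
collapsing_f = {0: 'a', 1: 'a', 2: 'c'}   # not injective: 0 and 1 both map to 'a'

print(is_left_cancellable(injective_f, X))    # True  -- a monomorphism in Set
print(is_left_cancellable(collapsing_f, X))   # False -- a counterexample pair exists
```

In Set a single-element test object already suffices to detect any failure of injectivity; the sketch uses a two-element Z purely for illustration.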
Monomorphism
[ "Mathematics" ]
1,289
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Category theory", "Mathematical relations", "Morphisms" ]
59,539
https://en.wikipedia.org/wiki/Epimorphism
In category theory, an epimorphism is a morphism f : X → Y that is right-cancellative in the sense that, for all objects Z and all morphisms , Epimorphisms are categorical analogues of onto or surjective functions (and in the category of sets the concept corresponds exactly to the surjective functions), but they may not exactly coincide in all contexts; for example, the inclusion is a ring epimorphism. The dual of an epimorphism is a monomorphism (i.e. an epimorphism in a category C is a monomorphism in the dual category Cop). Many authors in abstract algebra and universal algebra define an epimorphism simply as an onto or surjective homomorphism. Every epimorphism in this algebraic sense is an epimorphism in the sense of category theory, but the converse is not true in all categories. In this article, the term "epimorphism" will be used in the sense of category theory given above. For more on this, see below. Examples Every morphism in a concrete category whose underlying function is surjective is an epimorphism. In many concrete categories of interest the converse is also true. For example, in the following categories, the epimorphisms are exactly those morphisms that are surjective on the underlying sets: Set: sets and functions. To prove that every epimorphism f: X → Y in Set is surjective, we compose it with both the characteristic function g1: Y → {0,1} of the image f(X) and the map g2: Y → {0,1} that is constant 1. Rel: sets with binary relations and relation-preserving functions. Here we can use the same proof as for Set, equipping {0,1} with the full relation {0,1}×{0,1}. Pos: partially ordered sets and monotone functions. If f : (X, ≤) → (Y, ≤) is not surjective, pick y0 in Y \ f(X) and let g1 : Y → {0,1} be the characteristic function of {y | y0 ≤ y} and g2 : Y → {0,1} the characteristic function of {y | y0 < y}. These maps are monotone if {0,1} is given the standard ordering 0 < 1. Grp: groups and group homomorphisms. The result that every epimorphism in Grp is surjective is due to Otto Schreier (he actually proved more, showing that every subgroup is an equalizer using the free product with one amalgamated subgroup); an elementary proof can be found in (Linderholm 1970). FinGrp: finite groups and group homomorphisms. Also due to Schreier; the proof given in (Linderholm 1970) establishes this case as well. Ab: abelian groups and group homomorphisms. K-Vect: vector spaces over a field K and K-linear transformations. Mod-R: right modules over a ring R and module homomorphisms. This generalizes the two previous examples; to prove that every epimorphism f: X → Y in Mod-R is surjective, we compose it with both the canonical quotient map g 1: Y → Y/f(X) and the zero map g2: Y → Y/f(X). Top: topological spaces and continuous functions. To prove that every epimorphism in Top is surjective, we proceed exactly as in Set, giving {0,1} the indiscrete topology, which ensures that all considered maps are continuous. HComp: compact Hausdorff spaces and continuous functions. If f: X → Y is not surjective, let y ∈ Y − fX. Since fX is closed, by Urysohn's Lemma there is a continuous function g1:Y → [0,1] such that g1 is 0 on fX and 1 on y. We compose f with both g1 and the zero function g2: Y → [0,1]. However, there are also many concrete categories of interest where epimorphisms fail to be surjective. A few examples are: In the category of monoids, Mon, the inclusion map N → Z is a non-surjective epimorphism. To see this, suppose that g1 and g2 are two distinct maps from Z to some monoid M. 
Then for some n in Z, g1(n) ≠ g2(n), so g1(−n) ≠ g2(−n). Either n or −n is in N, so the restrictions of g1 and g2 to N are unequal. In the category of algebras over commutative ring R, take R[N] → R[Z], where R[G] is the monoid ring of the monoid G and the morphism is induced by the inclusion N → Z as in the previous example. This follows from the observation that 1 generates the algebra R[Z] (note that the unit in R[Z] is given by 0 of Z), and the inverse of the element represented by n in Z is just the element represented by −n. Thus any homomorphism from R[Z] is uniquely determined by its value on the element represented by 1 of Z. In the category of rings, Ring, the inclusion map Z → Q is a non-surjective epimorphism; to see this, note that any ring homomorphism on Q is determined entirely by its action on Z, similar to the previous example. A similar argument shows that the natural ring homomorphism from any commutative ring R to any one of its localizations is an epimorphism. In the category of commutative rings, a finitely generated homomorphism of rings f : R → S is an epimorphism if and only if for all prime ideals P of R, the ideal Q generated by f(P) is either S or is prime, and if Q is not S, the induced map Frac(R/P) → Frac(S/Q) is an isomorphism (EGA IV 17.2.6). In the category of Hausdorff spaces, Haus, the epimorphisms are precisely the continuous functions with dense images. For example, the inclusion map Q → R, is a non-surjective epimorphism. The above differs from the case of monomorphisms where it is more frequently true that monomorphisms are precisely those whose underlying functions are injective. As for examples of epimorphisms in non-concrete categories: If a monoid or ring is considered as a category with a single object (composition of morphisms given by multiplication), then the epimorphisms are precisely the right-cancellable elements. If a directed graph is considered as a category (objects are the vertices, morphisms are the paths, composition of morphisms is the concatenation of paths), then every morphism is an epimorphism. Properties Every isomorphism is an epimorphism; indeed only a right-sided inverse is needed: if there exists a morphism j : Y → X such that fj = idY, then f: X → Y is easily seen to be an epimorphism. A map with such a right-sided inverse is called a split epi. In a topos, a map that is both a monic morphism and an epimorphism is an isomorphism. The composition of two epimorphisms is again an epimorphism. If the composition fg of two morphisms is an epimorphism, then f must be an epimorphism. As some of the above examples show, the property of being an epimorphism is not determined by the morphism alone, but also by the category of context. If D is a subcategory of C, then every morphism in D that is an epimorphism when considered as a morphism in C is also an epimorphism in D. However the converse need not hold; the smaller category can (and often will) have more epimorphisms. As for most concepts in category theory, epimorphisms are preserved under equivalences of categories: given an equivalence F : C → D, a morphism f is an epimorphism in the category C if and only if F(f) is an epimorphism in D. A duality between two categories turns epimorphisms into monomorphisms, and vice versa. The definition of epimorphism may be reformulated to state that f : X → Y is an epimorphism if and only if the induced maps are injective for every choice of Z. 
This in turn is equivalent to the induced natural transformation being a monomorphism in the functor category SetC. Every coequalizer is an epimorphism, a consequence of the uniqueness requirement in the definition of coequalizers. It follows in particular that every cokernel is an epimorphism. The converse, namely that every epimorphism be a coequalizer, is not true in all categories. In many categories it is possible to write every morphism as the composition of an epimorphism followed by a monomorphism. For instance, given a group homomorphism f : G → H, we can define the group K = im(f) and then write f as the composition of the surjective homomorphism G → K that is defined like f, followed by the injective homomorphism K → H that sends each element to itself. Such a factorization of an arbitrary morphism into an epimorphism followed by a monomorphism can be carried out in all abelian categories and also in all the concrete categories mentioned above in (though not in all concrete categories). Related concepts Among other useful concepts are regular epimorphism, extremal epimorphism, immediate epimorphism, strong epimorphism, and split epimorphism. An epimorphism is said to be regular if it is a coequalizer of some pair of parallel morphisms. An epimorphism is said to be extremal if in each representation , where is a monomorphism, the morphism is automatically an isomorphism. An epimorphism is said to be immediate if in each representation , where is a monomorphism and is an epimorphism, the morphism is automatically an isomorphism. An epimorphism is said to be strong if for any monomorphism and any morphisms and such that , there exists a morphism such that and . An epimorphism is said to be split if there exists a morphism such that (in this case is called a right-sided inverse for ). There is also the notion of homological epimorphism in ring theory. A morphism f: A → B of rings is a homological epimorphism if it is an epimorphism and it induces a full and faithful functor on derived categories: D(f) : D(B) → D(A). A morphism that is both a monomorphism and an epimorphism is called a bimorphism. Every isomorphism is a bimorphism but the converse is not true in general. For example, the map from the half-open interval [0,1) to the unit circle S1 (thought of as a subspace of the complex plane) that sends x to exp(2πix) (see Euler's formula) is continuous and bijective but not a homeomorphism since the inverse map is not continuous at 1, so it is an instance of a bimorphism that is not an isomorphism in the category Top. Another example is the embedding Q → R in the category Haus; as noted above, it is a bimorphism, but it is not bijective and therefore not an isomorphism. Similarly, in the category of rings, the map Z → Q is a bimorphism but not an isomorphism. Epimorphisms are used to define abstract quotient objects in general categories: two epimorphisms f1 : X → Y1 and f2 : X → Y2 are said to be equivalent if there exists an isomorphism j : Y1 → Y2 with j f1 = f2. This is an equivalence relation, and the equivalence classes are defined to be the quotient objects of X. Terminology The companion terms epimorphism and monomorphism were first introduced by Bourbaki. Bourbaki uses epimorphism as shorthand for a surjective function. Early category theorists believed that epimorphisms were the correct analogue of surjections in an arbitrary category, similar to how monomorphisms are very nearly an exact analogue of injections. 
Unfortunately this is incorrect; strong or regular epimorphisms behave much more closely to surjections than ordinary epimorphisms. Saunders Mac Lane attempted to create a distinction between epimorphisms, which were maps in a concrete category whose underlying set maps were surjective, and epic morphisms, which are epimorphisms in the modern sense. However, this distinction never caught on. It is a common mistake to believe that epimorphisms are either identical to surjections or that they are a better concept. Unfortunately this is rarely the case; epimorphisms can be very mysterious and have unexpected behavior. It is very difficult, for example, to classify all the epimorphisms of rings. In general, epimorphisms are their own unique concept, related to surjections but fundamentally different. See also List of category theory topics Monomorphism Notes References External links Morphisms Algebraic properties of elements
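The characteristic-function argument used above to show that epimorphisms in Set are surjective can be made concrete: for a non-surjective map, the characteristic function of the image and the constant function 1 agree after composition yet are different maps. A minimal Python sketch (the sets and the map are invented for the example):

```python
X = [1, 2, 3]
Y = ['a', 'b', 'c', 'd']
f = {1: 'a', 2: 'b', 3: 'c'}          # deliberately misses 'd', so f is not surjective

image = set(f.values())
g1 = {y: 1 if y in image else 0 for y in Y}   # characteristic function of f(X)
g2 = {y: 1 for y in Y}                        # constant 1

print(all(g1[f[x]] == g2[f[x]] for x in X))   # True:  g1∘f = g2∘f
print(g1 == g2)                               # False: g1 != g2, so f is not right-cancellable
```

Since two distinct maps become equal after composing with f, f fails right-cancellation and is not an epimorphism in Set, exactly as the argument sketched for Set indicates.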
Epimorphism
[ "Mathematics" ]
2,956
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations", "Category theory", "Morphisms" ]
59,545
https://en.wikipedia.org/wiki/Sagrada%20Fam%C3%ADlia
The Basílica i Temple Expiatori de la Sagrada Família, otherwise known as Sagrada Família, is a church under construction in the Eixample district of Barcelona, Catalonia, Spain. It is the largest unfinished Catholic church in the world. Designed by the Catalan architect Antoni Gaudí (1852–1926), in 2005 his work on Sagrada Família was added to an existing (1984) UNESCO World Heritage Site, "Works of Antoni Gaudí". On 7 November 2010, Pope Benedict XVI consecrated the church and proclaimed it a minor basilica. On 19 March 1882, construction of Sagrada Família began under architect Francisco de Paula del Villar. In 1883, when Villar resigned, Gaudí took over as chief architect, transforming the project with his architectural and engineering style, combining Gothic and curvilinear Art Nouveau forms. Gaudí devoted the remainder of his life to the project, and he is buried in the church's crypt. At the time of his death in 1926, less than a quarter of the project was complete. Relying solely on private donations, Sagrada Família's construction progressed slowly and was interrupted by the Spanish Civil War. In July 1936, anarchists from the FAI set fire to the crypt and broke their way into the workshop, partially destroying Gaudí's original plans. In 1939, Francesc de Paula Quintana took over site management, which was able to go on with the material that was saved from Gaudí's workshop and that was reconstructed from published plans and photographs. Construction resumed to intermittent progress in the 1950s. Advancements in technologies such as computer-aided design and computerised numerical control (CNC) have since enabled faster progress and construction passed the midpoint in 2010. In 2014, it was anticipated that the building would be completed by 2026, the centenary of Gaudí's death, but this schedule was threatened by work slowdowns caused by the 2020–2021 depths of the COVID-19 pandemic. In March 2024, an updated forecast reconfirmed a likely completion of the building in 2026, though the announcement stated that work on sculptures, decorative details and a controversial proposed stairway leading to what will eventually be the main entrance is expected to continue until 2034. Describing Sagrada Família, art critic Rainer Zerbst said "it is probably impossible to find a church building anything like it in the entire history of art", and Paul Goldberger describes it as "the most extraordinary personal interpretation of Gothic architecture since the Middle Ages". Though sometimes described as a cathedral, the basilica is not the cathedral church of the Archdiocese of Barcelona; that title belongs to the Cathedral of the Holy Cross and Saint Eulalia (Barcelona Cathedral). History Origins Sagrada Família was inspired by a bookseller, , founder of Asociación Espiritual de Devotos de San José (Spiritual Association of Devotees of St. Joseph). After a visit to the Vatican in 1872, Bocabella returned from Italy with the intention of building a church inspired by the basilica at Loreto. The apse crypt of the church, funded by donations, was begun 19 March 1882, on the festival of St. Joseph, to the design of the architect Francisco de Paula del Villar, whose plan was for a Gothic revival church of a standard form. The apse crypt was completed before Villar's resignation on 18 March 1883, when Antoni Gaudí assumed responsibility for its design, which he changed radically. Gaudi began work on the church in 1883 but was not appointed Architect Director until 1884. 
20th century On the subject of the extremely long construction period, Gaudí is said to have remarked: "My client is not in a hurry." When Gaudí died in 1926, the basilica was between 15 and 25 percent complete. After Gaudí's death, work continued under the direction of his main disciple Domènec Sugrañes i Gras until interrupted by the Spanish Civil War in 1936. Parts of the unfinished basilica and Gaudí's models and workshop were destroyed during the war. The present design is based on reconstructed versions of the plans that were burned in a fire as well as on modern adaptations. Since 1940, the architects Francesc Quintana, Isidre Puig Boada, Lluís Bonet i Garí and Francesc Cardoner have carried on the work. The illumination was designed by Carles Buïgas. The director until 2012 was the son of Lluís Bonet, Jordi Bonet i Armengol. Armengol began introducing computers into the design and construction process in the 1980s. 21st century The central nave vaulting was completed in 2000 and the main tasks since then have been the construction of the transept vaults and apse. In 2002, the Sagrada Família Schools building was relocated from the eastern corner of the site to the southern corner, and began housing an exhibition. The school was originally designed by Gaudí in 1909 for the children of the construction workers. Work subsequently concentrated on the crossing and supporting structure for the main steeple of Jesus Christ as well as the southern enclosure of the central nave, which will become the Glory façade. Computer-aided design technology has allowed stone to be shaped off-site by a CNC milling machine, whereas in the 20th century the stone was carved by hand. In 2008, some renowned Catalan architects advocated halting construction to respect Gaudí's original designs, which, although they were not exhaustive and were partially destroyed, have been partially reconstructed in recent years. Since 2013, AVE high-speed trains have passed near Sagrada Família through a tunnel that runs beneath the centre of Barcelona. The tunnel's construction, which began on 26 March 2010, was controversial. The Ministry of Public Works of Spain claimed the project posed no risk to the church. Sagrada Família engineers and architects disagreed, saying there was no guarantee that the tunnel would not affect the stability of the building. The Board of the Sagrada Família and the neighborhood association (AVE by the Coast) led a campaign against this route for the AVE, without success. In October 2010, the tunnel boring machine reached the church underground under the location of the building's principal façade. Service through the tunnel was inaugurated on 8 January 2013. Track in the tunnel makes use of a system by Edilon Sedra in which the rails are embedded in an elastic material to dampen vibrations. The main nave was covered and an organ installed in mid-2010, allowing the still-unfinished building to be used for liturgies. The church was consecrated by Pope Benedict XVI on 7 November 2010 in front of a congregation of 6,500 people. A further 50,000 people followed the consecration Mass from outside the basilica, where more than 100 bishops and 300 priests were on hand to distribute Holy Communion. In 2012, Barcelona-born Jordi Faulí i Oller took over as architect of the project. Mark Burry of New Zealand serves as Executive Architect and Researcher. Sculptures by J. Busquets, Etsuro Sotoo and Josep Maria Subirachs decorate the fantastical façades.
Chief architect Jordi Faulí announced in October 2015 that construction was 70 percent complete and had entered its final phase of raising six immense steeples. The steeples and most of the church's structure were planned to be completed by 2026, the centennial of Gaudí's death; as of a 2017 estimate, decorative elements should be complete by 2030 or 2032. Visitor entrance fees of €15 to €20 finance the annual construction budget of €25million. Completion of the structure will use post-tensioned stone. Starting on 9 July 2017, an international mass is celebrated at the basilica every Sunday and holy day of obligation, at 9a.m., and is open to the public (until the church is full). Occasionally, Mass is celebrated at other times, where attendance requires an invitation. When masses are scheduled, instructions to obtain an invitation are posted on the basilica's website. In addition, visitors may pray in the chapel of the Blessed Sacrament and Penitence. The stone initially used in its construction came from the Montserrat mountain, but it became clear that as quarrying there went deeper, the stone was increasingly fragile and an alternative source had to be found. Since 2018 stone of the type needed to complete the construction has been sourced from the Withnell Quarry in Brinscall, near Chorley, England. Historical photographs of Sagrada Família Incidents On 19 April 2011, an arsonist started a small fire in the sacristy which forced the evacuation of tourists and construction workers. The sacristy was damaged, and the fire took 45 minutes to contain. On 11 March 2020, during the COVID-19 pandemic in Spain, construction temporarily stopped and the basilica was closed. This was the first time the construction had been halted since the Spanish Civil War. The Gaudí House Museum in Park Güell was also closed. The basilica reopened, initially to key workers, on 4 July 2020. Local residents have concerns about plans to build a large stairway leading up to the basilica's main entrance, unfinished at the time, which could require the demolition of three city blocks: the homes to 1,000 people as well as some businesses. Design The style of Sagrada Família is variously likened to Spanish Late Gothic, Catalan Modernism or Art Nouveau. While the style falls within the Art Nouveau period, Nikolaus Pevsner points out that, along with Charles Rennie Mackintosh in Glasgow, Scotland, Gaudí carried the Art Nouveau style far beyond its usual application as a surface decoration. Plan While never a cathedral, Sagrada Família was planned from the outset to be a large building, comparable in size to a cathedral. Its ground-plan has obvious links to earlier Spanish cathedrals such as Burgos Cathedral, León Cathedral and Seville Cathedral. In common with Catalan and many other European Gothic cathedrals, Sagrada Família is short in comparison to its width, and has a great complexity of parts, which include double aisles, an ambulatory with a chevet of seven apsidal chapels, a multitude of steeples and three portals, each widely different in structure as well as ornament. Where it is common for cathedrals in Spain to be surrounded by numerous chapels and ecclesiastical buildings, the layout of Sagrada Família has an unusual feature: a covered passage or cloister which forms a rectangle enclosing the church and passing through the narthex of each of its three portals. 
With this peculiarity aside, the plan, influenced by Villar's crypt, barely hints at the complexity of Gaudí's design or its deviations from traditional church architecture. There are no exact right angles to be seen inside or outside the church, and few straight lines in the design. Spires Gaudí's original design calls for a total of eighteen spires, representing in ascending order of height the Twelve Apostles, the four Evangelists, the Virgin Mary, and, tallest of all, Jesus Christ. Thirteen spires had been completed , corresponding to four apostles at the Nativity façade, four apostles at the Passion façade, the four Evangelists, and the Virgin Mary. The Evangelists' spires are surmounted by sculptures of their traditional symbols: a winged bull (Saint Luke), a winged man (Saint Matthew), an eagle (Saint John), and a winged lion (Saint Mark). The central spire of Jesus Christ is to be surmounted by a giant cross; its total height () will be less than that of Montjuïc hill in Barcelona, as Gaudí believed that his creation should not surpass God's. The lower spires are surmounted by communion hosts with sheaves of wheat and chalices with bunches of grapes, representing the Eucharist. Plans call for tubular bells to be placed within the spires, driven by the force of the wind, and driving sound down into the interior of the church. Gaudí performed acoustic studies to achieve the appropriate acoustic results inside the temple. However, only one bell is currently in place. The completion of the Jesus Christ spire will make Sagrada Família the tallest church building in the world— taller than the current record-holder, Ulm Minster, which is at its highest point. On 29 November 2021, a twelve-pointed illuminated crystal star was installed on one of the main towers of the basilica dedicated to the Virgin Mary. The construction makes use of post-tensioned stone panels, which are pre-assembled before incorporation into the main structure; using this method has significant structural and operational benefits. Façades The church is designed to have three grand façades: the Nativity façade to the east, the Passion façade to the west, and the Glory façade to the south (incomplete). The Nativity façade was built before work was interrupted in 1935 and bears the most direct Gaudí influence. The Passion façade was built according to the design that Gaudi created in 1917. The construction began in 1954, and the steeples, built over the elliptical plan, were finished in 1976. It is especially striking for its spare, gaunt, tormented characters, including emaciated figures of Christ being scourged at the pillar; and Christ on the Cross. These controversial designs are the work of Josep Maria Subirachs. The Glory façade, on which construction began in 2002, will be the largest and most monumental of the three and will represent one's ascension to God. It will also depict various scenes such as Hell, Purgatory, and will include elements such as the seven deadly sins and the seven heavenly virtues. Nativity Façade Constructed between 1893 and 1936, the Nativity façade was the first façade to be completed. Dedicated to the birth of Jesus, it is decorated with scenes reminiscent of elements of life. Characteristic of Gaudí's naturalistic style, the sculptures are ornately arranged and decorated with scenes and images from nature, each a symbol in its own manner. 
For instance, the three porticos are separated by two large columns, and at the base of each lies a turtle or a tortoise (one to represent the land and the other the sea; each are symbols of time as something set in stone and unchangeable). In contrast to the figures of turtles and their symbolism, two chameleons can be found at either side of the façade and are symbolic of change. The façade faces the rising sun to the northeast, a symbol for the birth of Christ. It is divided into three porticos, each of which represents a theological virtue (Hope, Faith and Charity). The Tree of Life rises above the door of Jesus in the portico of Charity. Four steeples complete the façade and are each dedicated to a Saint (Matthias, Barnabas, Jude the Apostle, and Simon the Zealot). Originally, Gaudí intended for this façade to be polychromed, for each archivolt to be painted with a wide array of colours. He wanted every statue and figure to be painted. In this way the figures of humans would appear as much alive as the figures of plants and animals. Gaudí chose this façade to embody the structure and decoration of the whole church. He was well aware that he would not finish the church and that he would need to set an artistic and architectural example for others to follow. He also chose for this façade to be the first on which to begin construction and for it to be, in his opinion, the most attractive and accessible to the public. He believed that if he had begun construction with the Passion Façade, one that would be hard and bare (as if made of bones), before the Nativity Façade, people would have withdrawn at the sight of it. Some of the statues were destroyed in 1936 during the Spanish Civil War, and subsequently were reconstructed by the Japanese artist Etsuro Sotoo. Passion Façade In contrast to the highly decorated Nativity Façade, the Passion Façade is austere, plain and simple, with ample bare stone, and is carved with harsh straight lines to resemble the bones of a skeleton. Dedicated to the Passion of Christ, the suffering of Jesus during his crucifixion, the façade was intended to portray the sins of man. Construction began in 1954, following the drawings and instructions left by Gaudí for future architects and sculptors. The steeples were completed in 1976, and in 1987 a team of sculptors, headed by Josep Maria Subirachs, began work sculpting the various scenes and details of the façade. They aimed to give a rigid, angular form to provoke a dramatic effect. Gaudí intended for this façade to strike fear into the onlooker. He wanted to "break" arcs and "cut" columns, and to use the effect of chiaroscuro (dark angular shadows contrasted by harsh rigid light) to further show the severity and brutality of Christ's sacrifice. Facing the setting sun, indicative and symbolic of the death of Christ, the Passion Façade is supported by six large and inclined columns, designed to resemble strained muscles. Above there is a pyramidal pediment, made up of eighteen bone-shaped columns, which culminate in a large cross with a crown of thorns. Each of the four steeples is dedicated to an apostle (James, Thomas, Philip, and Bartholomew) and, like the Nativity Façade, there are three porticos, each representing the theological virtues, though in a much different light. The scenes sculpted into the façade may be divided into three levels, which ascend in an S form and reproduce the Stations of the Cross (Via Crucis of Christ). 
The lowest level depicts scenes from Jesus' last night before the crucifixion, including the Last Supper, Kiss of Judas, Ecce homo, and the Sanhedrin trial of Jesus. The middle level portrays the Calvary, or Golgotha, of Christ, and includes The Three Marys, Saint Longinus, Saint Veronica, and a hollow-face illusion of Christ on the Veil of Veronica. In the third and final level the Death, Burial and the Resurrection of Christ can be seen. A bronze figure situated on a bridge creating a link between the steeples of Saint Bartholomew and Saint Thomas represents the Ascension of Jesus. The façade contains a magic square based on the magic square in the 1514 print Melencolia I. The square is rotated and one number in each row and column is reduced by one, so the rows and columns add up to 33 instead of the standard 34 for a 4x4 magic square. Glory Façade The largest and most striking of the façades will be the Glory Façade, on which construction began in 2002. It will be the principal façade and will offer access to the central nave. Dedicated to the Celestial Glory of Jesus, it represents the road to God: Death, Final Judgment, and Glory, while Hell is left for those who deviate from God's will. Aware that he would not live long enough to see this façade completed, Gaudí made a model which was demolished in 1936, whose original fragments were used as the basis for the development of the design for the façade. The completion of this façade may require the partial demolition of the block with buildings across the Carrer de Mallorca. The decision was expected to be proposed in May 2023. To reach the Glory Portico, the large staircase will lead over the underground passage built over Carrer de Mallorca with the decoration representing Hell and vice. On other projects, Carrer de Mallorca will have to go underground. It will be decorated with demons, idols, false gods, heresy and schisms, etc. Purgatory and death will also be depicted, the latter using tombs along the ground. The portico will have seven large columns dedicated to gifts of the Holy Spirit. At the base of the columns there will be representations of the seven deadly sins, and at the top, the seven heavenly virtues. Gifts: wisdom, understanding, counsel, fortitude, knowledge, piety and fear of the Lord. Sins: greed, lust, pride, gluttony, sloth, wrath, envy. Virtues: kindness, diligence, patience, charity, temperance, humility, chastity. This façade will have five doors corresponding to the five naves of the temple, with the central one having a triple entrance, that will give the Glory Façade a total seven doors representing the sacraments: Baptism Confirmation Eucharist Penance Holy orders Marriage Anointing of the sick In September 2008, the doors of the Glory façade, by Subirachs, were installed. These central doors bear the text of the Our Father prayer in Catalan in high relief, accompanied with the words "Our Father" and "Give us this day our daily bread" inscribed in fifty different languages. The handles of the door are the letters "A" and "G," forming the initials of Antoni Gaudí, within the phrase ("lead us not into temptation"). Interior The church plan is that of a Latin cross with five aisles. The central nave vaults reach while the side nave vaults reach . The transept has three aisles. The columns are on a grid. 
However, the columns of the apse, resting on del Villar's foundation, do not adhere to the grid, requiring a section of columns of the ambulatory to transition to the grid thus creating a horseshoe pattern to the layout of those columns. The crossing rests on the four central columns of porphyry supporting a great hyperboloid surrounded by two rings of twelve hyperboloids (currently under construction). The central vault reaches . The apse is capped by a hyperboloid vault reaching . Gaudí intended that a visitor standing at the main entrance be able to see the vaults of the nave, crossing, and apse, thus the graduated increase in vault loft. There are gaps in the floor of the apse, providing a view into the crypt below. The columns of the interior are a unique Gaudí design. Besides branching to support their load, their ever-changing surfaces are the result of the intersection of various geometric forms. The simplest example is that of a square base evolving into an octagon as the column rises, then a sixteen-sided form, and eventually to a circle. This effect is the result of a three-dimensional intersection of helicoidal columns (for example a square cross-section column twisting clockwise and a similar one twisting counterclockwise). Essentially none of the interior surfaces are flat; the ornamentation is comprehensive and rich, consisting in large part of abstract shapes which combine smooth curves and jagged points. Even detail-level work such as the iron railings for balconies and stairways are full of curvaceous elaboration. Organ In 2010 an organ was installed in the chancel by the Blancafort Orgueners de Montserrat organ builders. The instrument has 26 stops (1,492 pipes) on two manuals and a pedalboard. To overcome the unique acoustical challenges posed by the church's architecture and vast size, several additional organs will be installed at various points within the building. These instruments will be playable separately (from their own individual consoles) and simultaneously (from a single mobile console), yielding an organ of some 8,000 pipes when completed. Geometric details The steeples on the Nativity façade are crowned with geometrically shaped tops that are reminiscent of Cubism (they were finished around 1930), and the intricate decoration is contemporary to the style of Art Nouveau, but Gaudí's unique style drew primarily from nature, not other artists or architects, and resists categorization. Gaudí used hyperboloid structures in later designs for Sagrada Família (more obviously after 1914). However, there are a few places on the nativity façade—a design not equated with Gaudí's ruled-surface design—where the hyperboloid appears. For example, all around the scene with the pelican, there are numerous examples (including the basket held by one of the figures). There is a hyperboloid adding structural stability to the cypress tree (by connecting it to the bridge). Finally, the "bishop's mitre" spires are capped with hyperboloid structures. In his later designs, ruled surfaces are prominent in the nave's vaults and windows and the surfaces of the Passion Façade. Symbolism Themes throughout the decoration include words from the liturgy. 
The steeples are decorated with words such as "Hosanna", "Excelsis", and "Sanctus"; the great doors of the Passion façade reproduce excerpts of the Passion of Jesus from the New Testament in various languages, mainly Catalan; and the Glory façade is to be decorated with the words from the Apostles' Creed, while its main door reproduces the entire Lord's Prayer in Catalan, surrounded by multiple variations of "Give us this day our daily bread" in other languages. The three entrances symbolize the three virtues: Faith, Hope and Love. Each of them is also dedicated to a part of Christ's life. The Nativity Façade is dedicated to his birth; it also has a cypress tree which symbolizes the tree of life. The Glory Façade is dedicated to Christ's period of glory. The Passion Façade is symbolic of Christ's suffering. The apse steeple bears the Latin text of the Hail Mary prayer. Areas of the sanctuary will be designated to represent various concepts, such as saints, virtues and sins, and secular concepts such as regions, presumably with decoration to match. Burials Josep Maria Bocabella Antoni Gaudí Appraisal The art historian Nikolaus Pevsner, writing in the 1960s, referred to Gaudí's buildings as growing "like sugar loaves and anthills" and described the ornamenting of buildings with shards of broken pottery as possibly "bad taste" but handled with vitality and "ruthless audacity". The building's design itself has been polarizing. Assessments by Gaudí's fellow architects were generally positive; Louis Sullivan greatly admired it, describing Sagrada Família as the "greatest piece of creative architecture in the last twenty-five years. It is spirit symbolised in stone!" Walter Gropius praised Sagrada Família, describing the building's walls as "a marvel of technical perfection". Time magazine called it "sensual, spiritual, whimsical, exuberant". However, author and critic George Orwell, mistakenly referring to it as a cathedral, called it "one of the most hideous buildings in the world". Author James A. Michener called it "one of the strangest-looking serious buildings in the world", and British historian Gerald Brenan stated of the building: "Not even in the European architecture of the period can one discover anything so vulgar or pretentious." The building's distinctive silhouette has nevertheless become symbolic of Barcelona itself, drawing an estimated 3 million visitors annually. World Heritage status In 1984, UNESCO granted World Heritage Site designations to three Gaudí buildings in Barcelona, though not yet including Sagrada Família, under the collective designation "Works of Antoni Gaudí No 320 bis" (items 320-001 to 320-003), testifying "to Gaudí's exceptional creative contribution to the development of architecture and building technology", "having represented el Modernisme of Catalonia" and "anticipated and influenced many of the forms and techniques that were relevant to the development of modern construction in the 20th century". In 2005, UNESCO extended the inscription for Works of Antoni Gaudí No 320 bis to include four additional buildings in Barcelona, with item 320-005 listed as two specific sections of Sagrada Família: the Crypt and the Nativity façade. Visitor access Visitors can access the Nave, Crypt, Museum, Shop, and the Passion and Nativity steeples. Entrance to either of the steeples requires a reservation and advance purchase of a ticket. Access is possible only by lift (elevator) and a short walk up the remainder of the steeples to the bridge between the steeples.
Descent is via a very narrow spiral staircase of over 300 steps. There is a posted caution for those with medical conditions. As of June 2017, online ticket purchase has been available. As of August 2010, there had been a service whereby visitors could buy an entry code either at Servicaixa ATM kiosks (part of CaixaBank) or online. International masses The Archdiocese of Barcelona holds an international mass at the Basilica of the Sagrada Família every Sunday and on holy days of obligation. Date and time: Every Sunday and on holy days of obligation at 9 am. There is no charge for attending mass but capacity is limited. Visitors are asked to dress appropriately and behave respectfully. Funding and building permit Construction on Sagrada Família is not supported by any government or official church sources. Private patrons funded the initial stages. Money from tickets purchased by tourists is now used to pay for the work, and private donations are accepted. The construction budget for 2009 was €18 million. In October 2018, Sagrada Família trustees agreed to pay city authorities €36 million for a building permit, after 136 years of unlicensed construction. Most of the funds would be directed to improve the access between the church and the Barcelona Metro. The permit was issued by the city on 7 June 2019. See also List of Catholic basilicas List of Gaudí buildings List of Modernista buildings in Barcelona Sagrada Família (Barcelona Metro) Notes References Bibliography Further reading External links Works of Antoni Gaudí UNESCO Collection on Google Arts and Culture Gaudí, Sagrada Família (video), Smarthistory Antoni Gaudí buildings Art Nouveau church buildings in Spain Articles containing video clips Basilica churches in Spain Bien de Interés Cultural landmarks in the Province of Barcelona Buildings and structures under construction in Spain Eixample Hyperboloid structures Mathematics and art Modernisme architecture in Barcelona Roman Catholic churches in Barcelona Skyscrapers in Barcelona Tourist attractions in Barcelona Visionary environments Votive churches World Heritage Sites in Catalonia
Sagrada Família
[ "Technology" ]
6,213
[ "Structural system", "Hyperboloid structures" ]
59,546
https://en.wikipedia.org/wiki/Dial-up%20Internet%20access
Dial-up Internet access is a form of Internet access that uses the facilities of the public switched telephone network (PSTN) to establish a connection to an Internet service provider (ISP) by dialing a telephone number on a conventional telephone line which could be connected using an RJ-11 connector. Dial-up connections use modems to decode audio signals into data to send to a router or computer, and to encode signals from the latter two devices to send to another modem at the ISP. Dial-up Internet reached its peak popularity during the dot-com bubble with the likes of ISPs such as Sprint, EarthLink, MSN Dial-up, NetZero, Prodigy, and America Online (more commonly known as AOL). This was in large part because broadband Internet did not become widely used until well into the 2000s. Since then, most dial-up access has been replaced by broadband. History In 1979, Tom Truscott and Jim Ellis, graduates of Duke University, created an early predecessor to dial-up Internet access called the Usenet. The Usenet was a UNIX based system that used a dial-up connection to transfer data through telephone modems. Dial-up Internet access has existed since the 1980s via public providers such as NSFNET-linked universities in the United States. In the United Kingdom, JANET linked academic users, including a connection to the ARPANET via University College London, while Brunel University and the University of Kent offered dial-up UUCP to non-academic users in the late 1980s. Commercial dial-up Internet access was first offered in 1992 by Sprint in the United States and by Pipex in the United Kingdom. After the introduction of commercial broadband in the late 1990s, dial-up became less popular. In the United States, the availability of dial-up Internet access dropped from 40% of Americans in the early 2000s to 3% in the early 2010s. It is still used where other forms are not available or where the cost is too high, as in some rural or remote areas. Modems Because there was no technology to allow different carrier signals on a telephone line at the time, dial-up Internet access relied on using audio communication. A modem would take the digital data from a computer, modulate it into an audio signal and send it to a receiving modem. This receiving modem would demodulate the signal from analogue noise, back into digital data for the computer to process. The simplicity of this arrangement meant that people would be unable to use their phone line for verbal communication until the Internet call was finished. The Internet speed using this technology can drop to 21.6 kbit/s or less. Poor condition of the telephone line, high noise level and other factors all affect dial-up speed. For this reason, it is popularly called the 21600 Syndrome. Availability Dial-up connections to the Internet require no additional infrastructure other than the telephone network and the modems and servers needed to make and answer the calls. Because telephone access is widely available, dial-up is often the only choice available for rural or remote areas, where broadband installations are not prevalent due to low population density and high infrastructure cost. A 2008 Pew Research Center study stated that only 10% of US adults still used dial-up Internet access. The study found that the most common reason for retaining dial-up access was high broadband prices. Users cited lack of infrastructure as a reason less often than stating that they would never upgrade to broadband. That number had fallen to 6% by 2010, and to 3% by 2013. 
A survey conducted in 2018 estimated that 0.3% of Americans were using dial-up by 2017. The CRTC estimated that there were 336,000 Canadian dial-up users in 2010. Replacement by broadband Broadband Internet access via cable, digital subscriber line, wireless broadband, mobile broadband, satellite and FTTx has replaced dial-up access in many parts of the world. Broadband connections typically offer speeds of 700 kbit/s or higher for two-thirds more than the price of dial-up on average. In addition, broadband connections are always on, thus avoiding the need to connect and disconnect at the start and end of each session. Broadband does not require the exclusive use of a phone line, and thus one can access the Internet and at the same time make and receive voice phone calls without having a second phone line. However, many rural areas remain without high-speed Internet, despite the eagerness of potential customers. This can be attributed to low population density, remote location, or sometimes ISPs' lack of interest due to little chance of profitability and the high cost of building the required infrastructure. Some dial-up ISPs have responded to the increased competition by lowering their rates and making dial-up an attractive option for those who merely want email access or basic Web browsing. Dial-up has seen a significant fall in usage and may eventually cease to exist as more users switch to broadband. In 2013, only about 3% of the U.S. population used dial-up, compared to 30% in 2000. One contributing factor is the bandwidth requirements of newer computer programs, like operating systems and antivirus software, which automatically download sizeable updates in the background when a connection to the Internet is first made. These background downloads can take several minutes or longer and, until all updates are completed, they can severely impact the amount of bandwidth available to other applications like Web browsers. Since an "always on" broadband connection is the norm expected by most newer applications being developed, this automatic background downloading trend is expected to continue to eat away at dial-up's available bandwidth to the detriment of dial-up users' applications. Many newer websites also now assume broadband speeds as the norm, and may drop (time out) slower dial-up connections to free up communication resources. On websites that are designed to be more dial-up friendly, use of a reverse proxy prevents dial-ups from being dropped as often, but can introduce long wait periods for dial-up users caused by the buffering used by a reverse proxy to bridge the different data rates. Despite the rapid decline, dial-up Internet still exists in some rural areas, and many areas of developing and underdeveloped nations, although wireless and satellite broadband are providing faster connections in many rural areas where fibre or copper may be uneconomical. In 2010, it was estimated that there were 800,000 dial-up users in the UK. BT turned off its dial-up service in 2013. In 2012, it was estimated that 7% of Internet connections in New Zealand were dial-up. One NZ (formerly Vodafone) turned off its dial-up service in 2021. Performance Modern dial-up modems typically have a maximum theoretical transfer speed of 56 kbit/s (using the V.90 or V.92 protocol), although in most cases, 40–50 kbit/s is the norm. Factors such as phone line noise as well as the quality of the modem itself play a large part in determining connection speeds. 
Some connections may be as low as 20 kbit/s in extremely noisy environments, such as in a hotel room where the phone line is shared with many extensions, or in a rural area, many kilometres from the phone exchange. Other factors such as long loops, loading coils, pair gain, electric fences (usually in rural locations), and digital loop carriers can also slow connections to 20 kbit/s or lower. Note that the values given are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines). Analog telephone lines are digitally switched and transported inside a Digital Signal 0 once reaching the telephone company's equipment. Digital Signal 0 is 64 kbit/s and reserves 8 kbit/s for signaling information; therefore a 56 kbit/s connection is the highest that will ever be possible with analog phone lines. Dial-up connections usually have latency as high as 150 ms or even more, higher than many forms of broadband, such as cable or DSL, but typically less than satellite connections. Longer latency can make video conferencing and online gaming difficult, if not impossible. An increasing amount of Internet content such as streaming media will not work at dial-up speeds. Video games released from the mid-1990s to the mid-2000s that utilized Internet access, such as EverQuest, Red Faction, Warcraft 3, Final Fantasy XI, Phantasy Star Online, Guild Wars, Unreal Tournament, Halo: Combat Evolved, Audition, Quake 3: Arena, Starsiege: Tribes and Ragnarok Online, accommodated 56k dial-up with limited data transfer between the game servers and the user's personal computer. The first consoles to provide Internet connectivity, the Dreamcast and PlayStation 2, supported dial-up as well as broadband. The GameCube could use dial-up and broadband connections, but this was used in very few games and required a separate adapter. The original Xbox exclusively required a broadband connection. Many computer and video games released since 2006 do not even include the option to use dial-up. However, there are exceptions to this, such as Vendetta Online, which can still run on a dial-up modem. Using compression to exceed 56k The V.42, V.42bis and V.44 standards allow modems to accept uncompressed data at a rate faster than the line rate. These algorithms use data compression to achieve higher throughput. For instance, a 53.3 kbit/s connection with V.44 can transmit up to 53.3 × 6 = 320 kbit/s if the offered data stream can be compressed that much. However, the compression ratio varies considerably. ZIP archives, JPEG images, MP3, video, etc. are already compressed. A modem might be sending compressed files at approximately 50 kbit/s, uncompressed files at 160 kbit/s, and pure text at 320 kbit/s, or any rate in this range. Compression by the ISP As telephone-based Internet lost popularity by the mid-2000s, some Internet service providers such as TurboUSA, Netscape, CdotFree, and NetZero started using data compression to increase the perceived speed. As an example, EarthLink advertises "surf the Web up to 7x faster" using a compression program on images, text/html, and SWF flash animations prior to transmission across the phone line. The pre-compression operates much more efficiently than the on-the-fly compression of V.44 modems. Typically, website text is compacted to 5%, thus increasing effective throughput to approximately 1000 kbit/s, and JPEG/GIF/PNG images are lossy-compressed to 15–20%, increasing effective throughput up to 300 kbit/s. 
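A rough sketch of the effective-throughput arithmetic used in this section (a minimal Python illustration; the compression ratios are simply the approximate figures quoted above, not measurements):

```python
def effective_throughput(line_rate_kbps: float, compression_ratio: float) -> float:
    """Effective throughput when the payload shrinks to `compression_ratio` of its size.

    For example, text compacted to 5% of its original size arrives about
    1 / 0.05 = 20 times faster than the raw line rate alone would allow.
    """
    return line_rate_kbps / compression_ratio

# V.44 modem compression on highly compressible text (roughly 6:1):
print(effective_throughput(53.3, 1 / 6))   # ~320 kbit/s

# ISP pre-compression, using the approximate ratios quoted above:
print(effective_throughput(50, 0.05))      # website text compacted to 5% -> ~1000 kbit/s
print(effective_throughput(50, 0.175))     # images at 15-20% -> roughly 300 kbit/s
```

Already-compressed payloads (ZIP archives, JPEG, MP3) have a ratio close to 1, so they move at roughly the raw line rate.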
The drawback of this approach is a loss in quality, where the graphics acquire compression artifacts, taking on a blurry or colorless appearance. However, the transfer speed is dramatically improved. If desired, the user may choose to view uncompressed images instead, but at a much slower load rate. Since streaming music and video are already compressed at the source, they are typically passed by the ISP unaltered. Usage in other devices Other devices, such as satellite receivers and digital video recorders (such as TiVo), have also used a dial-up connection through a household phone socket. This connection allowed the device to download data on request and to report usage (e.g. ordering pay-per-view) to the service provider. This feature did not require an Internet service provider account – instead, the device's internal modem dialed the server of the service provider directly. These devices may experience difficulties when operating on a VoIP line because the compression could alter the modem signal. Later, these devices moved to using an Ethernet connection to the user's Internet router, which became a more convenient approach due to the growth in popularity of broadband. See also Registered jack Ascend Communications made equipment for Dial-Up ISPs References American inventions Internet access Web 1.0 Obsolete technologies
Dial-up Internet access
[ "Technology" ]
2,503
[ "Internet access", "IT infrastructure" ]
59,561
https://en.wikipedia.org/wiki/Enterprise%20Objects%20Framework
The Enterprise Objects Framework, or simply EOF, was introduced by NeXT in 1994 as a pioneering object-relational mapping product for its NeXTSTEP and OpenStep development platforms. EOF abstracts the process of interacting with a relational database by mapping database rows to Java or Objective-C objects. This largely relieves developers from writing low-level SQL code. EOF enjoyed some niche success in the mid-1990s among financial institutions who were attracted to the rapid application development advantages of NeXT's object-oriented platform. Since Apple Inc's merger with NeXT in 1996, EOF has evolved into a fully integrated part of WebObjects, an application server also originally from NeXT. Many of the core concepts of EOF re-emerged as part of Core Data, which further abstracts the underlying data formats to allow it to be based on non-SQL stores. History In the early 1990s NeXT Computer recognized that connecting to databases was essential to most businesses and yet also potentially complex. Every data source has a different data-access language (or API), driving up the costs to learn and use each vendor's product. The NeXT engineers wanted to apply the advantages of object-oriented programming, by getting objects to "talk" to relational databases. As the two technologies are very different, the solution was to create an abstraction layer, insulating developers from writing the low-level procedural code (SQL) specific to each data source. The first attempt came in 1992 with the release of Database Kit (DBKit), which wrapped an object-oriented framework around any database. Unfortunately, NEXTSTEP at the time was not powerful enough and DBKit had serious design flaws. NeXT's second attempt came in 1994 with the Enterprise Objects Framework (EOF) version 1, a complete rewrite that was far more modular and OpenStep compatible. EOF 1.0 was the first product released by NeXT using the Foundation Kit and introduced autoreleased objects to the developer community. The development team at the time was only four people: Jack Greenfield, Rich Williamson, Linus Upson and Dan Willhite. EOF 2.0, released in late 1995, further refined the architecture, introducing the editing context. At that point, the development team consisted of Dan Willhite, Craig Federighi, Eric Noyau and Charly Kleissner. EOF achieved a modest level of popularity in the financial programming community in the mid-1990s, but it would come into its own with the emergence of the World Wide Web and the concept of web applications. It was clear that EOF could help companies plug their legacy databases into the Web without any rewriting of that data. With the addition of frameworks to do state management, load balancing and dynamic HTML generation, NeXT was able to launch the first object-oriented Web application server, WebObjects, in 1996, with EOF at its core. In 2000, Apple Inc. (which had merged with NeXT) officially dropped EOF as a standalone product, meaning that developers would be unable to use it to create desktop applications for the forthcoming Mac OS X. It would, however, continue to be an integral part of a major new release of WebObjects. WebObjects 5, released in 2001, was significant for the fact that its frameworks had been ported from their native Objective-C programming language to the Java language. Critics of this change argue that most of the power of EOF was a side effect of its Objective-C roots, and that EOF lost the beauty or simplicity it once had. 
Third-party tools, such as EOGenerator, help fill the deficiencies introduced by Java (mainly due to the loss of categories). The Objective-C code base was re-introduced with some modifications to desktop application developers as Core Data, part of Apple's Cocoa API, with the release of Mac OS X Tiger in April 2005. How EOF works Enterprise Objects provides tools and frameworks for object-relational mapping. The technology specializes in providing mechanisms to retrieve data from various data sources, such as relational databases via JDBC and JNDI directories, and mechanisms to commit data back to those data sources. These mechanisms are designed in a layered, abstract approach that allows developers to think about data retrieval and commitment at a higher level than a specific data source or data source vendor. Central to this mapping is a model file (an "EOModel") that you build with a visual tool — either EOModeler, or the EOModeler plug-in to Xcode. The mapping works as follows: Database tables are mapped to classes. Database columns are mapped to class attributes. Database rows are mapped to objects (or class instances). You can build data models based on existing data sources or you can build data models from scratch, which you then use to create data structures (tables, columns, joins) in a data source. The result is that database records can be transposed into Java objects. The advantage of using data models is that applications are isolated from the idiosyncrasies of the data sources they access. This separation of an application's business logic from database logic allows developers to change the database an application accesses without needing to change the application. EOF provides a level of database transparency not seen in other tools and allows the same model to be used to access different vendor databases and even allows relationships across different vendor databases without changing source code. Its power comes from exposing the underlying data sources as managed graphs of persistent objects. In simple terms, this means that it organizes the application's model layer into a set of defined in-memory data objects. It then tracks changes to these objects and can reverse those changes on demand, such as when a user performs an undo command. Then, when it is time to save changes to the application's data, it archives the objects to the underlying data sources. Using Inheritance In designing Enterprise Objects developers can leverage the object-oriented feature known as inheritance. A Customer object and an Employee object, for example, might both inherit certain characteristics from a more generic Person object, such as name, address, and phone number. While this kind of thinking is inherent in object-oriented design, relational databases have no explicit support for inheritance. However, using Enterprise Objects, you can build data models that reflect object hierarchies. That is, you can design database tables to support inheritance by also designing enterprise objects that map to multiple tables or particular views of a database table. Enterprise Objects (EOs) An Enterprise Object is analogous to what is often known in object-oriented programming as a business object — a class which models a physical or conceptual object in the business domain (e.g. a customer, an order, an item, etc.). What makes an EO different from other objects is that its instance data maps to a data store. 
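To make the table-to-class and row-to-object mapping just described concrete, here is a minimal, purely illustrative sketch in Python. It is not EOF's actual API (EOF is an Objective-C/Java framework); the class, field, and function names here are invented for the example:

```python
# Illustrative only: a toy version of the rows-to-objects idea described above,
# not EOF's real classes or method names.
from dataclasses import dataclass

@dataclass
class Customer:          # a database table mapped to a class
    name: str            # columns mapped to class attributes
    address: str
    phone: str

def row_to_object(row: dict) -> Customer:
    """Transpose one database row (here, a plain dict) into an object."""
    return Customer(name=row["name"], address=row["address"], phone=row["phone"])

row = {"name": "Acme Corp.", "address": "1 Main St.", "phone": "555-0100"}
customer = row_to_object(row)
print(customer.name)   # the row's column values are now instance data
```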
Typically, an enterprise object contains key-value pairs that represent a row in a relational database. The key is basically the column name, and the value is what was in that row in the database. So it can be said that an EO's properties persist beyond the life of any particular running application. More precisely, an Enterprise Object is an instance of a class that implements the com.webobjects.eocontrol.EOEnterpriseObject interface. An Enterprise Object has a corresponding model (called an EOModel) that defines the mapping between the class's object model and the database schema. However, an enterprise object doesn't explicitly know about its model. This level of abstraction means that database vendors can be switched without it affecting the developer's code. This gives Enterprise Objects a high degree of reusability. EOF and Core Data Despite their common origins, the two technologies diverged, with each technology retaining a subset of the features of the original Objective-C code base, while adding some new features. Features Supported Only by EOF EOF supports custom SQL; shared editing contexts; nested editing contexts; and pre-fetching and batch faulting of relationships, all features of the original Objective-C implementation not supported by Core Data. Core Data also does not provide the equivalent of an EOModelGroup—the NSManagedObjectModel class provides methods for merging models from existing models, and for retrieving merged models from bundles. Features Supported Only by Core Data Core Data supports fetched properties; multiple configurations within a managed object model; local stores; and store aggregation (the data for a given entity may be spread across multiple stores); customization and localization of property names and validation warnings; and the use of predicates for property validation. These features of the original Objective-C implementation are not supported by the Java implementation. External links article in linuxjournal about GDL2 References Data management NeXT Apple Inc. software
Enterprise Objects Framework
[ "Technology" ]
1,809
[ "Data management", "Data" ]
59,574
https://en.wikipedia.org/wiki/Biproduct
In category theory and its applications to mathematics, a biproduct of a finite collection of objects, in a category with zero objects, is both a product and a coproduct. In a preadditive category the notions of product and coproduct coincide for finite collections of objects. The biproduct is a generalization of finite direct sums of modules. Definition Let C be a category with zero morphisms. Given a finite (possibly empty) collection of objects A1, ..., An in C, their biproduct is an object A1 ⊕ ... ⊕ An in C together with morphisms in C pk : A1 ⊕ ... ⊕ An → Ak (the projection morphisms) and ik : Ak → A1 ⊕ ... ⊕ An (the embedding morphisms), for k = 1, ..., n, satisfying pk ∘ ik = 1Ak, the identity morphism of Ak, and pl ∘ ik = 0, the zero morphism Ak → Al, for k ≠ l, and such that (A1 ⊕ ... ⊕ An, pk) is a product for the Ak and (A1 ⊕ ... ⊕ An, ik) is a coproduct for the Ak. If C is preadditive and the first two conditions hold, then each of the last two conditions is equivalent to i1 ∘ p1 + ... + in ∘ pn = 1, the identity morphism of A1 ⊕ ... ⊕ An, when n > 0. An empty, or nullary, product is always a terminal object in the category, and the empty coproduct is always an initial object in the category. Thus an empty, or nullary, biproduct is always a zero object. Examples In the category of abelian groups, biproducts always exist and are given by the direct sum. The zero object is the trivial group. Similarly, biproducts exist in the category of vector spaces over a field. The biproduct is again the direct sum, and the zero object is the trivial vector space. More generally, biproducts exist in the category of modules over a ring. On the other hand, biproducts do not exist in the category of groups. Here, the product is the direct product, but the coproduct is the free product. Also, biproducts do not exist in the category of sets. For, the product is given by the Cartesian product, whereas the coproduct is given by the disjoint union. This category does not have a zero object. Block matrix algebra relies upon biproducts in categories of matrices. Properties If the biproduct A ⊕ B exists for all pairs of objects A and B in the category C, and C has a zero object, then all finite biproducts exist, making C both a Cartesian monoidal category and a co-Cartesian monoidal category. If the product A1 × A2 and coproduct A1 ⊔ A2 both exist for some pair of objects A1, A2 then there is a unique morphism f : A1 ⊔ A2 → A1 × A2 such that pk ∘ f ∘ ik = 1Ak and pl ∘ f ∘ ik = 0 for k ≠ l. It follows that the biproduct A1 ⊕ A2 exists if and only if f is an isomorphism. If C is a preadditive category, then every finite product is a biproduct, and every finite coproduct is a biproduct. For example, if A1 × A2 exists, then there are unique morphisms ik : Ak → A1 × A2 such that pk ∘ ik = 1Ak and pl ∘ ik = 0 for l ≠ k. To see that A1 × A2 is now also a coproduct, and hence a biproduct, suppose we have morphisms fk : Ak → X for some object X. Define f := f1 ∘ p1 + f2 ∘ p2. Then f is a morphism from A1 × A2 to X, and f ∘ ik = fk for k = 1, 2. In this case we always have i1 ∘ p1 + i2 ∘ p2 = 1, the identity morphism of A1 × A2. An additive category is a preadditive category in which all finite biproducts exist. In particular, biproducts always exist in abelian categories. References Additive categories Limits (category theory)
Biproduct
[ "Mathematics" ]
694
[ "Mathematical structures", "Category theory", "Limits (category theory)", "Additive categories" ]
59,587
https://en.wikipedia.org/wiki/Anti-lock%20braking%20system
An anti-lock braking system (ABS) is a safety anti-skid braking system used on aircraft and on land vehicles, such as cars, motorcycles, trucks, and buses. ABS operates by preventing the wheels from locking up during braking, thereby maintaining tractive contact with the road surface and allowing the driver to maintain more control over the vehicle. ABS is an automated system that uses the principles of threshold braking and cadence braking, techniques which were once practiced by skillful drivers before ABS was widespread. ABS operates at a much faster rate and more effectively than most drivers could manage. Although ABS generally offers improved vehicle control and decreases stopping distances on dry and some slippery surfaces, on loose gravel or snow-covered surfaces ABS may significantly increase braking distance, while still improving steering control. Since ABS was introduced in production vehicles, such systems have become increasingly sophisticated and effective. Modern versions may not only prevent wheel lock under braking, but may also alter the front-to-rear brake bias. This latter function, depending on its specific capabilities and implementation, is known variously as electronic brakeforce distribution, traction control system, emergency brake assist, or electronic stability control (ESC). History Early systems The concept for ABS predates the modern systems that were introduced in the 1950s. In 1908, for example, J.E. Francis introduced his 'Slip Prevention Regulator for Rail Vehicles'. In 1920 the French automobile and aircraft pioneer Gabriel Voisin experimented with systems that modulated the hydraulic braking pressure on his aircraft brakes to reduce the risk of tire slippage, as threshold braking on aircraft is nearly impossible. These systems used a flywheel and valve attached to a hydraulic line that feeds the brake cylinders. The flywheel is attached to a drum that runs at the same speed as the wheel. In normal braking, the drum and flywheel should spin at the same speed. However, when a wheel slows down, the drum does the same, leaving the flywheel spinning at a faster rate. This causes the valve to open, allowing a small amount of brake fluid to bypass the master cylinder into a local reservoir, lowering the pressure on the cylinder and releasing the brakes. The use of the drum and flywheel meant the valve only opened when the wheel was turning. In testing, a 30% improvement in braking performance was noted, because the pilots immediately applied full brakes instead of slowly increasing pressure in order to find the skid point. An additional benefit was the elimination of burned or burst tires. The first proper recognition of the ABS system came later with the German engineer Karl Wässel, whose system for modulating braking power was officially patented in 1928. Wässel, however, never developed a working product, and neither did Robert Bosch, who produced a similar patent eight years later. A similar braking system called Decelostat that used direct-current generators to measure wheel slippage was used in railroads in the 1930s. By 1951, the flywheel-based Decelostat was used in aircraft to provide anti-skid braking during landings. The device was first trialled in the United States and later by the British. In 1954, Popular Science revealed that US car manufacturers in Detroit were conducting preliminary tests of the Decelostat system to prevent cars from swerving under heavy braking. However, there was no public information about the test results. 
By the early 1950s, the Dunlop Maxaret anti-skid system was in widespread aviation use in the UK, with aircraft such as the Avro Vulcan and Handley Page Victor, Vickers Viscount, Vickers Valiant, English Electric Lightning, de Havilland Comet 2c, de Havilland Sea Vixen, and later aircraft, such as the Vickers VC10, Hawker Siddeley Trident, Hawker Siddeley 125, Hawker Siddeley HS 748 and derived British Aerospace ATP, and BAC One-Eleven, and the Dutch Fokker F27 Friendship (which unusually had a Dunlop high pressure (200 Bar) pneumatic system in lieu of hydraulics for braking, nose wheel steering and landing gear retraction), being fitted with Maxaret as standard. Maxaret, while reducing braking distances by up to 30% in icy or wet conditions, also increased tire life, and had the additional advantage of allowing take-offs and landings in conditions that would preclude flying at all in non-Maxaret equipped aircraft. In 1958, a Royal Enfield Super Meteor motorcycle was used by the Road Research Laboratory to test the Maxaret anti-lock brake. The experiments demonstrated that anti-lock brakes can be of great value to motorcycles, for which skidding is involved in a high proportion of accidents. Stopping distances were reduced in most of the tests compared with locked wheel braking, particularly on slippery surfaces, in which the improvement could be as much as 30%. Enfield's technical director at the time, Tony Wilson-Jones, saw little future in the system, however, and it was not put into production by the company. A fully-mechanical system saw limited automobile use in the 1960s in the Ferguson P99 racing car, the Jensen FF, and the experimental all-wheel drive Ford Zodiac, but saw no further use; the system proved expensive and unreliable. The first fully-electronic anti-lock braking system was developed in the late-1960s for the Concorde aircraft. The modern ABS system was invented in 1971 by Mario Palazzetti (known as 'Mister ABS') in the Fiat Research Center and has become standard in almost every car. The system was called Antiskid and the patent was sold to Bosch who named it ABS. Modern systems Chrysler, together with the Bendix Corporation, introduced a computerized, three-channel, four-sensor all-wheel ABS called "Sure Brake" for its 1971 Imperial. It was available for several years thereafter, functioned as intended, and proved reliable. In 1969, Ford introduced an anti-lock braking system called "Sure-Track" to the rear wheels of the Lincoln Continental Mark III and Ford Thunderbird, as an option; it became standard in 1971. The Sure-Track braking system was designed with help from Kelsey-Hayes. In 1971, General Motors introduced the "Trackmaster" rear-wheel only ABS as an option on their rear-wheel drive Cadillac models and called the option the True-Track Braking System on the Oldsmobile Toronado. In 1972, the option was made available in all Cadillacs. In 1971, Nissan offered an EAL (Electro Anti-lock System) developed by Japanese company Denso as an option on the Nissan President, which became Japan's first electronic ABS. 1971: The Imperial became the first production car with a 4 wheel computer-operated anti-lock braking system. Toyota introduced electronically controlled anti-skid brakes on Toyota Crown labeled as ESC (Electronic Skid Control). 1971: First truck application: "Antislittamento" system developed by Fiat Veicoli Industriali and installed on Fiat truck model 691N1. 
1972: four-wheel-drive Triumph 2500 Estates were fitted with Mullard electronic systems as standard. Such cars were rare, however, and very few remain. 1976: WABCO began the development of an anti-lock braking system on commercial vehicles to prevent locking on slippery roads, followed in 1986 by the electronic braking system (EBS) for heavy-duty vehicles. 1978: The Mercedes-Benz W116 was one of the first cars to offer an electronic four-wheel multi-channel anti-lock braking system (ABS) from Bosch as an option, from 1978 on. 1982: Honda introduced electronically controlled multi-channel ALB (Anti Locking Brakes) as an option for the second generation of Prelude, launched worldwide in 1982. Additional info: the general agent for Honda in Norway required all Preludes for the Norwegian market to have the ALB-system as a standard feature, making the Honda Prelude the first car delivered in Europe with ABS as a standard feature. The Norwegian general agent also included a sunroof and other options to be standard equipment in Norway, adding more luxury to the Honda brand. However, the Norwegian tax system made the well-equipped car very expensive, and sales suffered from the high cost. From 1984 the ALB-system, as well as the other optional features from Honda, was no longer a standard feature in Norway. In 1985 the Ford Scorpio was introduced to the European market with a Teves electronic system throughout the range as standard. For this, the model was awarded the coveted European Car of the Year Award in 1986, with very favorable praise from motoring journalists. After this success, Ford began research into Anti-Lock systems for the rest of their range, which encouraged other manufacturers to follow suit. Since 1987 ABS has been standard equipment on all Mercedes-Benz automobiles. Lincoln followed suit in 1993. In 1988, BMW introduced the first motorcycle with an electro-hydraulic ABS: the BMW K100. Yamaha introduced the FJ1200 model with optional ABS in 1991. Honda followed suit in 1992 with the launch of its first motorcycle ABS on the ST1100 Pan European. In 2007, Suzuki launched its GSF1200SA (Bandit) with an ABS. In 2005, Harley-Davidson began offering an ABS option on police bikes. Operation The anti-lock brake controller is also known as the CAB (Controller Anti-lock Brake). Typically ABS includes a central electronic control unit (ECU), four wheel speed sensors, and at least two hydraulic valves within the brake hydraulics. The ECU constantly monitors the rotational speed of each wheel; if it detects the wheel rotating significantly slower than the speed of the vehicle, a condition indicative of impending wheel lock, it actuates the valves to reduce hydraulic pressure to the brake at the affected wheel, thus reducing the braking force on that wheel; the wheel then turns faster. Conversely, if the ECU detects a wheel turning significantly faster than the others, brake hydraulic pressure to the wheel is increased so the braking force is reapplied, slowing down the wheel. This process is repeated continuously and can be detected by the driver via brake pedal pulsation. Some anti-lock systems can apply or release braking pressure 15 times per second. Because of this, the wheels of cars equipped with ABS are practically impossible to lock even during panic braking in extreme conditions. The ECU is programmed to disregard differences in wheel rotational speed below a critical threshold because when the car is turning, the two wheels towards the center of the curve turn slower than the outer two. 
For this same reason, a differential is used in virtually all roadgoing vehicles. If a fault develops in any part of the ABS, a warning light will usually be illuminated on the vehicle instrument panel, and the ABS will be disabled until the fault is rectified. Modern ABS applies individual brake pressure to all four wheels through a control system of hub-mounted sensors and a dedicated micro-controller. ABS is offered or comes standard on most road vehicles and is the foundation for electronic stability control systems, which are rapidly increasing in popularity due to the great reduction in the price of vehicle electronics over the years. Modern electronic stability control (ESC) systems are an evolution of the ABS concept. Here, a minimum of two additional sensors are added to help the system work: these are a steering wheel angle sensor and a gyroscopic sensor. The theory of operation is simple: when the gyroscopic sensor detects that the direction taken by the car does not coincide with what the steering wheel sensor reports, the ESC software will brake the necessary individual wheel(s) (up to three with the most sophisticated systems), so that the vehicle goes the way the driver intends. The steering wheel sensor also helps in the operation of Cornering Brake Control (CBC), since this will tell the ABS that wheels on the inside of the curve should brake more than wheels on the outside, and by how much. ABS equipment may also be used to implement a traction control system (TCS) on the acceleration of the vehicle. If, when accelerating, the tire loses traction, the ABS controller can detect the situation and take suitable action so that traction is regained. More sophisticated versions of this can also control throttle levels and brakes simultaneously. The speed sensors of ABS are sometimes used in indirect tire pressure monitoring system (TPMS), which can detect under-inflation of the tire(s) by the difference in the rotational speed of wheels. Components There are four main components of ABS: wheel speed sensors, valves, a pump, and a controller. Speed sensors (Encoders) A speed sensor is used to determine the acceleration or deceleration of the wheel. These sensors use a magnet and a Hall effect sensor, or a toothed wheel and an electromagnetic coil to generate a signal. The rotation of the wheel or differential induces a magnetic field around the sensor. The fluctuations of this magnetic field generate a voltage in the sensor. Since the voltage induced in the sensor is a result of the rotating wheel, this sensor can become inaccurate at slow speeds. The slower rotation of the wheel can cause inaccurate fluctuations in the magnetic field and thus cause inaccurate readings to the controller. Valves There is a valve in the brake line of each brake controlled by the ABS. On some systems, the valve has three positions: In position one, the valve is open; pressure from the master cylinder is passed right through to the brake. In position two, the valve blocks the line, isolating that brake from the master cylinder. This prevents the pressure from rising further should the driver push the brake pedal harder. In position three, the valve releases some of the pressure from the brake. The majority of problems with the valve system occur due to clogged valves. When a valve is clogged it is unable to open, close, or change position. An inoperable valve will prevent the system from modulating the valves and controlling pressure supplied to the brakes. 
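The relationship between the wheel-speed readings and the three valve positions just described can be sketched in a few lines of code. This is a minimal illustration only, under assumed thresholds: the slip-based decision rule, the limits, and the speeds are invented for the example, not any real controller's calibration, and a production controller also tracks wheel deceleration and repeats the cycle many times per second, as described in the Operation section above.

```python
# Minimal illustration of per-wheel valve selection; thresholds are assumptions.
APPLY, HOLD, RELEASE = "apply", "hold", "release"   # the three valve positions
SLIP_RELEASE = 0.20   # assumed slip ratio above which pressure is dumped
SLIP_HOLD = 0.10      # assumed slip ratio above which pressure is merely held

def wheel_slip(vehicle_speed: float, wheel_speed: float) -> float:
    """Slip ratio: 0 when the wheel matches vehicle speed, 1 for a locked wheel."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def valve_command(vehicle_speed: float, wheel_speed: float) -> str:
    """Choose a valve position for one wheel from its measured speed."""
    slip = wheel_slip(vehicle_speed, wheel_speed)
    if slip > SLIP_RELEASE:
        return RELEASE   # wheel heading toward lock-up: bleed off pressure
    if slip > SLIP_HOLD:
        return HOLD      # isolate the brake from the master cylinder
    return APPLY         # wheel tracking the vehicle: pass pedal pressure through

# One control step of a four-channel system: each wheel is handled independently.
vehicle = 25.0                      # vehicle speed estimate, m/s
wheels = [25.0, 24.0, 21.0, 12.0]   # wheel speeds reported by the four sensors, m/s
print([valve_command(vehicle, w) for w in wheels])
# ['apply', 'apply', 'hold', 'release']
```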
Pump The pump in the ABS is used to restore the pressure to the hydraulic brakes after the valves have released it. A signal from the controller releases the valve upon detection of wheel slip. After a valve releases the pressure supplied from the user, the pump is used to restore the desired amount of pressure to the braking system. The controller will modulate the pump's status in order to provide the desired amount of pressure and reduce slipping. Controller The controller is an ECU-type unit in the car which receives information from each individual wheel speed sensor. If a wheel loses traction, a signal is sent to the controller. The controller will then limit the brake force (EBD) and activate the ABS modulator, which actuates the braking valves on and off. Use There are many different variations and control algorithms for use in ABS. One of the simpler systems works as follows: The controller monitors the speed sensors at all times. It is looking for decelerations in the wheel that are out of the ordinary. Right before a wheel locks up, it will experience a rapid deceleration. If left unchecked, the wheel would stop much more quickly than any car could. It might take a car two to four seconds to stop from 60 mph (96.6 km/h) under ideal conditions, but a wheel that locks up could stop spinning in less than a second. The ABS controller knows that such a rapid deceleration is impossible for the car as a whole (in actuality, it means the wheel is about to slip), so it reduces the pressure to that brake until it sees an acceleration, then it increases the pressure until it sees the deceleration again. It can do this very quickly before the wheel can actually significantly change speed. The result is that the wheel slows down at the same rate as the car, with the brakes keeping the wheels very near the point at which they will start to lock up. This gives the system maximum braking power. This replaces the need to manually pump the brakes while driving on a slippery or low-traction surface, allowing the driver to steer even in most emergency braking conditions. When the ABS is in operation the driver will feel a pulsing in the brake pedal; this comes from the rapid opening and closing of the valves. This pulsing also tells the driver that the ABS has been triggered. Brake types Anti-lock braking systems use different schemes depending on the type of brakes in use. They can be differentiated by the number of channels—that is, how many valves are individually controlled—and by the number of speed sensors. 1) Four-channel, four-sensor ABS There is a speed sensor on all four wheels and a separate valve for all four wheels. With this setup, the controller monitors each wheel individually to make sure it is achieving maximum braking force. 2) Three-channel, four-sensor ABS There is a speed sensor on all four wheels and a separate valve for each of the front wheels, but only one valve for both of the rear wheels. Older vehicles with four-wheel ABS usually use this type. 3) Three-channel, three-sensor ABS This scheme, commonly found on pickup trucks with four-wheel ABS, has a speed sensor and a valve for each of the front wheels, with one valve and one sensor for both rear wheels. The speed sensor for the rear wheels is located in the rear axle. This system provides individual control of the front wheels, so they can both achieve maximum braking force. The rear wheels, however, are monitored together; they both have to start to lock up before the ABS will activate on the rear. 
With this system, it is possible that one of the rear wheels will lock during a stop, reducing brake effectiveness. This system is easy to identify, as there are no individual speed sensors for the rear wheels. 4) Two-channel, four-sensor ABS This system, commonly found on passenger cars from the late '80s through the mid-1990s, uses a speed sensor at each wheel, with one control valve each for the front and rear wheels as a pair. If the speed sensor detects lock-up at any individual wheel, the control module pulses the valve for both wheels on that end of the car. 5) One-channel, one-sensor ABS This system is commonly found on pickup trucks, SUVs, and vans with rear-wheel ABS. It has one valve, which controls both rear wheels, and one speed sensor, located in the rear axle. This system operates the same as the rear end of a three-channel system. The rear wheels are monitored together and they both have to start to lock up before the ABS kicks in. In this system it is also possible that one of the rear wheels will lock, reducing brake effectiveness. This system is also easy to identify, as there are no individual speed sensors for any of the wheels. Effectiveness A 2004 Australian study by Monash University Accident Research Centre found that ABS reduced the risk of multiple-vehicle crashes by 18 percent, but increased the risk of run-off-road crashes by 35 percent. On high-traction surfaces such as bitumen or concrete, many (though not all) ABS-equipped cars are able to attain braking distances better (i.e. shorter) than those that would be possible without the benefit of ABS. In real-world conditions, even an alert and experienced driver without ABS would find it difficult to match or improve on the performance of a typical driver with a modern ABS-equipped vehicle. ABS reduces the chances of crashing, and/or the severity of impact. The recommended technique for non-expert drivers in an ABS-equipped car, in a typical full-braking emergency, is to press the brake pedal as firmly as possible and, where appropriate, to steer around obstructions. In such situations, ABS will significantly reduce the chances of a skid and subsequent loss of control. In gravel, sand, and deep snow, ABS tends to increase braking distances. On these surfaces, locked wheels dig in and stop the vehicle more quickly. ABS prevents this from occurring. Some ABS calibrations reduce this problem by slowing the cycling time, thus letting the wheels repeatedly briefly lock and unlock. Some vehicle manufacturers provide an "off-road" button to turn the ABS function off. The primary benefit of ABS on such surfaces is to increase the ability of the driver to maintain control of the car rather than go into a skid, though the loss of control remains more likely on soft surfaces such as gravel or on slippery surfaces such as snow or ice. On a very slippery surface such as sheet ice or gravel, it is possible to lock multiple wheels at once, and this can defeat ABS (which relies on comparing all four wheels and detecting individual wheels skidding). The availability of ABS relieves most drivers of the need to learn threshold braking. A June 1999 National Highway Traffic Safety Administration (NHTSA) study found that ABS increased stopping distances on loose gravel by an average of 27.2 percent. According to the NHTSA, "ABS works with your regular braking system by automatically pumping them. In vehicles not equipped with ABS, the driver has to manually pump the brakes to prevent wheel lockup. 
In vehicles equipped with ABS, your foot should remain firmly planted on the brake pedal, while ABS pumps the brakes for you so you can concentrate on steering to safety." When activated, some earlier ABSes caused the brake pedal to pulse noticeably. As most drivers rarely, if ever, brake hard enough to cause brake lock-up, and typically do not read the vehicle's owner's manual, this pulsing may not be noticed until an emergency. Some manufacturers have therefore implemented a brake assist system that determines that the driver is attempting a "panic stop" (by detecting that the brake pedal was depressed very quickly, unlike a normal stop, where the pedal pressure would usually be gradually increased; some systems additionally monitor the rate at which the accelerator was released and/or the time between accelerator release and brake application) and automatically increases braking force where not enough pressure is applied. Hard or panic braking on bumpy surfaces may also trigger the ABS, because the bumps cause the wheel speed(s) to become erratic; this sometimes causes the system to enter its ice mode, in which it severely limits the maximum available braking power. Nevertheless, ABS significantly improves safety and control for drivers in most on-road situations. Anti-lock brakes are the subject of some experiments centred around risk compensation theory, which asserts that drivers adapt to the safety benefit of ABS by driving more aggressively. In a Munich study, half a fleet of taxicabs was equipped with anti-lock brakes, while the other half had conventional brake systems. The crash rate was substantially the same for both types of cab, and Wilde concluded this was due to drivers of ABS-equipped cabs taking more risks, assuming that ABS would take care of them, while the non-ABS drivers drove more carefully since ABS would not be there to help in case of a dangerous situation. The Insurance Institute for Highway Safety released a study in 2010 that found motorcycles with ABS 37% less likely to be involved in a fatal crash than models without ABS. ABS on motorcycles On a motorcycle, an anti-lock brake system prevents the wheels from locking during braking. Based on information from wheel speed sensors, the ABS unit adjusts the pressure of the brake fluid in order to keep traction during deceleration and avoid accidents. Motorcycle ABS helps the rider to maintain stability during braking and to decrease the stopping distance. It provides traction even on low-friction surfaces. While older ABS models were derived from cars, recent motorcycle ABS is the result of research oriented to the specifics of motorcycles with regard to size, weight, and functionality. National and international organizations have evaluated motorcycle ABS to be an important factor in increasing safety and reducing the number and severity of motorcycle crashes and collisions. The European Commission passed legislation in 2012 that made the fitment of ABS mandatory for all new motorcycles above 125 cc from 1 January 2016. Consumer Reports said in 2016 that "ABS is commonly offered on large, expensive models, but it has been spreading to several entry-level sportbikes and midsized bikes". History of motorcycle ABS In 1988, BMW introduced an electronic/hydraulic ABS for motorcycles, ten years after Daimler Benz and Bosch released the first four-wheel vehicle ABS for series production. 
Motorcycles of the BMW K100 series were optionally equipped with the ABS, which added 11 kg to the bike. It was developed together with FAG Kugelfischer and regulated the pressure in the braking circuits via a plunger piston. Japanese manufacturers followed with an ABS option by 1992 on the Honda ST1100 and the Yamaha FJ1200. Continental presented its first Motorcycle Integral ABS (MIB) in 2006. It was developed in cooperation with BMW and weighed 2.3 kg. While the first generation of motorcycle ABS weighed around 11 kg, the generation presented by Bosch in 2009 (the current generation as of 2011) weighs 0.7 kg (ABS base) and 1.6 kg (ABS enhanced) with integral braking. Basic principle Wheel speed sensors mounted on the front and rear wheels constantly measure the rotational speed of each wheel and deliver this information to an Electronic Control Unit (ECU). The ECU detects two things: 1) whether the deceleration of one wheel exceeds a fixed threshold and 2) whether the brake slip, calculated based on information from both wheels, rises above a certain percentage and enters an unstable zone. These are indicators of a high likelihood of a locking wheel. To counteract these irregularities, the ECU signals the hydraulic unit to hold or to release pressure. After signals show the return to the stable zone, the pressure is increased again. Past models used a piston for the control of the fluid pressure. Most recent models regulate the pressure by rapidly opening and closing solenoid valves. While the basic principle and architecture have been carried over from passenger car ABS, typical motorcycle characteristics have to be considered during the development and application processes. One characteristic is the change of the dynamic wheel load during braking. Compared to cars, the wheel load changes are more drastic, which can lead to wheel lift-up and a fall. This can be intensified by a soft suspension. Some systems are equipped with a rear-wheel lift-off mitigation functionality. When the indicators of a possible rear lift-off are detected, the system releases brake pressure on the front wheel to counter this behavior. Another difference is that in the case of the motorcycle the front wheel is much more important for stability than the rear wheel. If the front wheel locks up for between 0.2 and 0.7 s, it loses gyrostatic forces and the motorcycle starts to oscillate because of the increased influence of side forces operating on the wheel contact line. The motorcycle becomes unstable and falls. Anti-lock Braking System (ABS) Piston Systems: The pressure release in this system is realized through the movement of a spring-tensioned piston. When pressure is to be released, a linear motor pulls back the plunger piston and opens up more space for the fluid. The system was used, for example, in the ABS I (1988) and ABS II (1993) of BMW. The ABS II differed in size, and an electronically controlled friction clutch was mounted on the shaft instead of a plunger. Further displacement sensors record the travel distance of the piston to allow the control unit more precise regulation. Honda also uses this system of pressure modulation for big sports and touring bikes. Valve and Pump Systems: The main parts of the pressure modulation system are solenoid inlet and outlet valves, a pump, a motor, and accumulators/reservoirs. The number of valves differs from model to model due to additional functionalities and the number of brake channels. Based on the input of the ECU, coils operate the inlet and outlet valves. 
During pressure release, the brake fluid is stored in accumulators. In this open system approach, the fluid is then brought back into the brake circuit via a motor-driven pump, which can be felt as pulsation at the brake lever. Regenerative Anti-Lock Braking for Electric 2-wheel vehicles (eABS) Electric vehicles can recapture the energy from rear wheel braking. Combined Braking System (CBS) Unlike on cars and trains, where the brakes act on all wheels collectively, on motorcycles the rear wheel brake and front wheel brake are controlled separately. If the rider only brakes with one wheel, this braked wheel tends to lock up faster than if both brakes had been applied. A Combined Braking System therefore distributes the brake force also to the non-braked wheel to lower the possibility of a lock-up, increase deceleration and reduce suspension pitch. With a single [rear] CBS the brake pressure applied on the rear brake (pedal) is simultaneously distributed to the front wheel. A delay valve cuts the hydraulic pressure to ensure that pressure is also created at the front wheel only when strong braking is applied. Honda's first street motorcycle with a combined braking system (then called Unified Braking) was the 1983 GL1100. This system was derived from the 1970s RCB1000 world endurance race bike. Larger models with two front discs use a dual CBS system. The system was first installed by Moto Guzzi in 1975. Here, applied brake pressure at the front is also applied to the rear wheel and vice versa. If the front lever is applied, the pressure is built up at 4 of the 6 pots in the 2 calipers at the front. A secondary master cylinder at the front wheel distributes remaining pressure to the rear wheel through a proportional control valve and acts on 2 of the 3 calipers. If a strong brake force is applied at the rear wheel, force is also distributed to 2 of the 6 pots of the front wheel. More modern dual CBS use front and rear calipers (and all pots) according to a preset load ratio of front to rear. The proportioning was originally controlled by complex all-hydraulic systems interlinking the front and rear, with a fixed delay or by sensing weight distribution changes. As early as 2001 an electrohydraulic system was introduced by BMW. CBS and ABS CBS helps to reduce the danger of wheel lock-ups and falls, but in certain situations it is possible that CBS causes a fall. If brake pressure is distributed from the rear wheel to the front wheel and the friction of the surfaces changes suddenly (a puddle, or ice on the street), the front wheel might lock even if only the rear brake has been applied. This would lead to a loss of stability and a fall. CBS is therefore combined with ABS to avoid this on a motorcycle. Different approaches are possible to realize this combination: Without active pressure build-up — single version: A third additional channel links the rear wheel circuit through a delay valve to the front brake. Strong brake pressure at the rear wheel (or both wheels) pressurizes both brake circuits; however, this pressure is adjusted according to wheel speed and brake slip. The dual version combines Honda's Dual CBS with a secondary master cylinder and a proportional control valve [with Piston ABS]; a modulator regulates the pressure for each wheel. With active pressure build-up: In 2009, Honda introduced the electronically controlled combined ABS for its high-performance sports bikes, which utilizes brake-by-wire technology. 
The brake input of the rider is measured by pressure sensors, and the information is provided to an ECU. Together with the information from the wheel speed sensors, the ECU calculates the optimal distribution of pressure to prevent lockups and to provide the best possible deceleration. Based on this output, a motor for each wheel operates a pump that builds up and regulates the brake pressure on the wheel. This system offers a fast reaction time because of the brake-by-wire functionality. The MIB (Motorcycle Integral Braking system) from Continental Teves and the eCBS (electronic CBS) in the enhanced Motorcycle ABS from Bosch are results of another approach. These systems are based on the pump and valve approach. Through additional valves, stronger pumps and a more powerful motor, the system can actively build up pressure. The input pressure of the rider is measured with pressure sensors at the lever and pedal. The pump then builds up additional pressure adjusted to riding conditions. A partial integral system is designed to work in one direction only: front→rear or rear→front. A fully-integrated system works in both directions. Because these systems are electronically controlled and are able to build up pressure actively, they offer the opportunity to adjust the motorcycle braking behavior to the rider. CBS and ABS can be switched off by experienced riders, and different regulation modes with higher and lower thresholds can also be chosen, such as the rain or slick mode in the BMW S1000RR. Safety and legislation Safety The Insurance Institute for Highway Safety (IIHS) conducted a study on the effectiveness of ABS for motorcycles and came to the conclusion that motorcycles above 250 cm3 without ABS are 37 percent more likely to be involved in fatal crashes, and a study by the Swedish Road Administration concluded that 48 percent of all severe and fatal motorcycle accidents above 125 cm3 could be avoided with motorcycle ABS. These studies caused the EU Commission to initiate a legislative process in 2010 that was passed in 2012 and led to ABS for motorcycles above 125 cm3 becoming mandatory from 2016 onwards. Organizations like the Fédération Internationale de l'Automobile and the Institute of Advanced Motorists (IAM) called for the legislation to be implemented as early as 2015. On the other hand, some motorcycle riders have protested against compulsory ABS for all bikes, calling for the ability to switch the system off for off-road use or for other reasons. In 2011 the United Nations (UN) started the Decade of Action for Road Safety. The main goal is to save 5 million lives by 2020 through global cooperation. One part of their global plan is to: Encourage universal deployment of crash avoidance technologies with proven effectiveness such as Electronic Stability Control and Anti-Lock Braking Systems in motorcycles. Laws and regulations United States In the United States, the NHTSA has mandated ABS in conjunction with electronic stability control under the provisions of FMVSS 126 as of September 1, 2012. European and other international markets ABS has been required on all new passenger cars sold in the EU since 2004. Since 2016, the EU has required ABS on all new scooters, motorcycles, tricycles, and quads from 125 cc, otherwise CBS (or ABS). UN Regulation No. 78, related to the braking of vehicles of categories L1, L2, L3, L4 and L5 (motorbikes), is applied by the European Union, Russia, Japan, Turkey, Ukraine, Australia and the United Kingdom. 
Global technical regulation No. 3, related to motorcycle brake systems, is applied by Canada, the European Union, Japan, Russia, and the United States. India Since 1 April 2019, India has required at least single-channel ABS on all new two-wheelers from 125 cc, otherwise CBS (or ABS). ABS has also been mandatory on all new cars and mini-buses from the same date. South American markets Since 1 January 2019, Brazil has required ABS on all new motorcycles from 300 cc. ABS has been mandatory on all new cars since January 2014. From 1 January 2024, Argentina will require ABS on all new motorcycles from 250 cc, and CBS (or front-wheel ABS) for on-road motorcycles between 50 and 250 cc, or their electric equivalents. ABS has been mandatory on all new normal cars since January 2014. From February 2025, Chile will require ABS on all new motorcycles from 150 cc or 11 kW, otherwise CBS (or ABS) from 50 cc or 4 kW from February 2026. ABS has been mandatory on all new cars since October 2020. From October 2025, Colombia will require ABS on all new motorcycles from 150 cc or 11 kW, otherwise CBS (or ABS) from 50 cc or 4 kW. From March 2027, Colombia will require ABS on all new motorcycles from 125 cc, with CBS (or ABS) below that. See also Left-foot braking Emergency brake assist, or Brake assist system (BAS) Electronic stability control (ESP) Brake-By-Wire (EBS) Further reading References External links Vehicle safety technologies Mechanical power control Motorcycle technology Vehicle braking technologies
Anti-lock braking system
[ "Physics" ]
7,345
[ "Mechanics", "Mechanical power control" ]
59,595
https://en.wikipedia.org/wiki/Heine%E2%80%93Borel%20theorem
In real analysis the Heine–Borel theorem, named after Eduard Heine and Émile Borel, states: For a subset S of Euclidean space Rn, the following two statements are equivalent: S is compact, that is, every open cover of S has a finite subcover S is closed and bounded. History and motivation The history of what today is called the Heine–Borel theorem starts in the 19th century, with the search for solid foundations of real analysis. Central to the theory was the concept of uniform continuity and the theorem stating that every continuous function on a closed and bounded interval is uniformly continuous. Peter Gustav Lejeune Dirichlet was the first to prove this and implicitly he used the existence of a finite subcover of a given open cover of a closed interval in his proof. He used this proof in his 1852 lectures, which were published only in 1904. Later Eduard Heine, Karl Weierstrass and Salvatore Pincherle used similar techniques. Émile Borel in 1895 was the first to state and prove a form of what is now called the Heine–Borel theorem. His formulation was restricted to countable covers. Pierre Cousin (1895), Lebesgue (1898) and Schoenflies (1900) generalized it to arbitrary covers. Proof If a set is compact, then it must be closed. Let S be a subset of Rn. Observe first the following: if a is a limit point of S, then any finite collection C of open sets, such that each open set U ∈ C is disjoint from some neighborhood VU of a, fails to be a cover of S. Indeed, the intersection of the finite family of sets VU is a neighborhood W of a in Rn. Since a is a limit point of S, W must contain a point x in S. This x ∈ S is not covered by the family C, because every U in C is disjoint from VU and hence disjoint from W, which contains x. If S is compact but not closed, then it has a limit point a not in S. Consider a collection consisting of an open neighborhood N(x) for each x ∈ S, chosen small enough to not intersect some neighborhood Vx of a. Then is an open cover of S, but any finite subcollection of has the form of C discussed previously, and thus cannot be an open subcover of S. This contradicts the compactness of S. Hence, every limit point of S is in S, so S is closed. The proof above applies with almost no change to showing that any compact subset S of a Hausdorff topological space X is closed in X. If a set is compact, then it is bounded. Let be a compact set in , and a ball of radius 1 centered at . Then the set of all such balls centered at is clearly an open cover of , since contains all of . Since is compact, take a finite subcover of this cover. This subcover is the finite union of balls of radius 1. Consider all pairs of centers of these (finitely many) balls (of radius 1) and let be the maximum of the distances between them. Then if and are the centers (respectively) of unit balls containing arbitrary , the triangle inequality says: So the diameter of is bounded by . Lemma: A closed subset of a compact set is compact. Let K be a closed subset of a compact set T in Rn and let CK be an open cover of K. Then is an open set and is an open cover of T. Since T is compact, then CT has a finite subcover that also covers the smaller set K. Since U does not contain any point of K, the set K is already covered by that is a finite subcollection of the original collection CK. It is thus possible to extract from any open cover CK of K a finite subcover. If a set is closed and bounded, then it is compact. If a set S in Rn is bounded, then it can be enclosed within an n-box where a > 0. 
By the lemma above, it is enough to show that T0 is compact. Assume, by way of contradiction, that T0 is not compact. Then there exists an infinite open cover C of T0 that does not admit any finite subcover. Through bisection of each of the sides of T0, the box T0 can be broken up into 2n sub n-boxes, each of which has diameter equal to half the diameter of T0. Then at least one of the 2n sections of T0 must require an infinite subcover of C, otherwise C itself would have a finite subcover, by uniting together the finite covers of the sections. Call this section T1. Likewise, the sides of T1 can be bisected, yielding 2n sections of T1, at least one of which must require an infinite subcover of C. Continuing in like manner yields a decreasing sequence of nested n-boxes: where the side length of Tk is , which tends to 0 as k tends to infinity. Let us define a sequence (xk) such that each xk is in Tk. This sequence is Cauchy, so it must converge to some limit L. Since each Tk is closed, and for each k the sequence (xk) is eventually always inside Tk, we see that L ∈ Tk for each k. Since C covers T0, then it has some member U ∈ C such that L ∈ U. Since U is open, there is an n-ball . For large enough k, one has , but then the infinite number of members of C needed to cover Tk can be replaced by just one: U, a contradiction. Thus, T0 is compact. Since S is closed and a subset of the compact set T0, then S is also compact (see the lemma above). Generalization of the Heine-Borel theorem In general metric spaces, we have the following theorem: For a subset of a metric space , the following two statements are equivalent: is compact, is precompact and complete. The above follows directly from Jean Dieudonné, theorem 3.16.1, which states: For a metric space , the following three conditions are equivalent: (a) is compact; (b) any infinite sequence in has at least a cluster value; (c) is precompact and complete. Heine–Borel property The Heine–Borel theorem does not hold as stated for general metric and topological vector spaces, and this gives rise to the necessity to consider special classes of spaces where this proposition is true. These spaces are said to have the Heine–Borel property. In the theory of metric spaces A metric space is said to have the Heine–Borel property if each closed bounded set in is compact. Many metric spaces fail to have the Heine–Borel property, such as the metric space of rational numbers (or indeed any incomplete metric space). Complete metric spaces may also fail to have the property; for instance, no infinite-dimensional Banach spaces have the Heine–Borel property (as metric spaces). Even more trivially, if the real line is not endowed with the usual metric, it may fail to have the Heine–Borel property. A metric space has a Heine–Borel metric which is Cauchy locally identical to if and only if it is complete, -compact, and locally compact. In the theory of topological vector spaces A topological vector space is said to have the Heine–Borel property (R.E. Edwards uses the term boundedly compact space) if each closed bounded set in is compact. No infinite-dimensional Banach spaces have the Heine–Borel property (as topological vector spaces). But some infinite-dimensional Fréchet spaces do have, for instance, the space of smooth functions on an open set and the space of holomorphic functions on an open set . More generally, any quasi-complete nuclear space has the Heine–Borel property. All Montel spaces have the Heine–Borel property as well. 
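In symbols, taking T0 = [−a, a]^n as the enclosing box (the standard choice, assumed here) and writing C for the cover that is supposed to admit no finite subcover, the bisection argument above can be summarized as follows.

    \[
      T_0 \supset T_1 \supset T_2 \supset \cdots, \qquad
      \operatorname{diam}(T_k) = \frac{2a\sqrt{n}}{2^{k}} \xrightarrow[k \to \infty]{} 0,
    \]
    \[
      x_k \in T_k \;\Rightarrow\; (x_k) \text{ is Cauchy} \;\Rightarrow\; x_k \to L
      \quad\text{with}\quad L \in \bigcap_{k \ge 0} T_k,
    \]
    \[
      L \in U \in C,\; U \text{ open} \;\Rightarrow\; B(L, \varepsilon) \subseteq U
      \;\Rightarrow\; T_k \subseteq B(L, \varepsilon) \subseteq U
      \quad\text{once}\quad \operatorname{diam}(T_k) < \varepsilon,
    \]

so a single member of C covers Tk, contradicting the choice of Tk as a section requiring infinitely many members of C.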
See also Bolzano–Weierstrass theorem Notes References BookOfProofs: Heine-Borel Property External links Mathworld "Heine-Borel Theorem" "An Analysis of the First Proofs of the Heine-Borel Theorem - Lebesgue's Proof" Theorems in real analysis General topology Properties of topological spaces Compactness theorems Articles containing proofs
Heine–Borel theorem
[ "Mathematics" ]
1,766
[ "Compactness theorems", "General topology", "Theorems in mathematical analysis", "Properties of topological spaces", "Theorems in real analysis", "Space (mathematics)", "Theorems in topology", "Topological spaces", "Topology", "Articles containing proofs" ]
59,611
https://en.wikipedia.org/wiki/Ionization
Ionization (or ionisation specifically in Britain, Ireland, Australia and New Zealand) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules, electrons, positrons, protons, antiprotons and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons causing it to be ejected. Uses Everyday examples of gas ionization occur within a fluorescent lamp or other electrical discharge lamps. It is also used in radiation detectors such as the Geiger-Müller counter or the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in medical treatment (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application. Production of ions Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization. Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold amount of the required energy is known as ionization potential. The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vector of all collision fragments (the scattered projectile, the recoiling target-ion, and the ejected electron) are determined, have contributed to major advances in the theoretical understanding of the few-body problem in recent years. Adiabatic ionization Adiabatic ionization is a form of ionization in which an electron is removed from or added to an atom or molecule in its lowest energy state to form an ion in its lowest energy state. The Townsend discharge is a good example of the creation of positive ions and free electrons due to ion impact. It is a cascade reaction involving electrons in a region with a sufficiently high electric field in a gaseous medium that can be ionized, such as air. Following an original ionization event, due to such as ionizing radiation, the positive ion drifts towards the cathode, while the free electron drifts towards the anode of the device. If the electric field is strong enough, the free electron gains sufficient energy to liberate a further electron when it next collides with another molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause impact ionization when the next collisions occur; and so on. This is effectively a chain reaction of electron generation, and is dependent on the free electrons gaining sufficient energy between collisions to sustain the avalanche. 
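The Townsend avalanche described above grows roughly exponentially with drift distance when the field, and hence the first Townsend ionization coefficient, is uniform. The short sketch below only illustrates that scaling; the coefficient value and gap length are assumed example numbers, not data from this article.

    import math

    def avalanche_electrons(n0, alpha, d):
        """Electrons reaching the anode after a uniform-field Townsend avalanche.

        n0    -- number of seed electrons from the initial ionization event
        alpha -- first Townsend ionization coefficient (ionizations per metre), assumed constant
        d     -- drift distance from the ionization site to the anode (metres)
        """
        return n0 * math.exp(alpha * d)

    # Illustrative numbers only: 1 seed electron, alpha = 1000 /m, 1 cm gap.
    print(avalanche_electrons(n0=1, alpha=1.0e3, d=0.01))  # about 2.2e4 electrons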
Ionization efficiency is the ratio of the number of ions formed to the number of electrons or photons used. Ionization energy of atoms The trend in the ionization energy of atoms is often used to demonstrate the periodic behavior of atoms with respect to the atomic number, as summarized by ordering atoms in Mendeleev's table. This is a valuable tool for establishing and understanding the ordering of electrons in atomic orbitals without going into the details of wave functions or the ionization process. An example is presented in the figure to the right. The periodic abrupt decrease in ionization potential after rare gas atoms, for instance, indicates the emergence of a new shell in alkali metals. In addition, the local maxima in the ionization energy plot, moving from left to right in a row, are indicative of the s, p, d, and f sub-shells. Semi-classical description of ionization Classical physics and the Bohr model of the atom can qualitatively explain photoionization and collision-mediated ionization. In these cases, during the ionization process, the energy of the electron exceeds the energy difference of the potential barrier it is trying to pass. The classical description, however, cannot describe tunnel ionization, since that process involves the passage of the electron through a classically forbidden potential barrier. Quantum mechanical description of ionization The interaction of atoms and molecules with sufficiently strong laser pulses or with other charged particles leads to ionization to singly or multiply charged ions. The ionization rate, i.e. the ionization probability per unit time, can be calculated using quantum mechanics. (Classical methods are also available, such as the Classical Trajectory Monte Carlo Method (CTMC), but they are not generally accepted and are often criticized by the community.) Two kinds of quantum mechanical methods exist: perturbative methods and non-perturbative methods, such as time-dependent coupled-channel or time-independent close-coupling methods, in which the wave function is expanded in a finite basis set. Numerous basis options are available, e.g. B-splines, generalized Sturmians or Coulomb wave packets. Another non-perturbative method is to solve the corresponding Schrödinger equation fully numerically on a lattice. In general, analytic solutions are not available, and the approximations required for manageable numerical calculations do not provide sufficiently accurate results. However, when the laser intensity is sufficiently high, the detailed structure of the atom or molecule can be ignored and an analytic solution for the ionization rate is possible. Tunnel ionization Tunnel ionization is ionization due to quantum tunneling. In classical ionization, an electron must have enough energy to make it over the potential barrier, but quantum tunneling allows the electron simply to pass through the potential barrier instead of going all the way over it, because of the wave nature of the electron. The probability of an electron tunneling through the barrier drops off exponentially with the width of the potential barrier. Therefore, an electron with a higher energy can make it further up the potential barrier, leaving a much thinner barrier to tunnel through and thus a greater chance of doing so. In practice, tunnel ionization is observable when the atom or molecule interacts with strong near-infrared laser pulses. This process can be understood as one by which a bound electron, through the absorption of more than one photon from the laser field, is ionized. 
This picture is generally known as multiphoton ionization (MPI). Keldysh modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states. In this model the perturbation of the ground state by the laser field is neglected and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of Coulomb interaction on the final state of the electron. As it is observed from figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser at regions near the nucleus. Perelomov et al. included the Coulomb interaction at larger internuclear distances. Their model (which we call the PPT model) was derived for short range potential and includes the effect of the long range Coulomb interaction through the first order correction in the quasi-classical action. Larochelle et al. have compared the theoretically predicted ion versus intensity curves of rare gas atoms interacting with a Ti:Sapphire laser with experimental measurement. They have shown that the total ionization rate predicted by the PPT model fit very well the experimental ion yields for all rare gases in the intermediate regime of the Keldysh parameter. The rate of MPI on atom with an ionization potential in a linearly polarized laser with frequency is given by where is the Keldysh parameter, , is the peak electric field of the laser and . The coefficients , and are given by The coefficient is given by where Quasi-static tunnel ionization The quasi-static tunneling (QST) is the ionization whose rate can be satisfactorily predicted by the ADK model, i.e. the limit of the PPT model when approaches zero. The rate of QST is given by As compared to the absence of summation over n, which represent different above threshold ionization (ATI) peaks, is remarkable. Strong field approximation for the ionization rate The calculations of PPT are done in the E-gauge, meaning that the laser field is taken as electromagnetic waves. The ionization rate can also be calculated in A-gauge, which emphasizes the particle nature of light (absorbing multiple photons during ionization). This approach was adopted by Krainov model based on the earlier works of Faisal and Reiss. The resulting rate is given by where: with being the ponderomotive energy, is the minimum number of photons necessary to ionize the atom, is the double Bessel function, with the angle between the momentum of the electron, p, and the electric field of the laser, F, FT is the three-dimensional Fourier transform, and incorporates the Coulomb correction in the SFA model. Population trapping In calculating the rate of MPI of atoms only transitions to the continuum states are considered. Such an approximation is acceptable as long as there is no multiphoton resonance between the ground state and some excited states. However, in real situation of interaction with pulsed lasers, during the evolution of laser intensity, due to different Stark shift of the ground and excited states there is a possibility that some excited state go into multiphoton resonance with the ground state. Within the dressed atom picture, the ground state dressed by photons and the resonant state undergo an avoided crossing at the resonance intensity . 
The minimum distance, , at the avoided crossing is proportional to the generalized Rabi frequency, coupling the two states. According to Story et al., the probability of remaining in the ground state, , is given by where is the time-dependent energy difference between the two dressed states. In interaction with a short pulse, if the dynamic resonance is reached in the rising or the falling part of the pulse, the population practically remains in the ground state and the effect of multiphoton resonances may be neglected. However, if the states go onto resonance at the peak of the pulse, where , then the excited state is populated. After being populated, since the ionization potential of the excited state is small, it is expected that the electron will be instantly ionized. In 1992, de Boer and Muller showed that Xe atoms subjected to short laser pulses could survive in the highly excited states 4f, 5f, and 6f. These states were believed to have been excited by the dynamic Stark shift of the levels into multiphoton resonance with the field during the rising part of the laser pulse. Subsequent evolution of the laser pulse did not completely ionize these states, leaving behind some highly excited atoms. We shall refer to this phenomenon as "population trapping". We mention the theoretical calculation that incomplete ionization occurs whenever there is parallel resonant excitation into a common level with ionization loss. We consider a state such as 6f of Xe which consists of 7 quasi-degnerate levels in the range of the laser bandwidth. These levels along with the continuum constitute a lambda system. The mechanism of the lambda type trapping is schematically presented in figure. At the rising part of the pulse (a) the excited state (with two degenerate levels 1 and 2) are not in multiphoton resonance with the ground state. The electron is ionized through multiphoton coupling with the continuum. As the intensity of the pulse is increased the excited state and the continuum are shifted in energy due to the Stark shift. At the peak of the pulse (b) the excited states go into multiphoton resonance with the ground state. As the intensity starts to decrease (c), the two state are coupled through continuum and the population is trapped in a coherent superposition of the two states. Under subsequent action of the same pulse, due to interference in the transition amplitudes of the lambda system, the field cannot ionize the population completely and a fraction of the population will be trapped in a coherent superposition of the quasi degenerate levels. According to this explanation the states with higher angular momentum – with more sublevels – would have a higher probability of trapping the population. In general the strength of the trapping will be determined by the strength of the two photon coupling between the quasi-degenerate levels via the continuum. In 1996, using a very stable laser and by minimizing the masking effects of the focal region expansion with increasing intensity, Talebpour et al. observed structures on the curves of singly charged ions of Xe, Kr and Ar. These structures were attributed to electron trapping in the strong laser field. A more unambiguous demonstration of population trapping has been reported by T. Morishita and C. D. Lin. Non-sequential multiple ionization The phenomenon of non-sequential ionization (NSI) of atoms exposed to intense laser fields has been a subject of many theoretical and experimental studies since 1983. 
The pioneering work began with the observation of a "knee" structure on the Xe2+ ion signal versus intensity curve by L’Huillier et al. From the experimental point of view, the NS double ionization refers to processes which somehow enhance the rate of production of doubly charged ions by a huge factor at intensities below the saturation intensity of the singly charged ion. Many, on the other hand, prefer to define the NSI as a process by which two electrons are ionized nearly simultaneously. This definition implies that apart from the sequential channel there is another channel which is the main contribution to the production of doubly charged ions at lower intensities. The first observation of triple NSI in argon interacting with a 1 μm laser was reported by Augst et al. Later, systematically studying the NSI of all rare gas atoms, the quadruple NSI of Xe was observed. The most important conclusion of this study was the observation of the following relation between the rate of NSI to any charge state and the rate of tunnel ionization (predicted by the ADK formula) to the previous charge states; where is the rate of quasi-static tunneling to i'th charge state and are some constants depending on the wavelength of the laser (but not on the pulse duration). Two models have been proposed to explain the non-sequential ionization; the shake-off model and electron re-scattering model. The shake-off (SO) model, first proposed by Fittinghoff et al., is adopted from the field of ionization of atoms by X rays and electron projectiles where the SO process is one of the major mechanisms responsible for the multiple ionization of atoms. The SO model describes the NSI process as a mechanism where one electron is ionized by the laser field and the departure of this electron is so rapid that the remaining electrons do not have enough time to adjust themselves to the new energy states. Therefore, there is a certain probability that, after the ionization of the first electron, a second electron is excited to states with higher energy (shake-up) or even ionized (shake-off). We should mention that, until now, there has been no quantitative calculation based on the SO model, and the model is still qualitative. The electron rescattering model was independently developed by Kuchiev, Schafer et al, Corkum, Becker and Faisal and Faisal and Becker. The principal features of the model can be understood easily from Corkum's version. Corkum's model describes the NS ionization as a process whereby an electron is tunnel ionized. The electron then interacts with the laser field where it is accelerated away from the nuclear core. If the electron has been ionized at an appropriate phase of the field, it will pass by the position of the remaining ion half a cycle later, where it can free an additional electron by electron impact. Only half of the time the electron is released with the appropriate phase and the other half it never return to the nuclear core. The maximum kinetic energy that the returning electron can have is 3.17 times the ponderomotive potential () of the laser. Corkum's model places a cut-off limit on the minimum intensity ( is proportional to intensity) where ionization due to re-scattering can occur. The re-scattering model in Kuchiev's version (Kuchiev's model) is quantum mechanical. The basic idea of the model is illustrated by Feynman diagrams in figure a. First both electrons are in the ground state of an atom. The lines marked a and b describe the corresponding atomic states. 
Then the electron a is ionized. The beginning of the ionization process is shown by the intersection with a sloped dashed line. where the MPI occurs. The propagation of the ionized electron in the laser field, during which it absorbs other photons (ATI), is shown by the full thick line. The collision of this electron with the parent atomic ion is shown by a vertical dotted line representing the Coulomb interaction between the electrons. The state marked with c describes the ion excitation to a discrete or continuum state. Figure b describes the exchange process. Kuchiev's model, contrary to Corkum's model, does not predict any threshold intensity for the occurrence of NS ionization. Kuchiev did not include the Coulomb effects on the dynamics of the ionized electron. This resulted in the underestimation of the double ionization rate by a huge factor. Obviously, in the approach of Becker and Faisal (which is equivalent to Kuchiev's model in spirit), this drawback does not exist. In fact, their model is more exact and does not suffer from the large number of approximations made by Kuchiev. Their calculation results perfectly fit with the experimental results of Walker et al. Becker and Faisal have been able to fit the experimental results on the multiple NSI of rare gas atoms using their model. As a result, the electron re-scattering can be taken as the main mechanism for the occurrence of the NSI process. Multiphoton ionization of inner-valence electrons and fragmentation of polyatomic molecules The ionization of inner valence electrons are responsible for the fragmentation of polyatomic molecules in strong laser fields. According to a qualitative model the dissociation of the molecules occurs through a three-step mechanism: MPI of electrons from the inner orbitals of the molecule which results in a molecular ion in ro-vibrational levels of an excited electronic state; Rapid radiationless transition to the high-lying ro-vibrational levels of a lower electronic state; and Subsequent dissociation of the ion to different fragments through various fragmentation channels. The short pulse induced molecular fragmentation may be used as an ion source for high performance mass spectroscopy. The selectivity provided by a short pulse based source is superior to that expected when using the conventional electron ionization based sources, in particular when the identification of optical isomers is required. Kramers–Henneberger frame The Kramers–Henneberger(KF) frame is the non-inertial frame moving with the free electron under the influence of the harmonic laser pulse, obtained by applying a translation to the laboratory frame equal to the quiver motion of a classical electron in the laboratory frame. In other words, in the Kramers–Henneberger frame the classical electron is at rest. Starting in the lab frame (velocity gauge), we may describe the electron with the Hamiltonian: In the dipole approximation, the quiver motion of a classical electron in the laboratory frame for an arbitrary field can be obtained from the vector potential of the electromagnetic field: where for a monochromatic plane wave. By applying a transformation to the laboratory frame equal to the quiver motion one moves to the ‘oscillating’ or ‘Kramers–Henneberger’ frame, in which the classical electron is at rest. 
By a phase factor transformation for convenience one obtains the ‘space-translated’ Hamiltonian, which is unitarily equivalent to the lab-frame Hamiltonian, which contains the original potential centered on the oscillating point : The utility of the KH frame lies in the fact that in this frame the laser-atom interaction can be reduced to the form of an oscillating potential energy, where the natural parameters describing the electron dynamics are and (sometimes called the “excursion amplitude’, obtained from ). From here one can apply Floquet theory to calculate quasi-stationary solutions of the TDSE. In high frequency Floquet theory, to lowest order in the system reduces to the so-called ‘structure equation’, which has the form of a typical energy-eigenvalue Schrödinger equation containing the ‘dressed potential’ (the cycle-average of the oscillating potential). The interpretation of the presence of is as follows: in the oscillating frame, the nucleus has an oscillatory motion of trajectory and can be seen as the potential of the smeared out nuclear charge along its trajectory. The KH frame is thus employed in theoretical studies of strong-field ionization and atomic stabilization (a predicted phenomenon in which the ionization probability of an atom in a high-intensity, high-frequency field actually decreases for intensities above a certain threshold) in conjunction with high-frequency Floquet theory. The KF frame was successfully applied for different problems as well e.g. for higher-hamonic generation from a metal surface in a powerful laser field Dissociation – distinction A substance may dissociate without necessarily producing ions. As an example, the molecules of table sugar dissociate in water (sugar is dissolved) but exist as intact neutral entities. Another subtle event is the dissociation of sodium chloride (table salt) into sodium and chlorine ions. Although it may seem as a case of ionization, in reality the ions already exist within the crystal lattice. When salt is dissociated, its constituent ions are simply surrounded by water molecules and their effects are visible (e.g. the solution becomes electrolytic). However, no transfer or displacement of electrons occurs. See also Above threshold ionization Double ionization Chemical ionization Electron ionization Ionization chamber – Instrument for detecting gaseous ionization, used in ionizing radiation measurements Ion source Photoionization Thermal ionization Townsend avalanche – The chain reaction of ionization occurring in a gas with an applied electric field Poole–Frenkel effect Table References External links Ions Molecular physics Atomic physics Physical chemistry Quantum chemistry Mass spectrometry
Ionization
[ "Physics", "Chemistry" ]
4,774
[ "Ionization", "Physical phenomena", "Mass", "Phases of matter", "Quantum mechanics", "Theoretical chemistry", "Statistical mechanics", "Physical chemistry", "Ions", "Phase transitions", "Instrumental analysis", "Mass spectrometry", " molecular", " and optical physics", "Molecular physics...
59,613
https://en.wikipedia.org/wiki/Ionization%20energy
In physics and chemistry, ionization energy (IE) is the minimum energy required to remove the most loosely bound electron of an isolated gaseous atom, positive ion, or molecule. The first ionization energy is quantitatively expressed as X(g) + energy ⟶ X+(g) + e− where X is any atom or molecule, X+ is the resultant ion when the original atom was stripped of a single electron, and e− is the removed electron. Ionization energy is positive for neutral atoms, meaning that the ionization is an endothermic process. Roughly speaking, the closer the outermost electrons are to the nucleus of the atom, the higher the atom's ionization energy. In physics, ionization energy is usually expressed in electronvolts (eV) or joules (J). In chemistry, it is expressed as the energy to ionize a mole of atoms or molecules, usually as kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol). Comparison of ionization energies of atoms in the periodic table reveals two periodic trends which follow the rules of Coulombic attraction: Ionization energy generally increases from left to right within a given period (that is, row). Ionization energy generally decreases from top to bottom in a given group (that is, column). The latter trend results from the outer electron shell being progressively farther from the nucleus, with the addition of one inner shell per row as one moves down the column. The nth ionization energy refers to the amount of energy required to remove the most loosely bound electron from the species having a positive charge of (n − 1). For example, the first three ionization energies are defined as follows: 1st ionization energy is the energy that enables the reaction X ⟶ X+ + e− 2nd ionization energy is the energy that enables the reaction X+ ⟶ X2+ + e− 3rd ionization energy is the energy that enables the reaction X2+ ⟶ X3+ + e− The most notable influences that determine ionization energy include: Electron configuration: This accounts for most elements' IE, as all of their chemical and physical characteristics can be ascertained just by determining their respective electron configuration. Nuclear charge: If the nuclear charge (atomic number) is greater, the electrons are held more tightly by the nucleus and hence the ionization energy will be greater (leading to the mentioned trend 1 within a given period). Number of electron shells: If the size of the atom is greater due to the presence of more shells, the electrons are held less tightly by the nucleus and the ionization energy will be smaller. Effective nuclear charge (Zeff): If the magnitude of electron shielding and penetration are greater, the electrons are held less tightly by the nucleus, the Zeff of the electron and the ionization energy is smaller. Stability: An atom having a more stable electronic configuration has a reduced tendency to lose electrons and consequently has a higher ionization energy. Minor influences include: Relativistic effects: Heavier elements (especially those whose atomic number is greater than about 70) are affected by these as their electrons are approaching the speed of light. They therefore have smaller atomic radii and higher ionization energies. Lanthanide and actinide contraction (and scandide contraction): The shrinking of the elements affects the ionization energy, as the net charge of the nucleus is more strongly felt. Electron pairing energies: Half-filled subshells usually result in higher ionization energies. 
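As noted above, physicists usually quote ionization energies in electronvolts per atom while chemists use kilojoules per mole; the two scales differ only by the elementary charge and Avogadro's number. A minimal conversion sketch (constants are rounded CODATA values, supplied here for illustration):

    # Convert an ionization energy from eV per atom to kJ per mole.
    N_A = 6.02214076e23         # Avogadro constant, 1/mol
    E_CHARGE = 1.602176634e-19  # elementary charge, C (1 eV = this many joules)

    def ev_to_kj_per_mol(energy_ev):
        """1 eV/atom corresponds to roughly 96.485 kJ/mol."""
        return energy_ev * E_CHARGE * N_A / 1000.0

    print(ev_to_kj_per_mol(13.6))  # first ionization energy of hydrogen, ~1312 kJ/mol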
The term ionization potential is an older and obsolete term for ionization energy, because the oldest method of measuring ionization energy was based on ionizing a sample and accelerating the electron removed using an electrostatic potential. Determination of ionization energies The ionization energy of atoms, denoted Ei, is measured by finding the minimal energy of light quanta (photons) or electrons accelerated to a known energy that will kick out the least bound atomic electrons. The measurement is performed in the gas phase on single atoms. While only noble gases occur as monatomic gases, other gases can be split into single atoms. Also, many solid elements can be heated and vaporized into single atoms. Monatomic vapor is contained in a previously evacuated tube that has two parallel electrodes connected to a voltage source. The ionizing excitation is introduced through the walls of the tube or produced within. When ultraviolet light is used, the wavelength is swept down the ultraviolet range. At a certain wavelength (λ) and frequency of light (ν=c/λ, where c is the speed of light), the light quanta, whose energy is proportional to the frequency, will have energy high enough to dislodge the least bound electrons. These electrons will be attracted to the positive electrode, and the positive ions remaining after the photoionization will get attracted to the negatively charged electrode. These electrons and ions will establish a current through the tube. The ionization energy will be the energy of photons hνi (h is the Planck constant) that caused a steep rise in the current: Ei = hνi. When high-velocity electrons are used to ionize the atoms, they are produced by an electron gun inside a similar evacuated tube. The energy of the electron beam can be controlled by the acceleration voltages. The energy of these electrons that gives rise to a sharp onset of the current of ions and freed electrons through the tube will match the ionization energy of the atoms. Atoms: values and trends Generally, the (N+1)th ionization energy of a particular element is larger than the Nth ionization energy (it may also be noted that the ionization energy of an anion is generally less than that of cations and neutral atom for the same element). When the next ionization energy involves removing an electron from the same electron shell, the increase in ionization energy is primarily due to the increased net charge of the ion from which the electron is being removed. Electrons removed from more highly charged ions experience greater forces of electrostatic attraction; thus, their removal requires more energy. In addition, when the next ionization energy involves removing an electron from a lower electron shell, the greatly decreased distance between the nucleus and the electron also increases both the electrostatic force and the distance over which that force must be overcome to remove the electron. Both of these factors further increase the ionization energy. Some values for elements of the third period are given in the following table: Large jumps in the successive molar ionization energies occur when passing noble gas configurations. For example, as can be seen in the table above, the first two molar ionization energies of magnesium (stripping the two 3s electrons from a magnesium atom) are much smaller than the third, which requires stripping off a 2p electron from the neon configuration of Mg2+. That 2p electron is much closer to the nucleus than the 3s electrons removed previously. 
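In the photoionization measurement described above, the ion current rises steeply once the photon energy hν reaches Ei. The sketch below inverts that relation to give the threshold frequency and wavelength for a given ionization energy; the 13.6 eV used in the example is the well-known hydrogen value, assumed here rather than taken from the table mentioned in the text.

    H = 6.62607015e-34    # Planck constant, J*s
    C = 2.99792458e8      # speed of light, m/s
    EV = 1.602176634e-19  # joules per electronvolt

    def threshold_photon(ionization_energy_ev):
        """Return (frequency in Hz, wavelength in nm) at which photoionization sets in."""
        energy_j = ionization_energy_ev * EV
        nu = energy_j / H          # E_i = h * nu_i
        lam_nm = C / nu * 1e9
        return nu, lam_nm

    print(threshold_photon(13.6))  # roughly 3.3e15 Hz, ~91 nm (vacuum ultraviolet)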
Ionization energy is also a periodic trend within the periodic table. Moving left to right within a period, or upward within a group, the first ionization energy generally increases, with exceptions such as aluminium and sulfur in the table above. As the nuclear charge of the nucleus increases across the period, the electrostatic attraction increases between electrons and protons, hence the atomic radius decreases, and the electron cloud comes closer to the nucleus because the electrons, especially the outermost one, are held more tightly by the higher effective nuclear charge. On moving downward within a given group, the electrons are held in higher-energy shells with higher principal quantum number n, further from the nucleus and therefore are more loosely bound so that the ionization energy decreases. The effective nuclear charge increases only slowly so that its effect is outweighed by the increase in n. Exceptions in ionization energies There are exceptions to the general trend of rising ionization energies within a period. For example, the value decreases from beryllium (: 9.3 eV) to boron (: 8.3 eV), and from nitrogen (: 14.5 eV) to oxygen (: 13.6 eV). These dips can be explained in terms of electron configurations. Boron has its last electron in a 2p orbital, which has its electron density further away from the nucleus on average than the 2s electrons in the same shell. The 2s electrons then shield the 2p electron from the nucleus to some extent, and it is easier to remove the 2p electron from boron than to remove a 2s electron from beryllium, resulting in a lower ionization energy for B. In oxygen, the last electron shares a doubly occupied p-orbital with an electron of opposing spin. The two electrons in the same orbital are closer together on average than two electrons in different orbitals, so that they shield each other from the nucleus more effectively and it is easier to remove one electron, resulting in a lower ionization energy. Furthermore, after every noble gas element, the ionization energy drastically drops. This occurs because the outer electron in the alkali metals requires a much lower amount of energy to be removed from the atom than the inner shells. This also gives rise to low electronegativity values for the alkali metals. The trends and exceptions are summarized in the following subsections: Ionization energy decreases when Transitioning to a new period: an alkali metal easily loses one electron to leave an octet or pseudo-noble gas configuration, so those elements have only small values for IE. Moving from the s-block to the p-block: a p-orbital loses an electron more easily. An example is beryllium to boron, with electron configuration 1s2 2s2 2p1. The 2s electrons shield the higher-energy 2p electron from the nucleus, making it slightly easier to remove. This also happens from magnesium to aluminium. Occupying a p-subshell with its first electron with spin opposed to the other electrons: such as in nitrogen (: 14.5 eV) to oxygen (: 13.6 eV), as well as phosphorus (: 10.48 eV) to sulfur (: 10.36 eV). The reason for this is because oxygen, sulfur and selenium all have dipping ionization energies because of shielding effects. However, this discontinues starting from tellurium where the shielding is too small to produce a dip. Moving from the d-block to the p-block: as in the case of zinc (: 9.4 eV) to gallium (: 6.0 eV) Special case: decrease from lead (: 7.42 eV) to bismuth (: 7.29 eV). 
This cannot be attributed to size (the difference is minimal: lead has a covalent radius of 146 pm whereas bismuth's is 148 pm). This is due to the spin-orbit splitting of the 6p shell (lead is removing an electron from the stabilised 6p1/2 level, but bismuth is removing one from the destabilised 6p3/2 level). Predicted ionization energies show a much greater decrease from flerovium to moscovium, one row further down the periodic table and with much larger spin-orbit effects. Special case: decrease from radium (: 5.27 eV) to actinium (: 5.17 eV), which is a switch from an s to a d orbital. However the analogous switch from barium (: 5.2 eV) to lanthanum (: 5.6 eV) does not show a downward change. Lutetium () and lawrencium () both have ionization energies lower than the previous elements. In both cases the last electron added starts a new subshell: 5d for Lu with electron configuration [Xe] 4f14 5d1 6s2, and 7p for Lr with configuration [Rn] 5f4 7s2 7p1. These dips in ionization energies for lutetium and especially lawrencium show that these elements belong in the d-block, and not lanthanum and actinium. Ionization energy increases when Reaching Group 18 noble gas elements: This is due to their complete electron subshells, so that these elements require large amounts of energy to remove one electron. Group 12: The elements here, zinc (: 9.4 eV), cadmium (: 9.0 eV) and mercury (: 10.4 eV) all record sudden rising IE values in contrast to their preceding elements: copper (: 7.7 eV), silver (: 7.6 eV) and gold (: 9.2 eV), respectively. For mercury, it can be extrapolated that the relativistic stabilization of the 6s electrons increases the ionization energy, in addition to poor shielding by 4f electrons that increases the effective nuclear charge on the outer valence electrons. In addition, the closed-subshells electron configurations: [Ar] 3d10 4s2, [Kr] 4d105s2 and [Xe] 4f14 5d10 6s2 provide increased stability. Special case: shift from rhodium (: 7.5 eV) to palladium (: 8.3 eV). Unlike other Group 10 elements, palladium has a higher ionization energy than the preceding atom, due to its electron configuration. In contrast to nickel's [Ar] 3d8 4s2, and platinum's [Xe] 4f14 5d9 6s1, palladium's electron configuration is [Kr] 4d10 5s0 (even though the Madelung rule predicts [Kr] 4d8 5s2). Finally, silver's lower IE (: 7.6 eV) further accentuates the high value for palladium; the single added s electron is removed with a lower ionization energy than palladium, which emphasizes palladium's high IE (as shown in the above linear table values for IE) The IE of gadolinium (: 6.15 eV) is somewhat higher than both the preceding (: 5.64 eV), (: 5.67 eV) and following elements (: 5.86 eV), (: 5.94 eV). This anomaly is due to the fact that gadolinium valence d-subshell borrows 1 electron from the valence f-subshell. Now the valence subshell is the d-subshell, and due to the poor shielding of positive nuclear charge by electrons of the f-subshell, the electron of the valence d-subshell experiences a greater attraction to the nucleus, therefore increasing the energy required to remove the (outermost) valence electron. Moving into d-block elements: The elements Sc with a 3d1 electronic configuration has a higher IP (: 6.56 eV) than the preceding element (: 6.11 eV), contrary to the decreases on moving into s-block and p-block elements. 
The 4s and 3d electrons have similar shielding ability: the 3d orbital forms part of the n=3 shell whose average position is closer to the nucleus than the 4s orbital and the n=4 shell, but electrons in s orbitals experience greater penetration into the nucleus than electrons in d orbitals. So the mutual shielding of 3d and 4s electrons is weak, and the effective nuclear charge acting on the ionized electron is relatively large. Yttrium () similarly has a higher IP (6.22 eV) than : 5.69 eV. Moving into f-block elements: the elements (: 5.18 eV) and (: 5.17 eV) have only very slightly lower IPs than their preceding elements (: 5.21 eV) and (: 5.18 eV), though their atoms are anomalies in that they add a d-electron rather than an f-electron. As can be seen in the above graph for ionization energies, the sharp rise in IE values from (: 3.89 eV) to (: 5.21 eV) is followed by only a small increase (with some fluctuations) as the f-block proceeds from to . This is due to the lanthanide contraction (for lanthanides). The decrease in ionic radius is associated with an increase in ionization energy, since the two properties correlate with each other. As for d-block elements, the electrons are added to an inner shell, so that no new shells are formed. The shape of the added orbitals prevents them from penetrating to the nucleus, so the electrons occupying them have less shielding capacity. Ionization energy anomalies in groups Ionization energy values tend to decrease on going to heavier elements within a group, as shielding is provided by more electrons and, overall, the valence shells experience a weaker attraction from the nucleus, attributable to the larger covalent radius, which increases on going down a group. Nonetheless, this is not always the case. As one exception, in Group 10 palladium (: 8.34 eV) has a higher ionization energy than nickel (: 7.64 eV), contrary to the general decrease for the elements from technetium to xenon. Such anomalies are summarized below: Group 1: Hydrogen's ionization energy is very high (13.59844 eV) compared to the alkali metals. This is due to its single electron (and hence very small electron cloud), which is close to the nucleus. Likewise, since there are no other electrons that may cause shielding, that single electron experiences the full net positive charge of the nucleus. Francium's ionization energy is higher than that of the preceding alkali metal, cesium. This is due to its (and radium's) small ionic radii owing to relativistic effects. Because of their large mass and size, their electrons travel at extremely high speeds, which results in the electrons coming closer to the nucleus than expected, and they are consequently harder to remove (higher IE). Group 2: Radium's ionization energy is higher than that of its antecedent alkaline earth metal barium which, as for francium, is due to relativistic effects. The electrons, especially the 1s electrons, experience very high effective nuclear charges. To avoid falling into the nucleus, the 1s electrons must move at very high speeds, which causes the special relativistic corrections to be substantially higher than the approximate classical momenta. By the uncertainty principle, this causes a relativistic contraction of the 1s orbital (and other orbitals with electron density close to the nucleus, especially ns and np orbitals). This in turn causes a cascade of electron changes, which finally results in the outermost electron shells contracting and getting closer to the nucleus. 
Group 4: Hafnium's near similarity in IE with zirconium. The effects of the lanthanide contraction can still be felt after the lanthanides. It can be seen through the former's smaller atomic radius (which contradicts the observed periodic trend ) at 159 pm (empirical value), which differs from the latter's 155 pm. This in turn makes its ionization energies increase by 18 kJ/mol−1. Titanium's IE is smaller than that of both hafnium and zirconium. Hafnium's ionization energy is similar to zirconium's due to lanthanide contraction. However, why zirconium's ionization energy is higher than the preceding elements' remains unclear; we cannot attribute it to atomic radius as it is higher for zirconium and hafnium by 15 pm. We also cannot invoke the condensed ionization energy, as it is more or less the same ([Ar] 3d2 4s2 for titanium, whereas [Kr] 4d2 5s2 for zirconium). Additionally, there are no half-filled nor fully filled orbitals we might compare. Hence, we can only invoke zirconium's full electron configuration, which is 1s22s22p63s23p63d104s24p64d25s2. The presence of a full 3d-block sublevel is tantamount to a higher shielding efficiency compared to the 4d-block elements (which are only two electrons). Group 5: akin to Group 4, niobium and tantalum are analogous to each other, due to their electron configuration and to the lanthanide contraction affecting the latter element. Ipso facto, their significant rise in IE compared to the foremost element in the group, vanadium, can be attributed due to their full d-block electrons, in addition to their electron configuration. Another intriguing notion is niobium's half-filled 5s orbital; due to repulsion and exchange energy (in other words the "costs" of putting an electron in a low-energy sublevel to completely fill it instead of putting the electron in a high-energy one) overcoming the energy gap between s- and d-(or f) block electrons, the EC does not follow the Madelung rule. Group 6: like its forerunners groups 4 and 5, group 6 also record high values when moving downward. Tungsten is once again similar to molybdenum due to their electron configurations. Likewise, it is also attributed to the full 3d-orbital in its electron configuration. Another reason is molybdenum's half filled 4d orbital due to electron pair energies violating the aufbau principle. Groups 7-12 6th period elements (rhenium, osmium, iridium, platinum, gold and mercury): All of these elements have extremely high ionization energies compared to the elements preceding them in their respective groups. The essence of this is due to the lanthanide contraction's influence on post lanthanides, in addition to the relativistic stabilization of the 6s orbital. Group 13: Gallium's IE is higher than aluminum's. This is once again due to d-orbitals, in addition to scandide contraction, providing weak shielding, and hence the effective nuclear charges are augmented. Thallium's IE, due to poor shielding of 4f electrons in addition to lanthanide contraction, causes its IE to be increased in contrast to its precursor indium. Group 14: Lead's unusually high ionization energy (: 7.42 eV) is, akin to that of group 13's thallium, a result of the full 5d and 4f subshells. The lanthanide contraction and the inefficient screening of the nucleus by the 4f electrons results in slightly higher ionization energy for lead than for tin (: 7.34 eV). 
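The rises and dips discussed in the preceding subsections can be seen directly in tabulated first ionization energies. The following sketch scans period 3 for the places where the left-to-right increase is broken; the values are approximate literature figures (consistent with the few values quoted above, e.g. for P and S), supplied here only for illustration.

    # Approximate first ionization energies (eV) for period 3, rounded literature values.
    PERIOD_3 = [
        ("Na", 5.14), ("Mg", 7.65), ("Al", 5.99), ("Si", 8.15),
        ("P", 10.49), ("S", 10.36), ("Cl", 12.97), ("Ar", 15.76),
    ]

    def find_dips(elements):
        """Return the element pairs where the first ionization energy drops instead of rising."""
        dips = []
        for (sym1, ie1), (sym2, ie2) in zip(elements, elements[1:]):
            if ie2 < ie1:
                dips.append((sym1, sym2))
        return dips

    print(find_dips(PERIOD_3))  # [('Mg', 'Al'), ('P', 'S')] - the s-to-p and spin-pairing dips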
Bohr model for hydrogen atom The ionization energy of the hydrogen atom can be evaluated in the Bohr model, which predicts that the atomic energy level n has energy En = −RH/n², where RH ≈ 13.6 eV is the Rydberg constant for the hydrogen atom. For hydrogen in the ground state n = 1, so that the energy of the atom before ionization is simply E = −RH ≈ −13.6 eV. After ionization, the energy is zero for a motionless electron infinitely far from the proton, so that the ionization energy is Ei = −E = RH ≈ 13.6 eV. This agrees with the experimental value for the hydrogen atom. Quantum-mechanical explanation According to the more complete theory of quantum mechanics, the location of an electron is best described as a probability distribution within an electron cloud, i.e. an atomic orbital. The energy can be calculated by integrating over this cloud. The cloud's underlying mathematical representation is the wavefunction, which is built from Slater determinants consisting of molecular spin orbitals. These are related by Pauli's exclusion principle to the antisymmetrized products of the atomic or molecular orbitals. There are two main ways in which ionization energy is calculated. In general, the computation for the Nth ionization energy requires calculating the energies of the systems before and after removal of the electron, i.e. of the (Z − N + 1)-electron and (Z − N)-electron systems. Calculating these energies exactly is not possible except for the simplest systems (i.e. hydrogen and hydrogen-like elements), primarily because of difficulties in integrating the electron correlation terms. Therefore, approximation methods are routinely employed, with different methods varying in complexity (computational time) and accuracy compared to empirical data. This has become a well-studied problem and is routinely done in computational chemistry. The second way of calculating ionization energies is mainly used at the lowest level of approximation, where the ionization energy is provided by Koopmans' theorem, which involves the highest occupied molecular orbital or "HOMO" and the lowest unoccupied molecular orbital or "LUMO", and states that the ionization energy of an atom or molecule is equal to the negative value of the energy of the orbital from which the electron is ejected. This means that the ionization energy is equal to the negative of the HOMO energy, which in a formal equation can be written as Ei = −EHOMO. Molecules: vertical and adiabatic ionization energy Ionization of molecules often leads to changes in molecular geometry, and two types of (first) ionization energy are defined – adiabatic and vertical. Adiabatic ionization energy The adiabatic ionization energy of a molecule is the minimum amount of energy required to remove an electron from a neutral molecule, i.e. the difference between the energy of the vibrational ground state of the neutral species (v" = 0 level) and that of the positive ion (v' = 0). The specific equilibrium geometry of each species does not affect this value. Vertical ionization energy Due to the possible changes in molecular geometry that may result from ionization, additional transitions may exist between the vibrational ground state of the neutral species and vibrational excited states of the positive ion. In other words, ionization is accompanied by vibrational excitation. The intensity of such transitions is explained by the Franck–Condon principle, which predicts that the most probable and intense transition corresponds to the vibrationally excited state of the positive ion that has the same geometry as the neutral molecule. 
This transition is referred to as the "vertical" ionization energy since it is represented by a completely vertical line on a potential energy diagram (see Figure). For a diatomic molecule, the geometry is defined by the length of a single bond. The removal of an electron from a bonding molecular orbital weakens the bond and increases the bond length. In Figure 1, the lower potential energy curve is for the neutral molecule and the upper surface is for the positive ion. Both curves plot the potential energy as a function of bond length. The horizontal lines correspond to vibrational levels with their associated vibrational wave functions. Since the ion has a weaker bond, it will have a longer bond length. This effect is represented by shifting the minimum of the potential energy curve to the right of the neutral species. The adiabatic ionization is the diagonal transition to the vibrational ground state of the ion. Vertical ionization may involve vibrational excitation of the ionic state and therefore requires greater energy. In many circumstances, the adiabatic ionization energy is often a more interesting physical quantity since it describes the difference in energy between the two potential energy surfaces. However, due to experimental limitations, the adiabatic ionization energy is often difficult to determine, whereas the vertical detachment energy is easily identifiable and measurable. Analogs of ionization energy to other systems While the term ionization energy is largely used only for gas-phase atomic, cationic, or molecular species, there are a number of analogous quantities that consider the amount of energy required to remove an electron from other physical systems. Electron binding energy Electron binding energy is a generic term for the minimum energy needed to remove an electron from a particular electron shell for an atom or ion, due to these negatively charged electrons being held in place by the electrostatic pull of the positively charged nucleus. For example, the electron binding energy for removing a 3p3/2 electron from the chloride ion is the minimum amount of energy required to remove an electron from the chlorine atom when it has a charge of −1. In this particular example, the electron binding energy has the same magnitude as the electron affinity for the neutral chlorine atom. In another example, the electron binding energy refers to the minimum amount of energy required to remove an electron from the dicarboxylate dianion −O2C(CH2)8CO. The graph to the right shows the binding energy for electrons in different shells in neutral atoms. The ionization energy is the lowest binding energy for a particular atom (although these are not all shown in the graph). Solid surfaces: work function Work function is the minimum amount of energy required to remove an electron from a solid surface, where the work function for a given surface is defined by the difference where is the charge of an electron, is the electrostatic potential in the vacuum nearby the surface, and is the Fermi level (electrochemical potential of electrons) inside the material. Note See also Rydberg equation, a calculation that could determine the ionization energies of hydrogen and hydrogen-like elements. This is further elaborated through this site. Electron affinity, a closely related concept describing the energy released by adding an electron to a neutral atom or molecule. Lattice energy, a measure of the energy released when ions are combined to make a compound. 
Electronegativity is a number that shares some similarities with ionization energy. Koopmans' theorem, regarding the predicted ionization energies in Hartree–Fock theory. Ditungsten tetra(hpp) has the lowest recorded ionization energy for a stable chemical compound. Bond-dissociation energy, the measure of the strength of a chemical bond calculated through cleaving by homolysis giving two radical fragments A and B and subsequent evaluation of the enthalpy change Bond energy, the average measure of a chemical bond's strength, calculated through the amount of heat needed to break all of the chemical bonds into individual atoms. References Sources Ions Molecular physics Atomic physics Chemical properties Quantum chemistry Binding energy
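As a minimal illustration of the Bohr-model and Koopmans'-theorem estimates discussed earlier in this article, the following Python sketch converts energy levels into predicted ionization energies; the Rydberg energy is the standard value in electronvolts, and the HOMO energy passed to the Koopmans function is a purely hypothetical example value, not data from the article.

RYDBERG_EV = 13.605693  # Rydberg energy for hydrogen, in electronvolts

def bohr_level_energy(n: int) -> float:
    """Energy of the n-th Bohr level of hydrogen, E_n = -R_H / n**2, in eV."""
    return -RYDBERG_EV / n ** 2

def bohr_ionization_energy(n: int = 1) -> float:
    """Energy to take the electron from level n to rest infinitely far away (E = 0)."""
    return 0.0 - bohr_level_energy(n)

def koopmans_ionization_energy(homo_energy_ev: float) -> float:
    """Koopmans' theorem: ionization energy equals minus the HOMO orbital energy."""
    return -homo_energy_ev

if __name__ == "__main__":
    # Ground state (n = 1): prints about 13.606 eV, the value quoted in the article.
    print(f"Bohr ionization energy of H: {bohr_ionization_energy(1):.3f} eV")
    # Hypothetical HOMO energy of -10.5 eV, used only to show the sign convention.
    print(f"Koopmans estimate for E_HOMO = -10.5 eV: {koopmans_ionization_energy(-10.5):.1f} eV")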
Ionization energy
[ "Physics", "Chemistry" ]
6,214
[ "Molecular physics", "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Atomic physics", " molecular", "nan", "Atomic", "Ions", "Matter", " and optical physics" ]
59,615
https://en.wikipedia.org/wiki/Electric%20potential
Electric potential (also called the electric field potential, potential drop, the electrostatic potential) is defined as the amount of work/energy needed per unit of electric charge to move the charge from a reference point to a specific point in an electric field. More precisely, the electric potential is the energy per unit charge for a test charge that is so small that the disturbance of the field under consideration is negligible. The motion across the field is supposed to proceed with negligible acceleration, so as to avoid the test charge acquiring kinetic energy or producing radiation. By definition, the electric potential at the reference point is zero units. Typically, the reference point is earth or a point at infinity, although any point can be used. In classical electrostatics, the electrostatic field is a vector quantity expressed as the gradient of the electrostatic potential, which is a scalar quantity denoted by or occasionally , equal to the electric potential energy of any charged particle at any location (measured in joules) divided by the charge of that particle (measured in coulombs). By dividing out the charge on the particle a quotient is obtained that is a property of the electric field itself. In short, an electric potential is the electric potential energy per unit charge. This value can be calculated in either a static (time-invariant) or a dynamic (time-varying) electric field at a specific time with the unit joules per coulomb (J⋅C−1) or volt (V). The electric potential at infinity is assumed to be zero. In electrodynamics, when time-varying fields are present, the electric field cannot be expressed only as a scalar potential. Instead, the electric field can be expressed as both the scalar electric potential and the magnetic vector potential. The electric potential and the magnetic vector potential together form a four-vector, so that the two kinds of potential are mixed under Lorentz transformations. Practically, the electric potential is a continuous function in all space, because a spatial derivative of a discontinuous electric potential yields an electric field of impossibly infinite magnitude. Notably, the electric potential due to an idealized point charge (proportional to , with the distance from the point charge) is continuous in all space except at the location of the point charge. Though electric field is not continuous across an idealized surface charge, it is not infinite at any point. Therefore, the electric potential is continuous across an idealized surface charge. Additionally, an idealized line of charge has electric potential (proportional to , with the radial distance from the line of charge) is continuous everywhere except on the line of charge. Introduction Classical mechanics explores concepts such as force, energy, and potential. Force and potential energy are directly related. A net force acting on any object will cause it to accelerate. As an object moves in the direction of a force acting on it, its potential energy decreases. For example, the gravitational potential energy of a cannonball at the top of a hill is greater than at the base of the hill. As it rolls downhill, its potential energy decreases and is being translated to motion – kinetic energy. It is possible to define the potential of certain force fields so that the potential energy of an object in that field depends only on the position of the object with respect to the field. 
Two such force fields are a gravitational field and an electric field (in the absence of time-varying magnetic fields). Such fields affect objects because of the intrinsic properties (e.g., mass or charge) and positions of the objects. An object may possess a property known as electric charge. Since an electric field exerts force on a charged object, if the object has a positive charge, the force will be in the direction of the electric field vector at the location of the charge; if the charge is negative, the force will be in the opposite direction. The magnitude of force is given by the quantity of the charge multiplied by the magnitude of the electric field vector, Electrostatics An electric potential at a point in a static electric field is given by the line integral where is an arbitrary path from some fixed reference point to ; it is uniquely determined up to a constant that is added or subtracted from the integral. In electrostatics, the Maxwell-Faraday equation reveals that the curl is zero, making the electric field conservative. Thus, the line integral above does not depend on the specific path chosen but only on its endpoints, making well-defined everywhere. The gradient theorem then allows us to write: This states that the electric field points "downhill" towards lower voltages. By Gauss's law, the potential can also be found to satisfy Poisson's equation: where is the total charge density and denotes the divergence. The concept of electric potential is closely linked with potential energy. A test charge, , has an electric potential energy, , given by The potential energy and hence, also the electric potential, is only defined up to an additive constant: one must arbitrarily choose a position where the potential energy and the electric potential are zero. These equations cannot be used if i.e., in the case of a non-conservative electric field (caused by a changing magnetic field; see Maxwell's equations). The generalization of electric potential to this case is described in the section . Electric potential due to a point charge The electric potential arising from a point charge, , at a distance, , from the location of is observed to be where is the permittivity of vacuum, is known as the Coulomb potential. Note that, in contrast to the magnitude of an electric field due to a point charge, the electric potential scales respective to the reciprocal of the radius, rather than the radius squared. The electric potential at any location, , in a system of point charges is equal to the sum of the individual electric potentials due to every point charge in the system. This fact simplifies calculations significantly, because addition of potential (scalar) fields is much easier than addition of the electric (vector) fields. Specifically, the potential of a set of discrete point charges at points becomes where is a point at which the potential is evaluated; is a point at which there is a nonzero charge; and is the charge at the point . And the potential of a continuous charge distribution becomes where is a point at which the potential is evaluated; is a region containing all the points at which the charge density is nonzero; is a point inside ; and is the charge density at the point . The equations given above for the electric potential (and all the equations used here) are in the forms required by SI units. In some other (less common) systems of units, such as CGS-Gaussian, many of these equations would be altered. 
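As a minimal numerical sketch of the superposition formula above (assuming SI units; the charge values and positions are example data, not taken from the article), the potential of a set of point charges can be summed directly in Python:

import math

EPS0 = 8.8541878128e-12           # vacuum permittivity, F/m
K = 1.0 / (4.0 * math.pi * EPS0)  # Coulomb constant, about 8.99e9 N·m²/C²

def potential(point, charges):
    """Electric potential (volts) at `point` from a list of (charge, position) pairs."""
    v = 0.0
    for q, pos in charges:
        r = math.dist(point, pos)  # distance from the field point to the charge
        if r == 0.0:
            raise ValueError("potential is singular at the location of a point charge")
        v += K * q / r             # superposition: scalar potentials simply add
    return v

if __name__ == "__main__":
    # Example data: +1 nC at the origin and -1 nC at (0.1 m, 0, 0).
    charges = [(1e-9, (0.0, 0.0, 0.0)), (-1e-9, (0.1, 0.0, 0.0))]
    print(f"V at (0.03, 0, 0) m: {potential((0.03, 0.0, 0.0), charges):.1f} V")  # about 171 V

Because the potential is a scalar, each charge contributes a single number per field point, which is what makes this superposition so much simpler than summing the vector electric fields.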
Generalization to electrodynamics When time-varying magnetic fields are present (which is true whenever there are time-varying electric fields and vice versa), it is not possible to describe the electric field simply as a scalar potential because the electric field is no longer conservative: is path-dependent because (due to the Maxwell-Faraday equation). Instead, one can still define a scalar potential by also including the magnetic vector potential . In particular, is defined to satisfy: where is the magnetic field. By the fundamental theorem of vector calculus, such an can always be found, since the divergence of the magnetic field is always zero due to the absence of magnetic monopoles. Now, the quantity is a conservative field, since the curl of is canceled by the curl of according to the Maxwell–Faraday equation. One can therefore write where is the scalar potential defined by the conservative field . The electrostatic potential is simply the special case of this definition where is time-invariant. On the other hand, for time-varying fields, unlike electrostatics. Gauge freedom The electrostatic potential could have any constant added to it without affecting the electric field. In electrodynamics, the electric potential has infinitely many degrees of freedom. For any (possibly time-varying or space-varying) scalar field, , we can perform the following gauge transformation to find a new set of potentials that produce exactly the same electric and magnetic fields: Given different choices of gauge, the electric potential could have quite different properties. In the Coulomb gauge, the electric potential is given by Poisson's equation just like in electrostatics. However, in the Lorenz gauge, the electric potential is a retarded potential that propagates at the speed of light and is the solution to an inhomogeneous wave equation: Units The SI derived unit of electric potential is the volt (in honor of Alessandro Volta), denoted as V, which is why the electric potential difference between two points in space is known as a voltage. Older units are rarely used today. Variants of the centimetre–gram–second system of units included a number of different units for electric potential, including the abvolt and the statvolt. Galvani potential versus electrochemical potential Inside metals (and other solids and liquids), the energy of an electron is affected not only by the electric potential, but also by the specific atomic environment that it is in. When a voltmeter is connected between two different types of metal, it measures the potential difference corrected for the different atomic environments. The quantity measured by a voltmeter is called electrochemical potential or fermi level, while the pure unadjusted electric potential, , is sometimes called the Galvani potential, . The terms "voltage" and "electric potential" are a bit ambiguous but one may refer to of these in different contexts. Common formulas See also Absolute electrode potential Electrochemical potential Electrode potential References Further reading Potentials Electrostatics Voltage Electromagnetic quantities
Electric potential
[ "Physics", "Mathematics" ]
1,996
[ "Electromagnetic quantities", "Physical quantities", "Electrical systems", "Quantity", "Physical systems", "Voltage", "Wikipedia categories named after physical quantities" ]
59,616
https://en.wikipedia.org/wiki/Fermentation%20theory
In biochemistry, fermentation theory refers to the historical study of models of natural fermentation processes, especially alcoholic and lactic acid fermentation. Notable contributors to the theory include Justus von Liebig and Louis Pasteur, the latter of whom developed a purely microbial basis for the fermentation process based on his experiments. Pasteur's work on fermentation later led to his development of the germ theory of disease, which put the concept of spontaneous generation to rest. Although the fermentation process had been used extensively throughout history prior to the origin of Pasteur's prevailing theories, the underlying biological and chemical processes were not fully understood. Today, fermentation is used in the production of various alcoholic beverages, foodstuffs, and medications. Overview of fermentation Fermentation is the anaerobic metabolic process that converts sugar into acids, gases, or alcohols in oxygen-starved environments. Yeast and many other microbes commonly use fermentation to carry out anaerobic respiration necessary for survival. Even the human body carries out fermentation processes from time to time, such as during long-distance running; lactic acid will build up in muscles over the course of long-term exertion. Within the human body, lactic acid is the by-product of ATP-producing fermentation, which produces energy so the body can continue to exercise in situations where oxygen intake cannot be processed fast enough. Although fermentation yields less ATP than aerobic respiration, it can occur at a much higher rate. Fermentation has been used by humans consciously since around 5000 BCE, evidenced by jars recovered in the Zagros Mountains area of Iran containing remnants of microbes similar to those present in the wine-making process. History Prior to Pasteur's research on fermentation, there existed some preliminary competing notions of it. One scientist who had a substantial degree of influence on the theory of fermentation was Justus von Liebig. Liebig believed that fermentation was largely a process of decomposition as a consequence of the exposure of yeast to air and water. This theory was corroborated by Liebig's observation that other decomposing matter, such as rotten plant and animal parts, interacted with sugar in a similar manner to yeast. That is, the decomposition of albuminous matter (i.e. water-soluble proteins) caused sugar to transform to alcohol. Liebig held this view until his death in 1873. A different theory was supported by Charles Cagniard de la Tour and cell theorist Theodor Schwann, who claimed that alcoholic fermentation depended on the biological processes carried out by brewer's yeast. Louis Pasteur's interest in fermentation began when he noticed some remarkable properties of amyl alcohol—a by-product of lactic acid and alcohol fermentation—during his biochemical studies. In particular, Pasteur noted its ability to “rotate the plane of polarized light”, and its “unsymmetric arrangement of atoms." These behaviors were characteristic of organic compounds Pasteur had previously examined, but also presented a hurdle to his own research about a "law of hemihedral correlation". Pasteur had previously been attempting to derive connections between substances' chemical structures and external shape, and the optically active amyl alcohol did not follow his expectations according to the proposed 'law'.
Pasteur sought a reason for why there happened to be this exception, and why such a chemical compound was generated during the fermentation process in the first place. In a series of lectures later in 1860, Pasteur attempted to link optical activity and molecular asymmetry to organic origins of substances, asserting that no chemical processes were capable of converting symmetric substances (inorganic) into asymmetric ones (organic). Hence, the amyl alcohol observation provided some of the first motivations for a biological explanation of fermentation. In 1856, Pasteur was able to observe the microbes responsible for alcoholic fermentation under a microscope, as a professor of science in the University of Lille. According to a legend originating in the 1900 biography of Pasteur, one of his chemistry students—an owner of a beetroot alcohol factory in Lille—sought aid from him after an unsuccessful year of brewing. Pasteur performed experiments at the factory in observation of the fermentation process, noticing that yeast globules became elongated after lactic acid was formed, but round and full when alcohol was fermenting correctly. In a different observation, Pasteur inspected particles originating on grapevines under the microscope and revealed the presence of living cells. Leaving these cells immersed in grape juice resulted in active alcoholic fermentation. This observation provided evidence for ending the distinction between ‘artificial’ fermentation in wine and ‘true’ fermentation in yeast products. The previous incorrect distinction had stemmed in part from the fact that yeast had to be added to beer wort in order to provoke desired alcoholic fermentation, while the fermenting catalysts for wine occurred naturally on grapevines; the fermentation of wine had been viewed as 'artificial' since it did not require additional catalyst, but the natural catalyst had been present on the grapevine itself. These observations provided Pasteur with a working hypothesis for future experiments. One of the chemical processes that Pasteur studied was the fermentation of sugar into lactic acid, as occurs in the souring of milk. In an 1857 experiment, Pasteur was able to isolate microorganisms present in lactic acid ferment after the chemical process had taken place. Pasteur then cultivated the microorganisms in a culture with his laboratory. He was then able to accelerate the lactic acid fermentation process in fresh milk by administering the cultivated sample to it. This was an important step in proving his hypothesis that lactic acid fermentation was catalyzed by microorganisms. Pasteur also experimented with the mechanisms of brewer's yeast in the absence of organic nitrogen. By adding pure brewer's yeast to a solution of cane sugar, ammonium salt, and yeast ash, Pasteur was able to observe the alcoholic fermentation process with all of its usual byproducts: glycerin, succinic acid, and small amounts of cellulose and fatty matters. However, if any of the ingredients were removed from the solution, no fermentation would occur. To Pasteur, this was proof that yeast required the nitrogen, minerals, and carbon from the medium for its metabolic processes, releasing carbonic acid and ethyl alcohol as byproducts. This also disproved Liebig's theory, since there was no albuminous matter present in the medium; the decomposition of the yeast was not the driving force for the observed fermentation. 
Pasteur on spontaneous generation Before the 1860s and 1870s—when Pasteur published his work on this theory—it was believed that microorganisms and even some small animals such as frogs would spontaneously generate. Spontaneous generation was historically explained in a variety of ways. Aristotle, an ancient Greek philosopher, theorized that creatures appeared out of certain concoctions of earthly elements, such as clay or mud mixing with water and sunlight. Later on, Felix Pouchet argued for the existence of 'plastic forces' within plant and animal debris capable of spontaneously generating eggs, and new organisms were born from these eggs. On top of this, a common piece of evidence that seemed to corroborate the theory was the appearance of maggots on raw meat after it was left exposed to open air. In the 1860s and 1870s, Pasteur's interest in spontaneous generation led him to criticize Pouchet's theories and conduct experiments of his own. In his first experiment, he took boiled sugared yeast-water and sealed it in an airtight contraption. Feeding hot, sterile air into the mixture left it unaltered, while introducing atmospheric dust resulted in microbes and mold appearing within the mixture. This result was also strengthened by the fact that Pasteur used asbestos, a form of totally inorganic matter, to carry the atmospheric dust. In a second experiment, Pasteur used the same flasks and sugar-yeast mixture, but left it idle in 'swan-neck' flasks instead of introducing any extraneous matter. Some flasks were kept open to the common air as the control group, and these exhibited mold and microbial growths within a day or two. When the swan-neck flasks failed to show these same microbial growths, Pasteur concluded that the structure of the necks blocked the passage of atmospheric dust into the solution. From the two experiments, Pasteur concluded that the atmospheric dust carried germs responsible for the 'spontaneous generation' in his broths. Thus, Pasteur's work provided proof that the emergent growth of bacteria in nutrient broths is caused by biogenesis rather than some form of spontaneous generation. Applications Today, the process of fermentation is used for a multitude of everyday applications including medication, beverages and food. Currently, companies like Genencor International use the production of enzymes involved in fermentation to build revenues of over $400 million a year. Many medications such as antibiotics are produced by the fermentation process. An example is the important drug cortisone, which can be prepared by the fermentation of a plant steroid known as diosgenin. The enzymes used in the reaction are provided by the mold Rhizopus nigricans. As is commonly known, alcohol of all types is also produced by way of fermentation and distillation. Moonshine is a classic example of how this is carried out. Finally, foods such as yogurt are made by fermentation processes as well. Yogurt is a fermented milk product that contains the characteristic bacterial cultures Lactobacillus bulgaricus and Streptococcus thermophilus. See also Cellular respiration Distillation Fermentation in food processing Louis Pasteur Spontaneous generation Zymotic diseases (for the Greek language term zumoun for "ferment") References Obsolete medical theories Microbiology History of science Biology theories Metabolism Louis Pasteur
Fermentation theory
[ "Chemistry", "Technology", "Biology" ]
2,109
[ "History of science", "Microbiology", "Biology theories", "Cellular processes", "Metabolism", "Microscopy", "Biochemistry", "History of science and technology" ]
59,623
https://en.wikipedia.org/wiki/Endomorphism%20ring
In mathematics, the endomorphisms of an abelian group X form a ring. This ring is called the endomorphism ring of X, denoted by End(X); the set of all homomorphisms of X into itself. Addition of endomorphisms arises naturally in a pointwise manner and multiplication via endomorphism composition. Using these operations, the set of endomorphisms of an abelian group forms a (unital) ring, with the zero map as additive identity and the identity map as multiplicative identity. The functions involved are restricted to what is defined as a homomorphism in the context, which depends upon the category of the object under consideration. The endomorphism ring consequently encodes several internal properties of the object. As the endomorphism ring is often an algebra over some ring R, this may also be called the endomorphism algebra. An abelian group is the same thing as a module over the ring of integers, which is the initial object in the category of rings. In a similar fashion, if R is any commutative ring, the endomorphisms of an R-module form an algebra over R by the same axioms and derivation. In particular, if R is a field, its modules M are vector spaces and the endomorphism ring of each is an algebra over the field R. Description Let be an abelian group and we consider the group homomorphisms from A into A. Then addition of two such homomorphisms may be defined pointwise to produce another group homomorphism. Explicitly, given two such homomorphisms f and g, the sum of f and g is the homomorphism . Under this operation End(A) is an abelian group. With the additional operation of composition of homomorphisms, End(A) is a ring with multiplicative identity. This composition is explicitly . The multiplicative identity is the identity homomorphism on A. The additive inverses are the pointwise inverses. If the set A does not form an abelian group, then the above construction is not necessarily well-defined, as then the sum of two homomorphisms need not be a homomorphism. However, the closure of the set of endomorphisms under the above operations is a canonical example of a near-ring that is not a ring. Properties Endomorphism rings always have additive and multiplicative identities, respectively the zero map and identity map. Endomorphism rings are associative, but typically non-commutative. If a module is simple, then its endomorphism ring is a division ring (this is sometimes called Schur's lemma). A module is indecomposable if and only if its endomorphism ring does not contain any non-trivial idempotent elements. If the module is an injective module, then indecomposability is equivalent to the endomorphism ring being a local ring. For a semisimple module, the endomorphism ring is a von Neumann regular ring. The endomorphism ring of a nonzero right uniserial module has either one or two maximal right ideals. If the module is Artinian, Noetherian, projective or injective, then the endomorphism ring has a unique maximal ideal, so that it is a local ring. The endomorphism ring of an Artinian uniform module is a local ring. The endomorphism ring of a module with finite composition length is a semiprimary ring. The endomorphism ring of a continuous module or discrete module is a clean ring. If an R module is finitely generated and projective (that is, a progenerator), then the endomorphism ring of the module and R share all Morita invariant properties. A fundamental result of Morita theory is that all rings equivalent to R arise as endomorphism rings of progenerators. 
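A small computational sketch of the ring operations just described, assuming the abelian group (Z/5Z)^2 with endomorphisms written as 2-by-2 matrices mod 5; the group, matrices, and test element are chosen purely for illustration.

N = 5  # modulus; the group is A = (Z/5Z)^2

def add_endo(f, g):
    """Pointwise sum of endomorphisms: (f + g)(x) = f(x) + g(x)."""
    return [[(f[i][j] + g[i][j]) % N for j in range(2)] for i in range(2)]

def compose_endo(f, g):
    """Ring multiplication, i.e. composition f ∘ g: matrix product mod N."""
    return [[sum(f[i][k] * g[k][j] for k in range(2)) % N for j in range(2)]
            for i in range(2)]

def apply_endo(f, x):
    """Apply the endomorphism f to a group element x of (Z/NZ)^2."""
    return tuple(sum(f[i][k] * x[k] for k in range(2)) % N for i in range(2))

if __name__ == "__main__":
    f = [[1, 2], [0, 3]]   # arbitrary example endomorphisms
    g = [[4, 1], [2, 2]]
    x = (1, 3)             # arbitrary group element
    # Addition is pointwise: (f + g)(x) equals f(x) + g(x) componentwise mod N.
    lhs = apply_endo(add_endo(f, g), x)
    rhs = tuple((a + b) % N for a, b in zip(apply_endo(f, x), apply_endo(g, x)))
    print(lhs == rhs)  # True
    # Multiplication is composition: (f*g)(x) equals f(g(x)).
    print(apply_endo(compose_endo(f, g), x) == apply_endo(f, apply_endo(g, x)))  # True

Both checks print True: addition of endomorphisms is pointwise, while the ring product is composition, exactly as in the description above.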
Examples In the category of R-modules, the endomorphism ring of an R-module M will only use the R-module homomorphisms, which are typically a proper subset of the abelian group homomorphisms. When M is a finitely generated projective module, the endomorphism ring is central to Morita equivalence of module categories. For any abelian group A and positive integer n, M_n(End(A)) ≅ End(A^n), since any matrix in M_n(End(A)) carries a natural homomorphism structure of A^n, acting on n-tuples of elements of A by the usual rule of matrix multiplication. One can use this isomorphism to construct many non-commutative endomorphism rings. For example, End(Z × Z) ≅ M_2(Z), since End(Z) ≅ Z. Also, when K is a field, there is a canonical isomorphism End_K(K) ≅ K, so End_K(K^n) ≅ M_n(K), that is, the endomorphism ring of a K-vector space is identified with the ring of n-by-n matrices with entries in K. More generally, the endomorphism algebra of the free module M = R^n is naturally the ring of n-by-n matrices with entries in the ring R. As a particular example of the last point, for any ring R with unity, End_R(R) = R, where the elements of R act on R by left multiplication. In general, endomorphism rings can be defined for the objects of any preadditive category. Notes References A handbook for study and research Ring theory Module theory Category theory
Endomorphism ring
[ "Mathematics" ]
1,083
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Ring theory", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Module theory" ]
59,656
https://en.wikipedia.org/wiki/Rayleigh%20number
In fluid mechanics, the Rayleigh number (Ra, after Lord Rayleigh) for a fluid is a dimensionless number associated with buoyancy-driven flow, also known as free (or natural) convection. It characterises the fluid's flow regime: a value in a certain lower range denotes laminar flow; a value in a higher range, turbulent flow. Below a certain critical value, there is no fluid motion and heat transfer is by conduction rather than convection. For most engineering purposes, the Rayleigh number is large, somewhere around 10^6 to 10^8. The Rayleigh number is defined as the product of the Grashof number (Gr), which describes the relationship between buoyancy and viscosity within a fluid, and the Prandtl number (Pr), which describes the relationship between momentum diffusivity and thermal diffusivity: Ra = Gr·Pr. Hence it may also be viewed as the ratio of buoyancy and viscosity forces multiplied by the ratio of momentum and thermal diffusivities. It is closely related to the Nusselt number (Nu). Derivation The Rayleigh number describes the behaviour of fluids (such as water or air) when the mass density of the fluid is non-uniform. The mass density differences are usually caused by temperature differences. Typically a fluid expands and becomes less dense as it is heated. Gravity causes denser parts of the fluid to sink, which is called convection. Lord Rayleigh studied the case of Rayleigh–Bénard convection. When the Rayleigh number, Ra, is below a critical value for a fluid, there is no flow and heat transfer is purely by conduction; when it exceeds that value, heat is transferred by natural convection. When the mass density difference is caused by a temperature difference, Ra is, by definition, the ratio of the time scale for diffusive thermal transport to the time scale for convective thermal transport at speed u. This means the Rayleigh number is a type of Péclet number. For a volume of fluid of size l in all three dimensions and mass density difference Δρ, the force due to gravity is of the order Δρ·l³·g, where g is acceleration due to gravity. From the Stokes equation, when the volume of fluid is sinking, viscous drag is of the order η·u·l, where η is the dynamic viscosity of the fluid. When these two forces are equated, the speed u ~ Δρ·l²·g/η. Thus the time scale for transport via flow is l/u ~ η/(Δρ·l·g). The time scale for thermal diffusion across a distance l is l²/α, where α is the thermal diffusivity. Thus the Rayleigh number Ra is Ra = (l²/α)/(η/(Δρ·l·g)) = Δρ·g·l³/(η·α) ≈ ρ·β·ΔT·g·l³/(η·α), where we approximated the density difference Δρ ≈ ρ·β·ΔT for a fluid of average mass density ρ, thermal expansion coefficient β and a temperature difference ΔT across distance l. The Rayleigh number can be written as the product of the Grashof number and the Prandtl number: Ra = Gr·Pr. Classical definition For free convection near a vertical wall, the Rayleigh number is defined as: Ra_x = g·β·(Ts − T∞)·x³/(ν·α) = Gr_x·Pr, where: x is the characteristic length Ra_x is the Rayleigh number for characteristic length x g is acceleration due to gravity β is the thermal expansion coefficient (equal to 1/T for ideal gases, where T is absolute temperature) ν is the kinematic viscosity α is the thermal diffusivity Ts is the surface temperature T∞ is the quiescent temperature (fluid temperature far from the surface of the object) Gr_x is the Grashof number for characteristic length x Pr is the Prandtl number In the above, the fluid properties Pr, ν, α and β are evaluated at the film temperature, which is defined as: Tf = (Ts + T∞)/2. For a uniform wall heating flux, the modified Rayleigh number is defined as: Ra*_x = g·β·q″o·x⁴/(ν·α·k), where: q″o is the uniform surface heat flux k is the thermal conductivity.
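A brief numerical sketch of the classical definition above, written in Python with rough, assumed property values for air near room temperature; the figures are illustrative only, not reference data.

def rayleigh(g, beta, t_surface, t_inf, x, nu, alpha):
    """Classical Rayleigh number Ra_x = g·β·(Ts − T∞)·x³ / (ν·α)."""
    return g * beta * (t_surface - t_inf) * x ** 3 / (nu * alpha)

if __name__ == "__main__":
    g = 9.81           # m/s², acceleration due to gravity
    t_inf = 293.15     # K, quiescent fluid temperature (assumed)
    t_s = 313.15       # K, surface temperature (assumed)
    t_film = 0.5 * (t_s + t_inf)   # film temperature at which properties are evaluated
    beta = 1.0 / t_film            # 1/K, ideal-gas thermal expansion coefficient
    nu = 1.6e-5        # m²/s, kinematic viscosity of air (rough value)
    alpha = 2.2e-5     # m²/s, thermal diffusivity of air (rough value)
    x = 0.5            # m, characteristic length (assumed)

    print(f"Ra = {rayleigh(g, beta, t_s, t_inf, x, nu, alpha):.2e}")  # on the order of 1e8

With these assumed values the result is of order 10^8, consistent with the range quoted above for typical engineering situations.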
Other applications Solidifying alloys The Rayleigh number can also be used as a criterion to predict convectional instabilities, such as A-segregates, in the mushy zone of a solidifying alloy. The mushy zone Rayleigh number is defined as: where: K is the mean permeability (of the initial portion of the mush) L is the characteristic length scale α is the thermal diffusivity ν is the kinematic viscosity R is the solidification or isotherm speed. A-segregates are predicted to form when the Rayleigh number exceeds a certain critical value. This critical value is independent of the composition of the alloy, and this is the main advantage of the Rayleigh number criterion over other criteria for prediction of convectional instabilities, such as Suzuki criterion. Torabi Rad et al. showed that for steel alloys the critical Rayleigh number is 17. Pickering et al. explored Torabi Rad's criterion, and further verified its effectiveness. Critical Rayleigh numbers for lead–tin and nickel-based super-alloys were also developed. Porous media The Rayleigh number above is for convection in a bulk fluid such as air or water, but convection can also occur when the fluid is inside and fills a porous medium, such as porous rock saturated with water. Then the Rayleigh number, sometimes called the Rayleigh-Darcy number, is different. In a bulk fluid, i.e., not in a porous medium, from the Stokes equation, the falling speed of a domain of size of liquid . In porous medium, this expression is replaced by that from Darcy's law , with the permeability of the porous medium. The Rayleigh or Rayleigh-Darcy number is then This also applies to A-segregates, in the mushy zone of a solidifying alloy. Geophysical applications In geophysics, the Rayleigh number is of fundamental importance: it indicates the presence and strength of convection within a fluid body such as the Earth's mantle. The mantle is a solid that behaves as a fluid over geological time scales. The Rayleigh number for the Earth's mantle due to internal heating alone, RaH, is given by: where: H is the rate of radiogenic heat production per unit mass η is the dynamic viscosity k is the thermal conductivity D is the depth of the mantle. A Rayleigh number for bottom heating of the mantle from the core, RaT, can also be defined as: where: ΔTsa is the superadiabatic temperature difference (the superadiabatic temperature difference is the actual temperature difference minus the temperature difference in a fluid whose entropy gradient is zero, but has the same profile of the other variables appearing in the equation of state) between the reference mantle temperature and the core–mantle boundary CP is the specific heat capacity at constant pressure. High values for the Earth's mantle indicates that convection within the Earth is vigorous and time-varying, and that convection is responsible for almost all the heat transported from the deep interior to the surface. See also Grashof number Prandtl number Reynolds number Péclet number Nusselt number Rayleigh–Bénard convection Notes References External links Rayleigh number calculator Convection Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics
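Following the Darcy-law substitution described in the porous media discussion above, here is a minimal sketch of the Rayleigh–Darcy number, written in the commonly used form Ra = ρ·g·β·ΔT·K·L/(η·α); every value below is an assumed, order-of-magnitude figure for water-saturated rock, used only to show the arithmetic.

def rayleigh_darcy(rho, g, beta, delta_t, permeability, thickness, eta, alpha):
    """Porous-medium (Rayleigh–Darcy) number ρ·g·β·ΔT·K·L / (η·α)."""
    return rho * g * beta * delta_t * permeability * thickness / (eta * alpha)

if __name__ == "__main__":
    rho = 1000.0          # kg/m³, water density
    g = 9.81              # m/s²
    beta = 2.1e-4         # 1/K, thermal expansion of water (rough value)
    delta_t = 30.0        # K, temperature difference across the layer (assumed)
    permeability = 1e-12  # m², permeability of the rock (assumed)
    thickness = 100.0     # m, layer thickness (assumed)
    eta = 1.0e-3          # Pa·s, dynamic viscosity of water
    alpha = 1.4e-7        # m²/s, thermal diffusivity (rough value)

    ra = rayleigh_darcy(rho, g, beta, delta_t, permeability, thickness, eta, alpha)
    print(f"Rayleigh–Darcy number = {ra:.1f}")  # about 44 for these assumed values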
Rayleigh number
[ "Physics", "Chemistry", "Engineering" ]
1,446
[ "Transport phenomena", "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Dimensionless numbers of thermodynamics", "Chemical engineering", "Convection", "Thermodynamics", "Piping", "Fluid dynamics" ]
59,660
https://en.wikipedia.org/wiki/Integumentary%20system
The integumentary system is the set of organs forming the outermost layer of an animal's body. It comprises the skin and its appendages, which act as a physical barrier between the external environment and the internal environment that it serves to protect and maintain the body of the animal. Mainly it is the body's outer skin. The integumentary system includes skin, hair, scales, feathers, hooves, claws, and nails. It has a variety of additional functions: it may serve to maintain water balance, protect the deeper tissues, excrete wastes, and regulate body temperature, and is the attachment site for sensory receptors which detect pain, sensation, pressure, and temperature. Structure Skin The skin is one of the largest organs of the body. In humans, it accounts for about 12 to 15 percent of total body weight and covers 1.5 to 2 m2 of surface area. The skin (integument) is a composite organ, made up of at least two major layers of tissue: the epidermis and the dermis. The epidermis is the outermost layer, providing the initial barrier to the external environment. It is separated from the dermis by the basement membrane (basal lamina and reticular lamina). The epidermis contains melanocytes and gives color to the skin. The deepest layer of the epidermis also contains nerve endings. Beneath this, the dermis comprises two sections, the papillary and reticular layers, and contains connective tissues, vessels, glands, follicles, hair roots, sensory nerve endings, and muscular tissue. Between the integument and the deep body musculature there is a transitional subcutaneous zone made up of very loose connective and adipose tissue, the hypodermis. Substantial collagen bundles anchor the dermis to the hypodermis in a way that permits most areas of the skin to move freely over the deeper tissue layers. Epidermis The epidermis is the strong, superficial layer that serves as the first line of protection against the outer environment. The human epidermis is composed of stratified squamous epithelial cells, which further break down into four to five layers: the stratum corneum, stratum granulosum, stratum spinosum and stratum basale. Where the skin is thicker, such as in the palms and soles, there is an extra layer of skin between the stratum corneum and the stratum granulosum, called the stratum lucidum. The epidermis is regenerated from the stem cells found in the basal layer that develop into the corneum. The epidermis itself is devoid of blood supply and draws its nutrition from its underlying dermis. Its main functions are protection, absorption of nutrients, and homeostasis. In structure, it consists of a keratinized stratified squamous epithelium; four types of cells: keratinocytes, melanocytes, Merkel cells, and Langerhans cells. The predominant cell keratinocyte, which produces keratin, a fibrous protein that aids in skin protection, is responsible for the formation of the epidermal water barrier by making and secreting lipids. The majority of the skin on the human body is keratinized, with the exception of the lining of mucous membranes, such as the inside of the mouth. Non-keratinized cells allow water to "stay" atop the structure. The protein keratin stiffens epidermal tissue to form fingernails. Nails grow from a thin area called the nail matrix at an average of 1 mm per week. The lunula is the crescent-shape area at the base of the nail, lighter in color as it mixes with matrix cells. Only primates have nails. 
In other vertebrates, the keratinizing system at the terminus of each digit produces claws or hooves. The epidermis of vertebrates is surrounded by two kinds of coverings, which are produced by the epidermis itself. In fish and aquatic amphibians, it is a thin mucus layer that is constantly being replaced. In terrestrial vertebrates, it is the stratum corneum (dead keratinized cells). The epidermis is, to some degree, glandular in all vertebrates, but more so in fish and amphibians. Multicellular epidermal glands penetrate the dermis, where they are surrounded by blood capillaries that provide nutrients and, in the case of endocrine glands, transport their products. Dermis The dermis is the underlying connective tissue layer that supports the epidermis. It is composed of dense irregular connective tissue and areolar connective tissue such as a collagen with elastin arranged in a diffusely bundled and woven pattern. The dermis has two layers: the papillary dermis and the reticular layer. The papillary layer is the superficial layer that forms finger-like projections into the epidermis (dermal papillae), and consists of highly vascularized, loose connective tissue. The reticular layer is the deep layer of the dermis and consists of the dense irregular connective tissue. These layers serve to give elasticity to the integument, allowing stretching and conferring flexibility, while also resisting distortions, wrinkling, and sagging. The dermal layer provides a site for the endings of blood vessels and nerves. Many chromatophores are also stored in this layer, as are the bases of integumental structures such as hair, feathers, and glands. Hypodermis The hypodermis, otherwise known as the subcutaneous layer, is a layer beneath the skin. It invaginates into the dermis and is attached to the latter, immediately above it, by collagen and elastin fibers. It is essentially composed of a type of cell known as adipocytes, which are specialized in accumulating and storing fats. These cells are grouped together in lobules separated by connective tissue. The hypodermis acts as an energy reserve. The fats contained in the adipocytes can be put back into circulation, via the venous route, during intense effort or when there is a lack of energy-providing substances, and are then transformed into energy. The hypodermis participates, passively at least, in thermoregulation since fat is a heat insulator. Functions The integumentary system has multiple roles in maintaining the body's equilibrium. All body systems work in an interconnected manner to maintain the internal conditions essential to the function of the body. The skin has an important job of protecting the body and acts as the body's first line of defense against infection, temperature change, and other challenges to homeostasis. Its main functions include: Protect the body's internal living tissues and organs Protect against invasion by infectious organisms Protect the body from dehydration Protect the body against abrupt changes in temperature, maintain homeostasis Help excrete waste materials through perspiration Act as a receptor for touch, pressure, pain, heat, and cold (see Somatosensory system) Protect the body against sunburns by secreting melanin Generate vitamin D through exposure to ultraviolet light Store water, fat, glucose, vitamin D Maintenance of the body form Formation of new cells from stratum germinativum to repair minor injuries Protect from UV rays. 
Regulates body temperature It distinguishes, separates, and protects the organism from its surroundings. Small-bodied invertebrates of aquatic or continually moist habitats respire using the outer layer (integument). This gas exchange system, where gases simply diffuse into and out of the interstitial fluid, is called integumentary exchange. Clinical significance Possible diseases and injuries to the human integumentary system include: Rash Yeast Athlete's foot Infection Sunburn Skin cancer Albinism Acne Herpes Herpes labialis, commonly called cold sores Impetigo Rubella Cancer Psoriasis Rabies Rosacea Atopic dermatitis Eczema References External links Organ systems
Integumentary system
[ "Biology" ]
1,713
[ "Organ systems", "Integumentary system" ]
59,701
https://en.wikipedia.org/wiki/Broom
A broom (also known as a broomstick) is a cleaning tool consisting of usually stiff fibers (often made of materials such as plastic, hair, or corn husks) attached to, and roughly parallel to, a cylindrical handle, the broomstick. It is thus a variety of brush with a long handle. It is commonly used in combination with a dustpan. A distinction is made between a "hard broom" and a "soft broom" and a spectrum in between. Soft brooms are used in some cultures chiefly for sweeping walls of cobwebs and spiders, like a "feather duster", while hard brooms are for rougher tasks like sweeping dirt off sidewalks or concrete floors, or even smoothing and texturing wet concrete. The majority of brooms are somewhere in between, suitable for sweeping the floors of homes and businesses, soft enough to be flexible and to move even light dust, but stiff enough to achieve a firm sweeping action. The broom is also a symbolic object associated with witchcraft and ceremonial magic. Etymology The word broom derives from types of shrubs referred to as brooms. Common broom typically refers to whatever shrub is most commonly used to make the bristles for a broomstick in a given region. The name of the shrubs began to be used for the household implement in Late Middle English and gradually replaced the earlier besom during the Early Modern English period. The song Buy Broom Buzzems (by William Purvis 1752–1832) still refers to the "broom besom" as one type of besom (i.e. "a besom made from broom"). Flat brooms, made of broom corn, were invented by Shakers in the 19th century with the invention of the broom vice. A smaller whisk broom or brush is sometimes called a duster. Function Brooms are used to clean dust and ash. They may be used to clean homes, appliances such as ovens and fireplaces, or outdoor areas such as streets and yards. History The earliest brooms and brushes are from prehistory, when things such as bird wings and burs were fastened to handles of bone, ivory, or wood. The indigenous peoples of the Southwestern United States created brooms from yucca plants for cleaning pueblos. The indigenous people of Saint Lucia created brooms from coconut fronds for cleaning around hearths. Brooms are mentioned in the 1540 manuscript Codex Mendoza of the Aztecs, which instructs girls to sweep. The birch besom was made by fastening twigs to a handle with a strip of ash wood, harvested from a log after washing it in a running stream. The besom became a symbol of breweries in England, where brewers used it as a whisk while fermenting alcoholic beverages, and the brooms were typically displayed by pubs. When not in use, a brewer's besom was stored and dried on wall pegs or hanging by a leather cord. The broom was not washed so that yeast would remain in the bristles for future uses. Hearth besoms were created in Ireland to keep ash on a hearth. Until the 18th century, brooms were crafted by hand. In 1797, the quality of brooms changed when Levi Dickenson, a farmer in Hadley, Massachusetts, made a broom for his wife, using the tassels of sorghum, a grain he was growing for the seeds. His wife spread good words around town, creating demand for Dickenson's sorghum brooms. The sorghum brooms held up well, but ultimately, like all brooms, fell apart. Dickenson subsequently invented a machine that would make better brooms, and faster than he could. In 1810, the foot treadle broom machine was invented. This machine played an integral part in the Industrial Revolution. 
The Shakers began growing broom corn to create brooms in the present-day United States, which they crafted on treadle wheels and stored hanging on the wall under a cotton hood. The Shaker Theodore Bates invented the flat broom in 1798. Benjamin Franklin grew French broom, a practice which was then taken up by Thomas Jefferson, who had broomsticks made from the plant. Americans commonly kept brooms with their fireplaces by the early 19th century. At this time, brooms were often made by children, the disabled, the elderly, and slaves. By the middle of the century, brooms were created in factories with machine presses, trimmers, and winding machines and then sold door-to-door. People in the American frontier crafted brooms with a wet rawhide fastening, which dried and hardened around the bristles. Henry Hadley invented a hybridized machine-harvested broom corn at the University of Illinois in 1983 for more efficient creation of brooms. Modern factory-made brooms are made with straw bristles, which are flattened and stitched together before a handle is inserted. In industrialized countries, brooms are sometimes replaced or superseded by powered cleaning instruments such as leaf blowers and vacuum cleaners. Brooms remain commonly used for cleaning purposes in the 21st century. One source mentions that the United States had 303 broom factories by 1839 and that the number peaked at 1,039 in 1919. Most of these were in the Eastern United States; during the Great Depression in the 1930s, the number of factories declined to 320 in 1939. The state of Oklahoma became a major center for broom production because broom corn grew especially well there, with The Oklahoma Broom Corn Company opening a factory in El Reno in 1906. Faced with competition from imported brooms and synthetic bristles, most of the factories closed by the 1960s. Design and types A broom is made up of two parts: the handle, which is a long cylindrical stick, and the stiff fibers lined parallel at its base. The United States International Cooperation Administration made a distinction between brooms based on bristle quality. Parlor brooms are made of smooth green fibers and typically have brushes 14 to 18 inches long. Carpet brooms are a cheaper variant of the parlor broom that uses bristles rejected for use in parlor brooms for being off-color or lower quality. Standard brooms use bristles that were deemed too low-quality for either parlor brooms or carpet brooms, often dyed green to emulate other brooms. Hearth brooms, or toy brooms, are made of miscellaneous fibers that cannot be used in other brooms. They are not typically sold as consumer products. Warehouse brooms use heavier fibers such as rattan or palmyra palm and are bound with metal. Different grades of warehouse broom are used to denote the surface it is designed for, such as smelters, decks, or railroads. Their brushes measure about 16 to 18 inches long. Cob brooms are used to clean webs from high areas and were historically made with round brushes. Whisk brooms use bristles that are shorter and finer than other brooms. Rubber brooms were created in the early 20th century to prevent the debris raised when sweeping with straw brooms. Materials and production Brush The brush of a broom is most commonly made with the fibers of broom corn. Other common plant materials used in brooms include palmyra, rice straw, rice root, piassava, grass, sedge, and twigs. They may use a mix of materials, with lower quality fibers filling out the brush. 
Broom making involves botanical knowledge, particularly about broom plants. For manufactured brooms, the fibers are sorted by quality and fitted into the appropriate type of broom. They are then put through an evener to align the fibers, a saw to remove stems, and a scraper to break open the straw and remove the seeds. The fibers are dyed or bleached to achieve a uniform color, or they are wetted if they are already high quality so they can be more easily wound. The outer fibers of the brush are typically treated with a dye, called broom crystals, to preserve the color after use. As an alternative to plant fibers, brooms can be fitted with synthetic brushes made of materials like nylon or plastic. Handle and fastening Wooden broom handles are commonly made from hardwood or fir. Commercial wood broom handles are painted or finished. Lacquers can increase the lifespan of the broom's handle in addition to serving an aesthetic purpose. Wooden broom handles are often about 42 inches long and seven-eighths to one and one-eighth inches in diameter. Metal tension wires, sometimes crafted specifically for use in brooms, are put through a winding machine to fasten the bristles to the handle. The wire is wound through a hole in the handle before fastening the brush, typically over the last six inches of the handle. Additional bristles are added to the sides for a flat brush shape and to provide a surface for sweeping. The stem ends of the fibers are then cut and tapered and the wire is nailed into the handle. The wire is then finished by one of several methods, such as with a metal cap, with a velvet coat, or by being tapered. After the broom is wired, the fibers can again be scraped or seeded. Twine, often made of cotton or linen, is used to stitch the brush. At least five stitches will typically be used. The outside of the brush may be wrapped with a material like leather, replacing a twine band used to hold the brush together during manufacturing. Commercially sold brooms may apply a glued label to the fastening with the brand name or broom model, which can be used as a cover for the clamp marks left by a wiring machine. Magic In the context of witchcraft, broomstick is likely to refer to the broom as a whole, known as a besom. The first known reference to witches flying on broomsticks dates to the 11th-century Islamic traditionalist theologian Ibn Qudamahin his book al-Mughnī ( The Persuader ). The first reference to witches flying on broomsticks in Europe dates to 1453, confessed by the male witch Guillaume Edelin. The concept of a flying ointment used by witches appears at about the same time, recorded in 1456. In Metro-Goldwyn-Mayer's 1939 film, The Wizard of Oz, the Wicked Witch of the West used a broomstick to fly over Oz. She also used it to skywrite "Surrender Dorothy" above the Emerald City. The Wizard commands Dorothy and her three traveling companions to bring the Wicked Witch's broomstick to him in order to grant their wishes. Dorothy carries it to the Wizard with the Scarecrow, Tin Man, and Lion after the Wicked Witch's death. In Disney's 1940 film Fantasia, Mickey Mouse, playing The Sorcerer's Apprentice, brings a broom to life to do his chore of filling a well full of water. The broom overdoes its job and when chopped into pieces, each splinter becomes a new broom that flood the room until Yen Sid stops them. This story comes from a poem by Goethe called Der Zauberlehrling ("The Sorcerer's Apprentice"). 
The Disney brooms have had recurring cameos in Disney media, mostly portrayed as janitors, albeit not out of control or causing chaos such as in the original appearance. This flight was also in Bedknobs and Broomsticks as well as Hocus Pocus. In Eswatini (Swaziland), witches' broomsticks are short bundles of sticks tied together without a handle. Flying brooms play an important role in the fantasy world of Harry Potter, used for transportation as well as for playing the popular airborne game of Quidditch. Flying brooms, along with Flying carpets, are the main means of transportation in the world of Poul Anderson's Operation Chaos. The Flying Broom () is a feminist organization in Turkey, deliberately evoking the associations of a Flying Broom with witches. Culture Brooms are used in some rituals. Jumping the broom is a tradition sometimes practiced in African American weddings in which the couple leaps over a broom to symbolically represent the leap into domestic life. The tradition was practiced by enslaved Americans and other groups of low social class in the United States through the 19th century. It was revitalized by Alex Haley after it was prominently featured in his novel Roots: The Saga of an American Family in 1976 and became part of a broader reclamation of Black heritage at the time. Other marginalized groups, such as the Celts and the Romani, have historically been described as practicing similar traditions in Britain. The precise origin of jumping the broom is uncertain. The Métis people of Canada have a broom dancing tradition. There are broom dancing exhibitions where people show off their broom dancing skills. The lively broom dance involves fast footwork and jumping. During World War II, American submarine crews would tie a broom to their boat's conning tower when returning to port to indicate that they had "swept" the seas clean of enemy shipping. The tradition has been devalued in recent years by submarine crews who fly a broom simply when returning from their boat's shake-down cruise. This tradition may stem from the action of the Dutch admiral Maarten Tromp who tied a broom to his main mast after defeating the British admiral Robert Blake at the Battle of Dungeness in 1652. This has often been interpreted as a message that he would "sweep the British from the seas". This story remains unsubstantiated, but may have its origin in the tradition of hoisting a broom as a sign that a ship was for sale, which seems more likely as Tromp had captured two of Blake's ships in the battle. In Bhojpuri, it is called Baṛhanī (prosperer), as it is believed that it's prospers the family and house. Literature In 1701 Jonathan Swift wrote a "Meditation Upon a Broomstick", a parody of Robert Boyle's Occasional Reflections upon Several Subjects: In J.K. Rowling's Harry Potter novels and film adaptations, broomsticks are a common form of transport for wizards and witches. These are also used for the magical sport of Quidditch, in which players use their broomsticks to fly around a field and shoot goals. Politics For much of the 20th century, political cartoons and propaganda would often depict new or oncoming leaders sweeping away old, corrupt or unpopular figures. The broom is used as a symbol of the following political parties: Aam Aadmi Party, India All Progressives Congress, Nigeria Religion In Jainism, monks and nuns have a little broom with them, in order to gently brush aside ants and small animals, to avoid crushing them. This is part of observing the principle of Ahinsā. 
The Shakers are often credited with the invention of the flat broom. Sports Curling broom In baseball and basketball, when the home team is close to accomplishing a sweep (having won the first two games of a three-game series or first three games of a four-game series), some fans will bring brooms to the ballpark and brandish them as a way of taunting the visiting team (examples: Arkansas vs. LSU, 2011; Red Sox vs. Yankees, May 13–15, 2011 and June 7–9, 2011). In broomball, broomsticks have their heads removed and are used to push a ball into a goal, on an ice surface. The game is similar to hockey, except players do not wear skates. Image gallery See also Bath broom Besom Mop Squeegee Notes References External links Articles containing video clips Cleaning tools Domestic implements Magic items
Broom
[ "Physics" ]
3,166
[ "Magic items", "Physical objects", "Matter" ]
59,703
https://en.wikipedia.org/wiki/Toshiba
is a Japanese multinational electronics company headquartered in Minato, Tokyo. Its diversified products and services include power, industrial and social infrastructure systems, elevators and escalators, electronic components, semiconductors, hard disk drives, printers, batteries, lighting, as well as IT solutions such as quantum cryptography. It was formerly also one of the biggest manufacturers of personal computers, consumer electronics, home appliances, and medical equipment. The Toshiba name is derived from its former name, Tokyo Shibaura Denki K.K. which in turn was a 1939 merger between Shibaura Seisaku-sho (founded in 1875) and Tokyo Denki (founded in 1890). The company name was officially changed to Toshiba Corporation in 1978. A technology company with a long history and sprawling businesses, Toshiba is a household name in Japan and has long been viewed as a symbol of the country's technological prowess post-World War II. As a semiconductor company and the inventor of flash memory, Toshiba had been one of the top 10 in the chip industry until its flash memory unit was spun off as Kioxia in the late 2010s. The company was also relevant in consumer personal computers, releasing the first mass-market laptop in 1985 and later ranking as a major vendor of laptops; it exited the PC business in 2020 having divested it into Dynabook Inc. Toshiba faced trouble during the 2010s amid a much-publicised accounting scandal that affected its reputation, and the bankruptcy of its subsidiary nuclear energy company Westinghouse in 2017. This forced the conglomerate to shed a number of underperforming businesses, essentially eliminating the company's century-long presence in consumer markets. After a rejection to split the company, in 2023 Toshiba was purchased by a consortium led by Japan Industrial Partners (JIP); Toshiba turned private as a result and was delisted from the Tokyo Stock Exchange after 74 years, where it was formerly a constituent of the Nikkei 225 and TOPIX 100 indices. History Tanaka Seisakusho was the first company established by Tanaka Hisashige (1799–1881), one of the most original and productive inventor-engineers during the Tokugawa / Edo period. Established on 11 July 1875, it was the first Japanese company to manufacture telegraph equipment. It also manufactured switches, and miscellaneous electrical and communications equipment. The company was inherited by Tanaka's adopted son, and later became half of the present Toshiba company. Several people who worked at Tanaka Seisakusho or who received Tanaka's guidance at a Kubusho (Ministry of Industries) factory later became pioneers themselves. These included who helped make the first power generator in Japan and to establish a company, Hakunetsusha to make bulbs; Oki Kibatarō, the founder of the present Oki Denki (Oki Electric Industry); and Ishiguro Keizaburō, a co-founder of the present Anritsu. After the demise of the founder in 1881, Tanaka Seisakusho became partly owned by General Electric and expanded into the production of torpedoes and mines at the request of the Imperial Japanese Navy, to become one of the largest manufacturing companies of the time. However, as the Navy started to use competitive bids and then build its own works, the demand decreased substantially and the company started to lose money. The main creditor to the company, Mitsui Bank, took over the insolvent company in 1893 and renamed it Shibaura Seisakusho (Shibaura Engineering Works). 
Shibaura Seisakusho was the new name given to Tanaka Seisakusho after it was declared insolvent in 1893 and taken over by Mitsui Bank. In 1910, it formed a tie-up with General Electric (GE), which, in exchange for technology, acquired about a quarter of the shares of Shibaura. The relationship with GE continued until the beginning of World War II and resumed in 1953 with GE's 24 percent shareholding in the successor company, Tokyo Shibaura Denki. This percentage has decreased substantially since then. Hakunetsusha (Tokyo Denki) was a company established by Miyoshi Shōichi and a fellow engineer, two of Japan's industrial pioneers of the Meiji era. It specialized in the manufacture of lightbulbs. The company was established in 1890 and started out by selling bulbs using bamboo filaments. However, following the opening up of trade with the West under the unequal treaties, Hakunetsusha met with fierce competition from imports. Its bulbs cost about 60 percent more than the imports and the quality was poorer. The company managed to survive on the booms after the First Sino-Japanese War of 1894–95 and the Russo-Japanese War of 1904–05, but afterward its financial position was precarious. In 1905, the company was renamed Tokyo Denki (Tokyo Electric) and entered into a financial and technological collaboration with General Electric of the US. General Electric acquired a 51 percent share of ownership, sent a vice president, and provided the technology for bulb-making. Production equipment was bought from GE and Tokyo Denki soon started selling its products with GE's trademark. 1939 to 2000 Toshiba was founded in 1939 by the merger of Shibaura Seisakusho and Tokyo Denki. The merger of Shibaura and Tokyo Denki created a new company called Tokyo Shibaura Denki (Tokyo Shibaura Electric). It was soon nicknamed Toshiba, but it was not until 1978 that the company was officially renamed Toshiba Corporation. The company was listed on the Tokyo Stock Exchange in May 1949. The group expanded rapidly, driven by a combination of organic growth and acquisitions, buying heavy engineering and primary industry firms in the 1940s and 1950s. Groups created include Toshiba Music Industries/Toshiba EMI (1960), Toshiba International Corporation (the 1970s), Toshiba Electrical Equipment (1974), Toshiba Chemical (1974), Toshiba Lighting and Technology (1989), Toshiba America Information Systems (1989) and Toshiba Carrier Corporation (1999). The first mini-split ductless air conditioner was sold in 1961 by Toshiba in Japan. Toshiba is responsible for a number of Japanese firsts, including radar (1912), the TAC digital computer (1954), transistor television, color CRTs and microwave oven (1959), color video phone (1971), Japanese word processor (1978), MRI system (1982), personal computer Pasopia (1981), laptop personal computer (1986), NAND EEPROM (1991), DVD (1995), the Libretto sub-notebook personal computer (1996) and HD DVD (2005). In 1977, Toshiba acquired the Brazilian company Semp (Sociedade Eletromercantil Paulista), subsequently forming Semp Toshiba through the combination of the two companies' South American operations. In 1987, Toshiba Machine, a subsidiary of Toshiba, was accused of illegally selling CNC milling machines used to produce very quiet submarine propellers to the Soviet Union in violation of the CoCom agreement, an international embargo on the export of certain strategic goods to COMECON countries. The Toshiba-Kongsberg scandal involved a subsidiary of Toshiba and the Norwegian company Kongsberg Vaapenfabrikk. 
The incident strained relations between the United States and Japan, and resulted in the arrest and prosecution of two senior executives, as well as the imposition of sanctions on the company by both countries. Senator John Heinz of Pennsylvania said "What Toshiba and Kongsberg did was ransom the security of the United States for $517 million." 2000 to 2010 In 2001, Toshiba signed a contract with Orion Electric, one of the world's largest OEM consumer video electronic makers and suppliers, to manufacture and supply finished consumer TV and video products for Toshiba to meet the increasing demand for the North American market. The contract ended in 2008, ending seven years of OEM production with Orion. In December 2004, Toshiba quietly announced it would discontinue manufacturing traditional in-house cathode-ray tube (CRT) televisions. In 2005, Matsushita Toshiba Picture Display Co. Ltd. (a joint venture between Panasonic and Toshiba created in 2002) stopped production of CRTs at its factory in Horseheads, New York. A year later, in 2006, it stopped production at its Malaysian factory, following heavy losses. In 2006, Toshiba terminated sales of CRT TVs in Japan and production of in-house plasma TVs. To ensure its future competitiveness in the flat-panel digital television and display market, Toshiba has made a considerable investment in a new kind of display technology called SED. This technology, however, was never sold to the public, as it was not price-competitive with LCDs. Toshiba sold its share in SED Inc. to Canon after Nano-Proprietary, which owns several patents related to SED technology, claimed SED Inc. was not a subsidiary of Canon. Before World War II, Toshiba was a member of the Mitsui Group zaibatsu (family-controlled vertical monopoly). Today Toshiba is a member of the Mitsui keiretsu (a set of companies with interlocking business relationships and shareholdings), and still has preferential arrangements with Mitsui Bank and the other members of the keiretsu. Membership in a keiretsu has traditionally meant loyalty, both corporate and private, to other members of the keiretsu or allied keiretsu. This loyalty can extend as far as the beer the employees consume, which in Toshiba's case is Asahi. In July 2005, BNFL confirmed it planned to sell Westinghouse Electric Company, then estimated to be worth $1.8 billion (£1 billion). The bid attracted interest from several companies including Toshiba, General Electric and Mitsubishi Heavy Industries and when the Financial Times reported on 23 January 2006 that Toshiba had won the bid, it valued the company's offer at $5 billion (£2.8 billion). The sale of Westinghouse by the Government of the United Kingdom surprised many industry experts, who questioned the wisdom of selling one of the world's largest producers of nuclear reactors shortly before the market for nuclear power was expected to grow substantially; China, the United States and the United Kingdom were all expected to invest heavily in nuclear power. The acquisition of Westinghouse for $5.4 billion was completed on 17 October 2006, with Toshiba obtaining a 77 percent share, and partners The Shaw Group a 20 percent share and Ishikawajima-Harima Heavy Industries Co. Ltd. a 3 percent share. In late 2007, Toshiba took over from Discover Card as the sponsor of the top-most screen of One Times Square in New York City. It displays the iconic 60-second New Year's countdown on its screen, as well as messages, greetings, and advertisements for the company. 
The sponsor of the New Year's countdown was taken over by Capital One on 31 December 2018. In January 2009, Toshiba acquired the HDD business of Fujitsu. 2010 to 2014 Toshiba announced on 16 May 2011, that it had agreed to acquire all of the shares of the Swiss-based advanced-power-meter maker Landis+Gyr for $2.3 billion. In 2010 the company released a series of television models including the WL768, YL863, VL963 designed in collaboration with Danish designer Timothy Jacob Jensen. In April 2012, Toshiba agreed to acquire IBM's point-of-sale business for $850 million, making it the world's largest vendor of point-of-sale systems. In July 2012, Toshiba was accused of fixing the prices of LCD panels in the United States at a high level. While such claims were denied by Toshiba, they agreed to settle alongside several other manufacturers for a total of $571 million. In December 2013, Toshiba completed its acquisition of Vijai Electricals Limited plant at Hyderabad and set up its own base for manufacturing of transmission and distribution products (transformers and switchgears) under the Social Infrastructure Group in India as Toshiba Transmission & Distribution Systems (India) Private Limited. In January 2014, Toshiba completed its acquisition of OCZ Storage Solutions. OCZ Technology stock was halted on 27 November 2013. OCZ then stated they expected to file a petition for bankruptcy and that Toshiba Corporation had expressed interest in purchasing its assets in a bankruptcy proceeding. On 2 December 2013, OCZ announced Toshiba had agreed to purchase nearly all of OCZ's assets for $35 million. The deal was completed on 21 January 2014 when the assets of OCZ Technology Group became a new independently operated subsidiary of Toshiba named OCZ Storage Solutions. OCZ Technology Group then changed its name to ZCO Liquidating Corporation; on 18 August 2014, ZCO Liquidating Corporation and its subsidiaries were liquidated. OCZ Storage Solutions was dissolved on 1 April 2016 and absorbed into Toshiba America Electronic Components, Inc., with OCZ becoming a brand of Toshiba. In March 2014, Toshiba sued SK Hynix, accusing the company of stealing technology of its NAND flash memory. In the late same year, the two companies settled with a deal in which SK Hynix pays US$278 million to Toshiba. Toshiba had sued Hynix in the early 2000s for patent infringement. In October 2014, Toshiba and United Technologies agreed a deal to expand their joint venture outside Japan. 2015 accounting scandal Toshiba first announced in May 2015 that it was investigating an accounting scandal and it might have to revise its profits for the previous three years. On 21 July 2015, CEO Hisao Tanaka announced his resignation amid an accounting scandal that he called "the most damaging event for our brand in the company's 140-year history". Profits had been inflated by $1.2 billion over the previous seven years. Eight other senior officials also resigned, including the two previous CEOs. Chairman Masashi Muromachi was appointed acting CEO. Following the scandal, Toshiba Corp. was removed from a stock index showcasing Japan's best companies. That was the second reshuffle of the index, which picks companies with the best operating income, return on equity and market value. Toshiba announced in early 2015 that they would stop making televisions in its own factories. From 2015 onward, Toshiba televisions will be made by Compal for the U.S., or by Vestel and other manufacturers for the European market. 
In September 2015, Toshiba shares fell to their lowest point in two and a half years. The firm said in a statement that its net losses for the quarterly period were 12.3 billion yen ($102m; £66m). The company noted poor performances in its televisions, home appliances and personal computer businesses. In October 2015, Toshiba sold the image sensor business to Sony. In December 2015, Muromachi said the episode had wiped about $8 billion off Toshiba's market value. He forecast a record 550 billion yen (about US$4.6 billion) annual loss and warned the company would have to overhaul its TV and computer businesses. Toshiba would not be raising funds for two years, he said. The next week, a company spokesperson announced Toshiba would seek 300 billion yen ($2.5 billion) in 2016, taking the company's indebtedness to more than 1 trillion yen (about $8.3 billion). In January 2016, Toshiba's security division unveiled a new bundle of services for schools that use its surveillance equipment. The program, which is intended for both K-12 and higher education, includes education discounts, alerts, and post-warranty support, among other features, on its IP-based security gear. In March 2016, Toshiba was preparing to start construction on a cutting-edge new semiconductor plant in Japan that would mass-produce chips based on the ultra-dense flash variant. Toshiba expected to spend approximately 360 billion yen, or $3.2 billion, on the project through May 2019. In April 2016, Toshiba recalled 100,000 faulty laptop lithium-ion batteries, which were made by Panasonic, that can overheat, posing burn and fire hazards to consumers, according to the U.S. Consumer Product Safety Commission. Toshiba first announced the recall in January and said it was recalling the batteries in certain Toshiba Notebook computers sold since June 2011. In May 2016, it was announced that Satoshi Tsunakawa, the former head of Toshiba's medical equipment division, was named CEO. This appointment came after the accounting scandal that occurred. In September 2016, Toshiba announced the first wireless power receiver IC using the Qi 1.2.2 specification, developed in association with the Wireless Power Consortium. In December 2016, Toshiba Medical Systems Corporation was acquired by Canon. A Chinese electrical appliance corporation Midea Group bought a controlling 80.1% stake in the Toshiba Home Appliances Group. 2017 US nuclear construction liabilities In late December 2016, the management of Toshiba requested an "urgent press briefing" to announce that the newly-found losses in the Westinghouse subsidiary from Vogtle Electric Generating Plant nuclear plant construction would lead to a write-down of several billion dollars, bankrupting Westinghouse and threatening to bankrupt Toshiba. The exact amount of the liabilities was unavailable. In January 2017, a person with direct knowledge of the matter reported that the company plans on making its memory chip division a separate business, to save Toshiba from bankruptcy. In February 2017, Toshiba revealed unaudited details of a 390 billion yen ($3.4 billion) corporate wide loss, mainly arising from its majority owned US based Westinghouse nuclear construction subsidiary which was written down by 712 billion yen ($6.3 billion). On 14 February 2017, Toshiba delayed filing financial results, and chairman Shigenori Shiga, formerly chairman of Westinghouse, resigned. 
Construction delays, regulatory changes and cost overruns at the Westinghouse-built nuclear facilities Vogtle units 3 and 4 in Waynesboro, Georgia, and VC Summer units 2 and 3 in South Carolina, were cited as the main causes of the dramatic fall in Toshiba's financial performance and the collapse in its share price. Fixed-price construction contracts negotiated by Westinghouse with Georgia Power left Toshiba with uncharted liabilities that resulted in the sale of key Toshiba operating subsidiaries to secure the company's future. Westinghouse filed for Chapter 11 bankruptcy protection on 29 March 2017. Toshiba was estimated to have a 9 billion dollar annual net loss. On 11 April 2017, Toshiba filed unaudited quarterly results. Auditors PricewaterhouseCoopers had not signed off on the accounts because of uncertainties at Westinghouse. Toshiba stated that "substantial doubt about the company's ability to continue as a going concern exists". On 25 April 2017, Toshiba announced its decision to replace its auditor after less than a year. Earlier in April, the company filed twice-delayed business results without an endorsement from auditor PricewaterhouseCoopers (PwC). On 20 September 2017, Toshiba's board approved a deal to sell its memory chip business to a group led by Bain Capital for US$18 billion, with financial backing by companies such as Apple, Dell Technologies, Hoya Corporation, Kingston Technology, Seagate Technology, and SK Hynix. The newly independent company was named Toshiba Memory Corporation, and later renamed Kioxia. On 15 November 2017, Hisense reached a deal to acquire 95% of Toshiba Visual Solutions (television sets) for US$113.6 million. Later that month, the company announced that it would pull out of its long-standing sponsorships of the Japanese television programs Sazae-san and Nichiyō Gekijo, and of the video screens topping One Times Square in New York City. The company said that the value of these placements had been reduced by its exit from consumer-oriented lines of business. On 6 April 2018, Toshiba announced the completion of the sale of Westinghouse's holding company to Brookfield Business Partners and its partners for $4.6 billion. Present and future In June 2018, Toshiba sold 80.1% of its Client Solutions (personal computers) business unit to Sharp for $36m, with an option allowing Sharp to buy the remaining 19.9% share. Sharp renamed the business Dynabook, a brand name Toshiba had used in Japan, and started releasing products under that name. On 30 June 2020, Sharp exercised its option to acquire the remaining 19.9% of Dynabook shares from Toshiba. In May 2019, Toshiba announced that it would put non-Japanese investors on its board for the first time in nearly 80 years. In November, the company transferred its logistics service business to SBS Group. In January 2020, Toshiba unveiled its plan to launch quantum cryptography services by September of the same year. It also announced a number of other technologies awaiting commercialization, including an affordable solid-state lidar based on silicon photomultipliers, high-capacity hydrogen fuel cells, and a proprietary computer algorithm named the Simulated Bifurcation Algorithm that mimics quantum computing, access to which it plans to sell to other parties such as financial institutions and social networking services. 
The company claims that the algorithm, running on a desktop PC at room temperature, is capable of surpassing the performance of similar algorithms running on existing supercomputers, and even that of a laser-based quantum computer in certain specialized settings. It has been added to quantum computing services offered by major cloud platforms including Microsoft Azure. In October 2020, Toshiba decided to pull out of the system LSI business, citing mounting losses, while reportedly also mulling the sale of its semiconductor fabs. In April 2021, CVC Capital Partners made a takeover offer. On 12 November 2021, Toshiba announced that it would split into three separate companies. Two of the companies would focus on infrastructure and electronic devices respectively; the third, which would retain the Toshiba name, would manage the 40.6% stake in Kioxia and all other remaining assets. The company expected to complete the plan by March 2024, but the plan was challenged by stockholders, and at an extraordinary general meeting on 24 March 2022 they rejected it. They also rejected an alternative plan put forward by a large institutional investor that would have had the company search for buyers among private equity firms. Toshiba announced in February 2022 that it planned to split into two companies instead after the original proposal proved unpopular with shareholders. In March 2023, however, the company announced it had accepted a buyout offer from a consortium of 20 companies, led by Japan Industrial Partners (JIP), a Tokyo-based private equity firm, and including Orix, Chubu Electric Power, and Rohm. On 27 September, after the tender offer was completed in the middle of that month, it was reported that Toshiba would be transferred to a new parent company, TBJH. On 22 December 2023, it was announced that JIP's purchase of the company had been completed, two days after the company was delisted. This move returned the company to Japanese ownership after a period in which it had been heavily influenced by overseas activist investors. Operations As of 2012, Toshiba had 39 R&D facilities worldwide, which employed around 4,180 people, and was organized into four main business groupings: the Digital Products Group, the Electronic Devices Group, the Home Appliances Group and the Social Infrastructure Group. In the year ended 31 March 2012, Toshiba had total revenues of , of which 25.2 percent was generated by the Digital Products Group, 24.5 percent by the Electronic Devices Group, 8.7 percent by the Home Appliances Group, 36.6 percent by the Social Infrastructure Group and 5 percent by other activities. In the same year, 45 percent of Toshiba's sales were generated in Japan and 55 percent in the rest of the world. Toshiba invested a total of in R&D in the year ended 31 March 2012, equivalent to 5.2 percent of sales. Toshiba registered a total of 2,483 patents in the United States in 2011, the fifth-largest number of any company (after IBM, Samsung Electronics, Canon and Panasonic). Toshiba had around 141,256 employees as of 31 March 2018. 
Products, services, and standards Toshiba has had a range of products and services, including air conditioners, consumer electronics (including televisions and DVD and Blu-ray players), control systems (including air-traffic control systems, railway systems, security systems and traffic control systems), electronic point of sale equipment, elevators and escalators, home appliances (including refrigerators and washing machines), IT services, lighting, materials and electronic components, medical equipment (including CT and MRI scanners, ultrasound equipment and X-ray equipment), office equipment, business telecommunication equipment, personal computers, semiconductors, power systems (including electricity turbines, fuel cells and nuclear reactors), power transmission and distribution systems, and TFT displays. HD DVD Toshiba played a critical role in the development and proliferation of DVD. On 19 February 2008, Toshiba announced that it would be discontinuing its HD DVD storage format, the successor of DVD, following defeat in a format war against Blu-ray. The HD DVD format had failed after most of the major US film studios backed the Blu-ray format, which was developed by Sony, Panasonic, Philips and Pioneer Corporation. Conceding the abandonment of HD DVD, Toshiba's president, Atsutoshi Nishida, said "We concluded that a swift decision would be best [and] if we had continued, that would have created problems for consumers, and we simply had no chance to win". Toshiba continued to supply retailers with machines until the end of March 2008, and continued to provide technical support to the estimated one million people worldwide who owned HD DVD players and recorders. Toshiba announced a new line of stand-alone Blu-ray players as well as drives for PCs and laptops, and subsequently joined the BDA, the industry body which oversees the development of the Blu-ray format. REGZA REGZA (Real Expression Guaranteed by Amazing Architecture) is a unified television brand owned and manufactured by Toshiba. In 2010 the REGZA name disappeared from the North American market, and from March 2015 new TVs carrying the Toshiba name have been designed and produced by Compal Electronics, a Taiwanese company to which Toshiba has licensed its name. REGZA is also used in Android-based smartphones that were developed by Fujitsu Toshiba Mobile Communications. 3D television In October 2010, Toshiba unveiled the Toshiba Regza GL1 21" LED-backlit LCD TV, a glasses-free 3D prototype, at CEATEC 2010. This system supports 3D capability without glasses (utilizing an integral imaging system of 9 parallax images with a vertical lenticular sheet). The retail product was released in December 2010. 4K Ultra HD televisions 4K Ultra HD (3840×2160p) televisions provide four times the resolution of 1080p Full HD televisions. Toshiba's 4K HD LED televisions are powered by a CEVO 4K Quad + dual-core processor. Personal computers In 1985, Toshiba released the T1100, the world's first commercially accepted laptop PC. Toshiba designed and developed PCs, predominantly laptops, under several product lines including Satellite, Portégé, Libretto, Qosmio and Tecra. Toshiba initiated the divestment of its personal computer and laptop business, Toshiba Client Solutions, in 2018 with the sale of 80.1% of its shares to Sharp Corporation. Toshiba fully exited the personal computing market in June 2020, transferring the remaining 19.9% of shares in Toshiba Client Solutions (since renamed Dynabook Inc.) to Sharp. 
Toshiba's divested personal computing business adopted the Dynabook name after a computer concept targeted for children and after one of its product lines. Flash memory In the 1980s, a Toshiba team led by Fujio Masuoka invented flash memory, both NOR and NAND types. In March 2015, Toshiba announced the development of the first 48-layer, three-dimensional flash memory. The new flash memory is based on a vertical stacking technology that Toshiba calls BiCS (Bit Cost Scaling), stores two bits of data per transistor, and can store 128Gbits (16GB) per chip. This allowed flash memory to keep scaling up the capacity as Moore's Law was considered to be obsolete. Toshiba's memory division was spun off as Toshiba Memory Corporation, now Kioxia. Environmental record Toshiba has been judged as making "low" efforts to lessen its impact on the environment. In November 2012, they came second from the bottom in Greenpeace's 18th edition of the Guide to Greener Electronics that ranks electronics companies according to their policies on products, energy, and sustainable operations. Toshiba received 2.3 of a possible 10 points, with the top company (WIPRO) receiving 7.1 points. "Zero" scores were received in the categories "Clean energy policy advocacy", "Use of recycled plastics in products" and "Policy and practice on sustainable sourcing of fibres for paper". In 2010, Toshiba reported that all of its new LCD TVs comply with the Energy Star standards and 34 models exceed the requirements by 30% or more. Toshiba also partnered with China's Tsinghua University in 2008 in order to form a research facility to focus on energy conservation and the environment. The new Toshiba Energy and Environment Research Center is located in Beijing where forty students from the university will work to research electric power equipment and new technologies that will help stop the global warming process. Through this partnership, Toshiba hopes to develop products that will better protect the environment and save China. This contract between Tsinghua University and Toshiba originally began in October 2007 when they signed an agreement on joint energy and environment research. The projects that they conduct work to reduce car pollution and to create power systems that don't negatively affect the environment. On 28 December 1970 Toshiba began the construction of unit 3 of the Fukushima Daiichi Nuclear Power Plant which was damaged in the Fukushima I nuclear accidents on 14 March 2011. In April 2011, CEO Norio Sasaki declared nuclear energy would "remain as a strong option" even after the Fukushima I nuclear accidents. In late 2013, Toshiba (Japan) entered the solar power business in Germany, installing PV systems on apartment buildings. 
See also List of Toshiba subsidiaries Footnotes References External links Japanese companies established in 1875 Conglomerate companies based in Tokyo Accounting scandals Electronics companies established in 1875 Companies formerly listed on the Tokyo Stock Exchange Computer companies of Japan Computer hardware companies Computer memory companies Computer storage companies Consumer battery manufacturers Consumer electronics brands Defense companies of Japan Defunct computer systems companies Display technology companies Electric transformer manufacturers Electrical engineering companies of Japan Elevator manufacturers Escalator manufacturers Home appliance manufacturers of Japan Heating, ventilation, and air conditioning companies Japanese brands Lighting brands Locomotive manufacturers of Japan Technology companies established in 1875 Medical device manufacturers Medical technology companies of Japan Mitsui Multinational companies headquartered in Japan Netbook manufacturers Nuclear technology companies of Japan Point of sale companies Robotics companies of Japan Scandals in Japan Semiconductor companies of Japan Video equipment manufacturers State-owned film companies Electric motor manufacturers Engine manufacturers of Japan Radio manufacturers 1940s initial public offerings Electronics companies of Japan 2023 mergers and acquisitions
Toshiba
[ "Technology", "Engineering" ]
6,642
[ "Computer hardware companies", "Computers", "Radio manufacturers", "Radio electronics" ]
59,715
https://en.wikipedia.org/wiki/Scientific%20notation
Scientific notation is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or standard form in the United Kingdom. This base ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators, it is usually known as "SCI" display mode. In scientific notation, nonzero numbers are written in the form m × 10^n, or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa. The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10. Decimal floating point is a computer arithmetic system closely related to scientific notation. History Styles Normalized notation Any real number can be written in the form m × 10^n in many ways: for example, 350 can be written as 3.5 × 10^2 or 35 × 10^1 or 350 × 10^0. In normalized scientific notation (called "standard form" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5 × 10^2. This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It is also the form that is required when using tables of common logarithms. In normalized notation, the exponent n is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5 × 10^−1). The 10 and exponent are often omitted when the exponent is 0. For a series of numbers that are to be added or subtracted (or otherwise compared), it can be convenient to use the same value of m for all elements of the series. Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation – although the latter term is more general and also applies when m is not restricted to the range 1 to 10 (as in engineering notation for instance) and to bases other than 10 (for example, base 2). Engineering notation Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponent n is restricted to multiples of 3. Consequently, the absolute value of m is in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. 
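To make the normalization and engineering-notation rules above concrete, here is a minimal Python sketch; the function names normalized and engineering are illustrative choices, not part of any standard library, and each returns a significand–exponent pair for a nonzero input:

import math

def normalized(x):
    # Return (m, n) with 1 <= |m| < 10 and x = m * 10**n (x must be nonzero).
    n = math.floor(math.log10(abs(x)))
    return x / 10**n, n

def engineering(x):
    # Return (m, n) with n a multiple of 3 and 1 <= |m| < 1000.
    m, n = normalized(x)
    shift = n % 3
    return m * 10**shift, n - shift

print(normalized(350))        # (3.5, 2)   i.e. 3.5 x 10^2
print(normalized(0.5))        # (5.0, -1)  i.e. 5 x 10^-1
print(engineering(1.25e-8))   # roughly (12.5, -9), i.e. 12.5 x 10^-9

Floating-point rounding can perturb the last decimal place of the printed significands; the sketch is meant to show the rule, not to be a robust formatter.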
For example, can be read as "twelve-point-five nanometres" and written as , while its scientific notation equivalent would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres". E notation Calculators and computer programs typically present very large or small numbers using scientific notation, and some can be configured to uniformly present all numbers that way. Because superscript exponents like 107 can be inconvenient to display or type, the letter "E" or "e" (for "exponent") is often used to represent "times ten raised to the power of", so that the notation for a decimal significand m and integer exponent n means the same as . For example is written as or , and is written as or . While common in computer output, this abbreviated version of scientific notation is discouraged for published documents by some style guides. Most popular programming languages – including Fortran, C/C++, Python, and JavaScript – use this "E" notation, which comes from Fortran and was present in the first version released for the IBM 704 in 1956. The E notation was already used by the developers of SHARE Operating System (SOS) for the IBM 709 in 1958. Later versions of Fortran (at least since FORTRAN IV as of 1961) also use "D" to signify double precision numbers in scientific notation, and newer Fortran compilers use "Q" to signify quadruple precision. The MATLAB programming language supports the use of either "E" or "D". The ALGOL 60 (1960) programming language uses a subscript ten "10" character instead of the letter "E", for example: 6.0221023. This presented a challenge for computer systems which did not provide such a character, so ALGOL W (1966) replaced the symbol by a single quote, e.g. 6.022'+23, and some Soviet ALGOL variants allowed the use of the Cyrillic letter "ю", e.g. . Subsequently, the ALGOL 68 programming language provided a choice of characters: , , , , or 10. The ALGOL "10" character was included in the Soviet GOST 10859 text encoding (1964), and was added to Unicode 5.2 (2009) as . Some programming languages use other symbols. For instance, Simula uses (or for long), as in . Mathematica supports the shorthand notation (reserving the letter for the mathematical constant e). The first pocket calculators supporting scientific notation appeared in 1972. To enter numbers in scientific notation calculators include a button labeled "EXP" or "×10x", among other variants. The displays of pocket calculators of the 1970s did not display an explicit symbol between significand and exponent; instead, one or more digits were left blank (e.g. 6.022 23, as seen in the HP-25), or a pair of smaller and slightly raised digits were reserved for the exponent (e.g. 6.022 23, as seen in the Commodore PR100). In 1976, Hewlett-Packard calculator user Jim Davidson coined the term decapower for the scientific-notation exponent to distinguish it from "normal" exponents, and suggested the letter "D" as a separator between significand and exponent in typewritten numbers (for example, ); these gained some currency in the programmable calculator user community. The letters "E" or "D" were used as a scientific-notation separator by Sharp pocket computers released between 1987 and 1995, "E" used for 10-digit numbers and "D" used for 20-digit double-precision numbers. The Texas Instruments TI-83 and TI-84 series of calculators (1996–present) use a small capital E for the separator. In 1962, Ronald O. Whitaker of Rowco Engineering Co. 
proposed a power-of-ten system nomenclature where the exponent would be circled, e.g. 6.022 × 10^3 would be written as "6.022③". Significant figures A significant figure is a digit in a number that adds to its precision. This includes all nonzero numbers, zeroes between significant digits, and zeroes indicated to be significant. Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number 1,230,400 is usually read to have five significant figures: 1, 2, 3, 0, and 4, the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0 – seven significant figures. When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1,230,400 would become 1.2304 × 10^6 if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040 × 10^6 or 1.230400 × 10^6. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous. Estimated final digits It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together). Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of the proton can properly be expressed as , which is shorthand for . However it is still unclear whether the error ( in this case) is the maximum possible error, standard error, or some other confidence interval. Use of spaces In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal width space or a thin space) that is allowed only before and after "×" or in front of "E" is sometimes omitted, though it is less common to do so before the alphabetical character. Further examples of scientific notation An electron's mass is about 0.000 000 000 000 000 000 000 000 000 000 911 kg. In scientific notation, this is written 9.11 × 10^−31 kg. The Earth's mass is about 5,970,000,000,000,000,000,000,000 kg. In scientific notation, this is written 5.97 × 10^24 kg. The Earth's circumference is approximately 40,000,000 m. In scientific notation, this is 4 × 10^7 m. In engineering notation, this is written 40 × 10^6 m. In SI writing style, this may be written 40 Mm (40 megametres). An inch is defined as exactly 25.4 mm. Using scientific notation, this value can be uniformly expressed to any desired precision, from the nearest tenth of a millimeter (2.54 × 10^1 mm) to the nearest nanometer (2.540 000 0 × 10^1 mm), or beyond. Hyperinflation means that too much money is put into circulation, perhaps by printing banknotes, chasing too few goods. It is sometimes defined as inflation of 50% or more in a single month. In such conditions, money rapidly loses its value. Some countries have had events of inflation of 1 million percent or more in a single month, which usually results in the rapid abandonment of the currency. 
For example, in November 2008 the monthly inflation rate of the Zimbabwean dollar reached 79.6 billion percent (470% per day); the approximate value with three significant figures would be %, or more simply a rate of . Converting numbers Converting a number in these cases means to either convert the number into scientific notation form, convert it back into decimal form or to change the exponent part of the equation. None of these alter the actual number, only how it's expressed. Decimal to scientific First, move the decimal separator point sufficient places, n, to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append × 10n; to the right, × 10−n. To represent the number in normalized scientific notation, the decimal separator would be moved 6 digits to the left and × 106 appended, resulting in . The number would have its decimal separator shifted 3 digits to the right instead of the left and yield as a result. Scientific to decimal Converting a number from scientific notation to decimal notation, first remove the × 10n on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number would have its decimal separator shifted 6 digits to the right and become , while would have its decimal separator moved 3 digits to the left and be . Exponential Conversion between different scientific notation representations of the same number with different exponential values is achieved by performing opposite operations of multiplication or division by a power of ten on the significand and an subtraction or addition of one on the exponent part. The decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as shown below. Basic operations Given two numbers in scientific notation, and Multiplication and division are performed using the rules for operation with exponentiation: and Some examples are: and Addition and subtraction require the numbers to be represented using the same exponential part, so that the significand can be simply added or subtracted: Next, add or subtract the significands: An example: Other bases While base ten is normally used for scientific notation, powers of other bases can be used too, base 2 being the next most commonly used one. For example, in base-2 scientific notation, the number 1001b in binary (=9d) is written as or using binary numbers (or shorter if binary context is obvious). In E notation, this is written as (or shorter: 1.001E11) with the letter "E" now standing for "times two (10b) to the power" here. In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter "B" instead of "E", a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968, as in (or shorter: 1.001B11). For comparison, the same number in decimal representation: (using decimal representation), or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating point numbers, where the exponent is displayed as decimal number even in binary mode, so the above becomes or shorter 1.001B3. This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and the usage of IEC binary prefixes (e.g. 1B10 for 1×210 (kibi), 1B20 for 1×220 (mebi), 1B30 for 1×230 (gibi), 1B40 for 1×240 (tebi)). 
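Several of the mechanics described in the preceding subsections, E-notation formatting, significant figures, and arithmetic on significand–exponent pairs, can be sketched in a few lines of Python. The function names below (to_sci, multiply, add) are illustrative choices, not standard library functions; the arithmetic helpers assume their inputs are already (m, n) pairs and do not re-normalize the result:

def to_sci(x, sig_figs):
    # Format x in E notation with the given number of significant figures.
    return f"{x:.{sig_figs - 1}e}"

def multiply(a, b):
    # (m0 x 10^n0) * (m1 x 10^n1) = (m0 * m1) x 10^(n0 + n1)
    (m0, n0), (m1, n1) = a, b
    return m0 * m1, n0 + n1

def add(a, b):
    # Rewrite both terms with the larger exponent, then add the significands.
    (m0, n0), (m1, n1) = a, b
    n = max(n0, n1)
    return m0 * 10**(n0 - n) + m1 * 10**(n1 - n), n

print(to_sci(1230400, 5))             # 1.2304e+06, five significant figures
print(float("7.96e10"))               # parsing E notation: 79600000000.0
print(multiply((3.5, 2), (2.0, 3)))   # (7.0, 5), i.e. 7 x 10^5
print(add((1.2304, 6), (5.0, 4)))     # about (1.2804, 6), i.e. 1.2804 x 10^6

Floating-point rounding may perturb the last digits of the printed significands; the point is the rule, not a production-quality formatter.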
Similar to "B" (or "b"), the letters "H" (or "h") and "O" (or "o", or "C") are sometimes also used to indicate times 16 or 8 to the power as in 1.25 = = 1.40H0 = 1.40h0, or 98000 = = 2.7732o5 = 2.7732C5. Another similar convention to denote base-2 exponents is using a letter "P" (or "p", for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal. This notation can be produced by implementations of the printf family of functions following the C99 specification and (Single Unix Specification) IEEE Std 1003.1 POSIX standard, when using the %a or %A conversion specifiers. Starting with C++11, C++ I/O functions could parse and print the P notation as well. Meanwhile, the notation has been fully adopted by the language standard since C++17. Apple's Swift supports it as well. It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents . Engineering notation can be viewed as a base-1000 scientific notation. See also Positional notation ISO/IEC 80000 – an international standard which guides the use of physical quantities and units of measurement in science Suzhou numerals – a Chinese numeral system formerly used in commerce, with order of magnitude written below the significand RKM code – a notation to specify resistor and capacitor values, with symbols for powers of 1000 References External links Decimal to Scientific Notation Converter Scientific Notation to Decimal Converter Scientific Notation in Everyday Life An exercise in converting to and from scientific notation Scientific Notation Converter Scientific Notation chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series. Mathematical notation Measurement Numeral systems
Scientific notation
[ "Physics", "Mathematics" ]
3,472
[ "Physical quantities", "Quantity", "Mathematical objects", "Measurement", "Size", "Numeral systems", "nan", "Numbers" ]
59,718
https://en.wikipedia.org/wiki/Identity%20matrix
In linear algebra, the identity matrix of size is the square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties, for example when the identity matrix represents a geometric transformation, the object remains unchanged by the transformation. In other contexts, it is analogous to multiplying by the number 1. Terminology and notation The identity matrix is often denoted by , or simply by if the size is immaterial or can be trivially determined by the context. The term unit matrix has also been widely used, but the term identity matrix is now standard. The term unit matrix is ambiguous, because it is also used for a matrix of ones and for any unit of the ring of all matrices. In some fields, such as group theory or quantum mechanics, the identity matrix is sometimes denoted by a boldface one, , or called "id" (short for identity). Less frequently, some mathematics books use or to represent the identity matrix, standing for "unit matrix" and the German word respectively. In terms of a notation that is sometimes used to concisely describe diagonal matrices, the identity matrix can be written as The identity matrix can also be written using the Kronecker delta notation: Properties When is an matrix, it is a property of matrix multiplication that In particular, the identity matrix serves as the multiplicative identity of the matrix ring of all matrices, and as the identity element of the general linear group , which consists of all invertible matrices under the matrix multiplication operation. In particular, the identity matrix is invertible. It is an involutory matrix, equal to its own inverse. In this group, two square matrices have the identity matrix as their product exactly when they are the inverses of each other. When matrices are used to represent linear transformations from an -dimensional vector space to itself, the identity matrix represents the identity function, for whatever basis was used in this representation. The th column of an identity matrix is the unit vector , a vector whose th entry is 1 and 0 elsewhere. The determinant of the identity matrix is 1, and its trace is . The identity matrix is the only idempotent matrix with non-zero determinant. That is, it is the only matrix such that: When multiplied by itself, the result is itself All of its rows and columns are linearly independent. The principal square root of an identity matrix is itself, and this is its only positive-definite square root. However, every identity matrix with at least two rows and columns has an infinitude of symmetric square roots. The rank of an identity matrix equals the size , i.e.: See also Binary matrix (zero-one matrix) Elementary matrix Exchange matrix Matrix of ones Pauli matrices (the identity matrix is the zeroth Pauli matrix) Householder transformation (the Householder matrix is built through the identity matrix) Square root of a 2 by 2 identity matrix Unitary matrix Zero matrix Notes Matrices 1 (number) Sparse matrices
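A quick numerical illustration of the properties listed above (multiplicative identity, determinant 1, trace equal to the size, full rank), using NumPy's eye function to build the identity matrix; the matrix A is an arbitrary example:

import numpy as np

n = 3
I = np.eye(n)                       # the 3 x 3 identity matrix
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])

print(np.allclose(A @ I, A), np.allclose(I @ A, A))  # True True: AI = IA = A
print(np.linalg.det(I))             # 1.0
print(np.trace(I))                  # 3.0
print(np.linalg.matrix_rank(I))     # 3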
Identity matrix
[ "Mathematics" ]
604
[ "Matrices (mathematics)", "Sparse matrices", "Mathematical objects", "Combinatorics" ]
59,733
https://en.wikipedia.org/wiki/Hexagon
In geometry, a hexagon (from Greek , , meaning "six", and , , meaning "corner, angle") is a six-sided polygon. The total of the internal angles of any simple (non-self-intersecting) hexagon is 720°. Regular hexagon A regular hexagon has Schläfli symbol {6} and can also be constructed as a truncated equilateral triangle, t{3}, which alternates two types of edges. A regular hexagon is defined as a hexagon that is both equilateral and equiangular. It is bicentric, meaning that it is both cyclic (has a circumscribed circle) and tangential (has an inscribed circle). The common length of the sides equals the radius of the circumscribed circle or circumcircle, which equals times the apothem (radius of the inscribed circle). All internal angles are 120 degrees. A regular hexagon has six rotational symmetries (rotational symmetry of order six) and six reflection symmetries (six lines of symmetry), making up the dihedral group D6. The longest diagonals of a regular hexagon, connecting diametrically opposite vertices, are twice the length of one side. From this it can be seen that a triangle with a vertex at the center of the regular hexagon and sharing one side with the hexagon is equilateral, and that the regular hexagon can be partitioned into six equilateral triangles. Like squares and equilateral triangles, regular hexagons fit together without any gaps to tile the plane (three hexagons meeting at every vertex), and so are useful for constructing tessellations. The cells of a beehive honeycomb are hexagonal for this reason and because the shape makes efficient use of space and building materials. The Voronoi diagram of a regular triangular lattice is the honeycomb tessellation of hexagons. Parameters The maximal diameter (which corresponds to the long diagonal of the hexagon), D, is twice the maximal radius or circumradius, R, which equals the side length, t. The minimal diameter or the diameter of the inscribed circle (separation of parallel sides, flat-to-flat distance, short diagonal or height when resting on a flat base), d, is twice the minimal radius or inradius, r. The maxima and minima are related by the same factor:   and, similarly, The area of a regular hexagon For any regular polygon, the area can also be expressed in terms of the apothem a and the perimeter p. For the regular hexagon these are given by a = r, and p, so The regular hexagon fills the fraction of its circumscribed circle. If a regular hexagon has successive vertices A, B, C, D, E, F and if P is any point on the circumcircle between B and C, then . It follows from the ratio of circumradius to inradius that the height-to-width ratio of a regular hexagon is 1:1.1547005; that is, a hexagon with a long diagonal of 1.0000000 will have a distance of 0.8660254 or cos(30°) between parallel sides. Point in plane For an arbitrary point in the plane of a regular hexagon with circumradius , whose distances to the centroid of the regular hexagon and its six vertices are and respectively, we have If are the distances from the vertices of a regular hexagon to any point on its circumcircle, then Symmetry The regular hexagon has D6 symmetry. There are 16 subgroups. There are 8 up to isomorphism: itself (D6), 2 dihedral: (D3, D2), 4 cyclic: (Z6, Z3, Z2, Z1) and the trivial (e) These symmetries express nine distinct symmetries of a regular hexagon. John Conway labels these by a letter and group order. r12 is full symmetry, and a1 is no symmetry. 
p6, an isogonal hexagon constructed by three mirrors can alternate long and short edges, and d6, an isotoxal hexagon constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular hexagon. The i4 forms are regular hexagons flattened or stretched along one symmetry direction. It can be seen as an elongated rhombus, while d2 and p2 can be seen as horizontally and vertically elongated kites. g2 hexagons, with opposite sides parallel are also called hexagonal parallelogons. Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g6 subgroup has no degrees of freedom but can be seen as directed edges. Hexagons of symmetry g2, i4, and r12, as parallelogons can tessellate the Euclidean plane by translation. Other hexagon shapes can tile the plane with different orientations. A2 and G2 groups The 6 roots of the simple Lie group A2, represented by a Dynkin diagram , are in a regular hexagonal pattern. The two simple roots have a 120° angle between them. The 12 roots of the Exceptional Lie group G2, represented by a Dynkin diagram are also in a hexagonal pattern. The two simple roots of two lengths have a 150° angle between them. Dissection Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into parallelograms. In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. This decomposition of a regular hexagon is based on a Petrie polygon projection of a cube, with 3 of 6 square faces. Other parallelogons and projective directions of the cube are dissected within rectangular cuboids. Related polygons and tilings A regular hexagon has Schläfli symbol {6}. A regular hexagon is a part of the regular hexagonal tiling, {6,3}, with three hexagonal faces around each vertex. A regular hexagon can also be created as a truncated equilateral triangle, with Schläfli symbol t{3}. Seen with two types (colors) of edges, this form only has D3 symmetry. A truncated hexagon, t{6}, is a dodecagon, {12}, alternating two types (colors) of edges. An alternated hexagon, h{6}, is an equilateral triangle, {3}. A regular hexagon can be stellated with equilateral triangles on its edges, creating a hexagram. A regular hexagon can be dissected into six equilateral triangles by adding a center point. This pattern repeats within the regular triangular tiling. A regular hexagon can be extended into a regular dodecagon by adding alternating squares and equilateral triangles around it. This pattern repeats within the rhombitrihexagonal tiling. Self-crossing hexagons There are six self-crossing hexagons with the vertex arrangement of the regular hexagon: Hexagonal structures From bees' honeycombs to the Giant's Causeway, hexagonal patterns are prevalent in nature due to their efficiency. In a hexagonal grid each line is as short as it can possibly be if a large area is to be filled with the fewest hexagons. This means that honeycombs require less wax to construct and gain much strength under compression. Irregular hexagons with parallel opposite edges are called parallelogons and can also tile the plane by translation. In three dimensions, hexagonal prisms with parallel opposite faces are called parallelohedrons and these can tessellate 3-space by translation. 
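The relationships stated in the Parameters section above can be spot-checked numerically. The following Python sketch uses the standard formulas implied by that discussion rather than expressions quoted verbatim from it: for side length t, the circumradius is R = t, the inradius (apothem) is r = (√3/2)·t, the long diagonal is D = 2t, and the area is (3√3/2)·t²:

import math

t = 1.0                                # side length
R = t                                  # circumradius equals the side length
r = math.sqrt(3) / 2 * t               # inradius (apothem)
D = 2 * R                              # long diagonal
d = 2 * r                              # flat-to-flat distance (short diameter)
area = 3 * math.sqrt(3) / 2 * t**2

print(d / D)                           # 0.8660..., the height-to-width ratio quoted above
print(area)                            # 2.598... for a unit side
print(area / (math.pi * R**2))         # ~0.8270, the fraction of the circumscribed circle filled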
Tesselations by hexagons In addition to the regular hexagon, which determines a unique tessellation of the plane, any irregular hexagon which satisfies the Conway criterion will tile the plane. Hexagon inscribed in a conic section Pascal's theorem (also known as the "Hexagrammum Mysticum Theorem") states that if an arbitrary hexagon is inscribed in any conic section, and pairs of opposite sides are extended until they meet, the three intersection points will lie on a straight line, the "Pascal line" of that configuration. Cyclic hexagon The Lemoine hexagon is a cyclic hexagon (one inscribed in a circle) with vertices given by the six intersections of the edges of a triangle and the three lines that are parallel to the edges that pass through its symmedian point. If the successive sides of a cyclic hexagon are a, b, c, d, e, f, then the three main diagonals intersect in a single point if and only if . If, for each side of a cyclic hexagon, the adjacent sides are extended to their intersection, forming a triangle exterior to the given side, then the segments connecting the circumcenters of opposite triangles are concurrent. If a hexagon has vertices on the circumcircle of an acute triangle at the six points (including three triangle vertices) where the extended altitudes of the triangle meet the circumcircle, then the area of the hexagon is twice the area of the triangle. Hexagon tangential to a conic section Let ABCDEF be a hexagon formed by six tangent lines of a conic section. Then Brianchon's theorem states that the three main diagonals AD, BE, and CF intersect at a single point. In a hexagon that is tangential to a circle and that has consecutive sides a, b, c, d, e, and f, Equilateral triangles on the sides of an arbitrary hexagon If an equilateral triangle is constructed externally on each side of any hexagon, then the midpoints of the segments connecting the centroids of opposite triangles form another equilateral triangle. Skew hexagon A skew hexagon is a skew polygon with six vertices and edges but not existing on the same plane. The interior of such a hexagon is not generally defined. A skew zig-zag hexagon has vertices alternating between two parallel planes. A regular skew hexagon is vertex-transitive with equal edge lengths. In three dimensions it will be a zig-zag skew hexagon and can be seen in the vertices and side edges of a triangular antiprism with the same D3d, [2+,6] symmetry, order 12. The cube and octahedron (same as triangular antiprism) have regular skew hexagons as petrie polygons. Petrie polygons The regular skew hexagon is the Petrie polygon for these higher dimensional regular, uniform and dual polyhedra and polytopes, shown in these skew orthogonal projections: Convex equilateral hexagon A principal diagonal of a hexagon is a diagonal which divides the hexagon into quadrilaterals. In any convex equilateral hexagon (one with all sides equal) with common side a, there exists a principal diagonal d1 such that and a principal diagonal d2 such that Polyhedra with hexagons There is no Platonic solid made of only regular hexagons, because the hexagons tessellate, not allowing the result to "fold up". The Archimedean solids with some hexagonal faces are the truncated tetrahedron, truncated octahedron, truncated icosahedron (of soccer ball and fullerene fame), truncated cuboctahedron and the truncated icosidodecahedron. These hexagons can be considered truncated triangles, with Coxeter diagrams of the form and . 
There are other symmetry polyhedra with stretched or flattened hexagons, like this Goldberg polyhedron, G(2,0): There are also 9 Johnson solids with regular hexagons: Hexagon versus Sexagon The debate over whether hexagons should be referred to as "sexagons" has its roots in the etymology of the term. The prefix "hex-" originates from the Greek word "hex," meaning six, while "sex-" comes from the Latin "sex," also signifying six. Some linguists and mathematicians argue that since many English mathematical terms derive from Latin, the use of "sexagon" would align with this tradition. Historical discussions date back to the 19th century, when mathematicians began to standardize terminology in geometry. However, the term "hexagon" has prevailed in common usage and academic literature, solidifying its place in mathematical terminology despite the historical argument for "sexagon." The consensus remains that "hexagon" is the appropriate term, reflecting its Greek origins and established usage in mathematics. (see Numeral_prefix#Occurrences). Gallery of natural and artificial hexagons See also 24-cell: a four-dimensional figure which, like the hexagon, has orthoplex facets, is self-dual and tessellates Euclidean space Hexagonal crystal system Hexagonal number Hexagonal tiling: a regular tiling of hexagons in a plane Hexagram: six-sided star within a regular hexagon Unicursal hexagram: single path, six-sided star, within a hexagon Honeycomb conjecture Havannah: abstract board game played on a six-sided hexagonal grid Central place theory References External links Definition and properties of a hexagon with interactive animation and construction with compass and straightedge. An Introduction to Hexagonal Geometry on Hexnet, a website devoted to hexagon mathematics. – an animated internet video about hexagons by CGP Grey. 6 (number) Constructible polygons Polygons by the number of sides Elementary shapes
Hexagon
[ "Mathematics" ]
3,017
[ "Constructible polygons", "Planes (geometry)", "Euclidean plane geometry" ]
59,735
https://en.wikipedia.org/wiki/Free%20group
In mathematics, the free group FS over a given set S consists of all words that can be built from members of S, considering two words to be different unless their equality follows from the group axioms (e.g. st = suu−1t but s ≠ t−1 for s,t,u ∈ S). The members of S are called generators of FS, and the number of generators is the rank of the free group. An arbitrary group G is called free if it is isomorphic to FS for some subset S of G, that is, if there is a subset S of G such that every element of G can be written in exactly one way as a product of finitely many elements of S and their inverses (disregarding trivial variations such as st = suu−1t). A related but different notion is a free abelian group; both notions are particular instances of a free object from universal algebra. As such, free groups are defined by their universal property. History Free groups first arose in the study of hyperbolic geometry, as examples of Fuchsian groups (discrete groups acting by isometries on the hyperbolic plane). In an 1882 paper, Walther von Dyck pointed out that these groups have the simplest possible presentations. The algebraic study of free groups was initiated by Jakob Nielsen in 1924, who gave them their name and established many of their basic properties. Max Dehn realized the connection with topology, and obtained the first proof of the full Nielsen–Schreier theorem. Otto Schreier published an algebraic proof of this result in 1927, and Kurt Reidemeister included a comprehensive treatment of free groups in his 1932 book on combinatorial topology. Later on in the 1930s, Wilhelm Magnus discovered the connection between the lower central series of free groups and free Lie algebras. Examples The group (Z,+) of integers is free of rank 1; a generating set is S = {1}. The integers are also a free abelian group, although all free groups of rank are non-abelian. A free group on a two-element set S occurs in the proof of the Banach–Tarski paradox and is described there. On the other hand, any nontrivial finite group cannot be free, since the elements of a free generating set of a free group have infinite order. In algebraic topology, the fundamental group of a bouquet of k circles (a set of k loops having only one point in common) is the free group on a set of k elements. Construction The free group FS with free generating set S can be constructed as follows. S is a set of symbols, and we suppose for every s in S there is a corresponding "inverse" symbol, s−1, in a set S−1. Let T = S ∪ S−1, and define a word in S to be any written product of elements of T. That is, a word in S is an element of the monoid generated by T. The empty word is the word with no symbols at all. For example, if S = {a, b, c}, then T = {a, a−1, b, b−1, c, c−1}, and is a word in S. If an element of S lies immediately next to its inverse, the word may be simplified by omitting the c, c−1 pair: A word that cannot be simplified further is called reduced. The free group FS is defined to be the group of all reduced words in S, with concatenation of words (followed by reduction if necessary) as group operation. The identity is the empty word. A reduced word is called cyclically reduced if its first and last letter are not inverse to each other. Every word is conjugate to a cyclically reduced word, and a cyclically reduced conjugate of a cyclically reduced word is a cyclic permutation of the letters in the word. For instance b−1abcb is not cyclically reduced, but is conjugate to abc, which is cyclically reduced. 
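The reduction procedure just described is straightforward to model in code. A minimal Python sketch (the tuple encoding of letters and the example words are illustrative choices, not from the article): it freely reduces words over S ∪ S−1 and multiplies them by concatenation followed by reduction.

```python
def inverse(letter):
    """Inverse of a single letter; a letter is a (generator, exponent) pair."""
    gen, exp = letter
    return (gen, -exp)

def reduce_word(word):
    """Freely reduce a word by repeatedly deleting adjacent pairs x x^-1."""
    out = []
    for letter in word:
        if out and out[-1] == inverse(letter):
            out.pop()            # the new letter cancels the previous one
        else:
            out.append(letter)
    return out

def multiply(u, v):
    """The group operation of F_S: concatenate, then reduce."""
    return reduce_word(u + v)

# Letters over S = {a, b}: exponent +1 is the generator, -1 its inverse symbol.
a, A = ("a", 1), ("a", -1)
b, B = ("b", 1), ("b", -1)

print(reduce_word([a, b, B, A, b]))   # [('b', 1)]: a b b^-1 a^-1 b reduces to b
print(multiply([a, b], [B, A]))       # []: (ab)(b^-1 a^-1) is the empty word, the identity
```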
The only cyclically reduced conjugates of abc are abc, bca, and cab. Universal property The free group FS is the universal group generated by the set S. This can be formalized by the following universal property: given any function from S to a group G, there exists a unique homomorphism φ: FS → G making the following diagram commute (where the unnamed mapping denotes the inclusion from S into FS): That is, homomorphisms FS → G are in one-to-one correspondence with functions S → G. For a non-free group, the presence of relations would restrict the possible images of the generators under a homomorphism. To see how this relates to the constructive definition, think of the mapping from S to FS as sending each symbol to a word consisting of that symbol. To construct φ for the given , first note that φ sends the empty word to the identity of G and it has to agree with on the elements of S. For the remaining words (consisting of more than one symbol), φ can be uniquely extended, since it is a homomorphism, i.e., φ(ab) = φ(a) φ(b). The above property characterizes free groups up to isomorphism, and is sometimes used as an alternative definition. It is known as the universal property of free groups, and the generating set S is called a basis for FS. The basis for a free group is not uniquely determined. Being characterized by a universal property is the standard feature of free objects in universal algebra. In the language of category theory, the construction of the free group (similar to most constructions of free objects) is a functor from the category of sets to the category of groups. This functor is left adjoint to the forgetful functor from groups to sets. Facts and theorems Some properties of free groups follow readily from the definition: Any group G is the homomorphic image of some free group FS. Let S be a set of generators of G. The natural map φ: FS → G is an epimorphism, which proves the claim. Equivalently, G is isomorphic to a quotient group of some free group FS. If S can be chosen to be finite here, then G is called finitely generated. The kernel Ker(φ) is the set of all relations in the presentation of G; if Ker(φ) can be generated by the conjugates of finitely many elements of F, then G is finitely presented. If S has more than one element, then FS is not abelian, and in fact the center of FS is trivial (that is, consists only of the identity element). Two free groups FS and FT are isomorphic if and only if S and T have the same cardinality. This cardinality is called the rank of the free group F. Thus for every cardinal number k, there is, up to isomorphism, exactly one free group of rank k. A free group of finite rank n > 1 has an exponential growth rate of order 2n − 1. A few other related results are: The Nielsen–Schreier theorem: Every subgroup of a free group is free. Furthermore, if the free group F has rank n and the subgroup H has index e in F, then H is free of rank 1 + e(n–1). A free group of rank k clearly has subgroups of every rank less than k. Less obviously, a (nonabelian!) free group of rank at least 2 has subgroups of all countable ranks. The commutator subgroup of a free group of rank k > 1 has infinite rank; for example for F(a,b), it is freely generated by the commutators [am, bn] for non-zero m and n. The free group in two elements is SQ universal; the above follows as any SQ universal group has subgroups of all countable ranks. 
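The rank formula from the Nielsen–Schreier theorem quoted above is easy to evaluate. A minimal Python sketch (the example indices are illustrative only):

```python
def schreier_rank(n, e):
    """Rank of a subgroup of index e in a free group of rank n (Nielsen-Schreier)."""
    return 1 + e * (n - 1)

# Finite-index subgroups of the free group of rank 2:
for e in (1, 2, 3, 10):
    print(f"index {e}: free of rank {schreier_rank(2, e)}")
# index 1 -> rank 2, index 2 -> rank 3, index 10 -> rank 11: the rank grows with
# the index even though every subgroup is again free.
```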
Any group that acts on a tree, freely and preserving the orientation, is a free group of countable rank (given by 1 plus the Euler characteristic of the quotient graph). The Cayley graph of a free group of finite rank, with respect to a free generating set, is a tree on which the group acts freely, preserving the orientation. As a topological space (a one-dimensional simplicial complex), this Cayley graph Γ(F) is contractible. For a finitely presented group G, the natural homomorphism defined above, φ : F → G, defines a covering map of Cayley graphs φ* : Γ(F) → Γ(G), in fact a universal covering. Hence, the fundamental group of the Cayley graph Γ(G) is isomorphic to the kernel of φ, the normal subgroup of relations among the generators of G. The extreme case is when G = {e}, the trivial group, considered with as many generators as F, all of them trivial; the Cayley graph Γ(G) is a bouquet of circles, and its fundamental group is F itself. Any subgroup of a free group, , corresponds to a covering space of the bouquet of circles, namely to the Schreier coset graph of F/H. This can be used to give a topological proof of the Nielsen-Schreier theorem above. The groupoid approach to these results, given in the work by P.J. Higgins below, is related to the use of covering spaces above. It allows more powerful results, for example on Grushko's theorem, and a normal form for the fundamental groupoid of a graph of groups. In this approach there is considerable use of free groupoids on a directed graph. Grushko's theorem has the consequence that if a subset B of a free group F on n elements generates F and has n elements, then B generates F freely. Free abelian group The free abelian group on a set S is defined via its universal property in the analogous way, with obvious modifications: Consider a pair (F, φ), where F is an abelian group and φ: S → F is a function. F is said to be the free abelian group on S with respect to φ if for any abelian group G and any function ψ: S → G, there exists a unique homomorphism f: F → G such that f(φ(s)) = ψ(s), for all s in S. The free abelian group on S can be explicitly identified as the free group F(S) modulo the subgroup generated by its commutators, [F(S), F(S)], i.e. its abelianisation. In other words, the free abelian group on S is the set of words that are distinguished only up to the order of letters. The rank of a free group can therefore also be defined as the rank of its abelianisation as a free abelian group. Tarski's problems Around 1945, Alfred Tarski asked whether the free groups on two or more generators have the same first-order theory, and whether this theory is decidable. answered the first question by showing that any two nonabelian free groups have the same first-order theory, and answered both questions, showing that this theory is decidable. A similar unsolved (as of 2011) question in free probability theory asks whether the von Neumann group algebras of any two non-abelian finitely generated free groups are isomorphic. See also Generating set of a group Presentation of a group Nielsen transformation, a factorization of elements of the automorphism group of a free group Normal form for free groups and free product of groups Free product Notes References W. Magnus, A. Karrass and D. Solitar, "Combinatorial Group Theory", Dover (1976). P.J. Higgins, 1971, "Categories and Groupoids", van Nostrand, {New York}. Reprints in Theory and Applications of Categories, 7 (2005) pp 1–195. 
Serre, Jean-Pierre, Trees, Springer (2003) (English translation of "arbres, amalgames, SL2", 3rd edition, astérisque 46 (1983)) P.J. Higgins, The fundamental groupoid of a graph of groups, Journal of the London Mathematical Society (2) 13 (1976), no. 1, 145–149. . . Geometric group theory Combinatorial group theory Free algebraic structures Properties of groups
Free group
[ "Physics", "Mathematics" ]
2,597
[ "Geometric group theory", "Mathematical structures", "Group actions", "Properties of groups", "Algebraic structures", "Category theory", "Free algebraic structures", "Symmetry" ]
59,785
https://en.wikipedia.org/wiki/IBM%20System/370
The IBM System/370 (S/370) is a range of IBM mainframe computers announced as the successors to the System/360 family on June 30, 1970. The series mostly maintains backward compatibility with the S/360, allowing an easy migration path for customers; this, plus improved performance, were the dominant themes of the product announcement. Early 370 systems differed from the 360 largely in their internal circuitry, moving from the Solid Logic Technology hybrid integrated circuits containing separate transistors to more modern monolithic integrated circuits containing multiple transistors per integrated circuit, which IBM referred to as Monolithic System Technology, or MST. The higher density packaging allowed several formerly optional features from the 360 line to be included as standard features of the machines, floating-point support for instance. The 370 also added a small number of new instructions. At the time of its introduction, the development of virtual memory systems had become a major theme in the computer market, and the 370 was considered highly controversial as it lacked this feature. This was addressed in 1972 with the System/370 Advanced Function and its associated dynamic address translation (DAT) hardware. All future machines in the lineup received this option, along with several new operating systems that supported it. Smaller additions were made throughout the lifetime of the line, which led to a profusion of models that were generally referred to by the processor number. One of the last major additions to the line, in 1988, was the set of ESA/370 extensions that allowed a machine to have multiple virtual address spaces and easily switch among them. The 370 was IBM's primary large mainframe offering from the 1970s through the 1980s. In September 1990, the System/370 line was replaced with the System/390. The 390, which was based on a new ESA/390 model, expanded the multiple memory concept to include full hardware virtualization that allowed it to run multiple operating systems at the same time. Evolution The original System/370 line was announced on June 30, 1970, with first customer shipment of the Models 155 and 165 planned for February 1971 and April 1971 respectively. The 155 first shipped in January 1971. System/370 underwent several architectural improvements during its roughly 20-year lifetime. The following features mentioned in the 11th edition of the System/370 Principles of Operation are either optional on S/360 but standard on S/370, introduced with S/370 or added to S/370 after announcement: Branch and Save; Channel Indirect Data Addressing; Channel-Set Switching; Clear I/O; Command Retry; Commercial Instruction Set; Conditional Swapping; CPU Timer and Clock Comparator; Dual-Address Space (DAS); Extended-Precision Floating Point; Extended Real Addressing; External Signals; Fast Release; Floating Point; Halt Device; I/O Extended Logout; Limited Channel Logout; Move Inverse; Multiprocessing; PSW-Key Handling; Recovery Extensions; Segment Protection; Service Signal; Start-I/O-Fast Queuing (SIOF); Storage-Key-Instruction Extensions; Storage-Key 4K-Byte Block; Suspend and Resume; Test Block; Translation; Vector; 31-Bit IDAWs. Initial models When the first System/370 machines, the Model 155 and the Model 165, were introduced, the System/370 architecture was described as an extension, but not a redesign, of IBM's System/360 architecture which was introduced in 1964. The System/370 architecture incorporated only a small number of changes to the System/360 architecture.
These changes included: 13 new instructions, among which were MOVE LONG (MVCL); COMPARE LOGICAL LONG (CLCL); thereby permitting operations on up to 2^24-1 bytes (16 MB), vs. the 256-byte limits on the 360's MVC and CLC; SHIFT AND ROUND DECIMAL (SRP), which multiplied or divided a packed decimal value by a power of 10, rounding the result when dividing; optional 128-bit (hexadecimal) floating-point arithmetic, introduced in the System/360 Model 85; a new higher-resolution time-of-day clock; and support for the block multiplexer channel introduced in the System/360 Model 85. All of the emulator features were designed to run under the control of the standard operating systems. IBM documented the S/370 emulator programs as integrated emulators. These models had core memory and did not include support for virtual storage, as they lacked a DAT (Dynamic Address Translation) box. Logic technology All models of the System/370 used IBM's form of monolithic integrated circuits called MST (Monolithic System Technology), making them third generation computers. MST provided System/370 with four to eight times the circuit density and over ten times the reliability when compared to the previous second generation SLT technology of the System/360. Monolithic memory On September 23, 1970, IBM announced the Model 145, a third model of the System/370, which was the first model to feature semiconductor main memory made from monolithic integrated circuits and was scheduled for delivery in the late summer of 1971. All subsequent S/370 models used such memory. Virtual storage In 1972, a very significant change was made when support for virtual storage was introduced with IBM's "System/370 Advanced Function" announcement. IBM had initially (and controversially) chosen to exclude virtual storage from the S/370 line. The August 2, 1972 announcement included: address relocation hardware on all S/370s except the original models 155 and 165; the new S/370 models 158 and 168, with address relocation hardware; four new operating systems: DOS/VS (DOS with virtual storage), OS/VS1 (OS/360 MFT with virtual storage), OS/VS2 (OS/360 MVT with virtual storage) Release 1, termed SVS (Single Virtual Storage), and Release 2, termed MVS (Multiple Virtual Storage) and planned to be available 20 months later (at the end of March 1974); and VM/370 – the re-implemented CP/CMS. Virtual storage had in fact been delivered on S/370 hardware before this announcement: In June 1971, on the S/370-145 (one of which had to be "smuggled" into Cambridge Scientific Center to prevent anybody noticing the arrival of an S/370 at that hotbed of virtual memory development – since this would have signaled that the S/370 was about to receive address relocation technology). The S/370-145 had an associative memory used by the microcode for the DOS compatibility feature from its first shipments in June 1971; the same hardware was used by the microcode for DAT. Although IBM famously chose to exclude virtual storage from the S/370 announcement, that decision was being reconsidered during the completion of the 145 engineering, partly because of virtual memory experience at CSC and elsewhere. The 145 microcode architecture simplified the addition of virtual storage, allowing this capability to be present in early 145s without the extensive hardware modifications needed in other models.
However, IBM did not document the 145's virtual storage capability, nor annotate the relevant bits in the control registers and PSW that were displayed on the operator control panel when selected using the roller switches. The Reference and Change bits of the Storage-protection Keys, however, were labeled on the rollers, a dead giveaway to anyone who had worked with the earlier 360/67. Existing S/370-145 customers were happy to learn that they did not have to purchase a hardware upgrade in order to run DOS/VS or OS/VS1 (or OS/VS2 Release 1 – which was possible, but not common because of the limited amount of main storage available on the S/370-145). Shortly after the August 2, 1972 announcement, DAT box (address relocation hardware) upgrades for the S/370-155 and S/370-165 were quietly announced, but were available only for purchase by customers who already owned a Model 155 or 165. After installation, these models were known as the S/370-155-II and S/370-165-II. IBM wanted customers to upgrade their 155 and 165 systems to the widely sold S/370-158 and -168. These upgrades were surprisingly expensive ($200,000 and $400,000, respectively) and had long ship date lead times after being ordered by a customer; consequently, they were never popular with customers, the majority of whom leased their systems via a third-party leasing company. This led to the original S/370-155 and S/370-165 models being described as "boat anchors". The upgrade, required to run OS/VS1 or OS/VS2, was not cost effective for most customers by the time IBM could actually deliver and install it, so many customers were stuck with these machines running MVT until their lease ended. It was not unusual for this to be another four, five or even six years for the more unfortunate ones, and turned out to be a significant factor in the slow adoption of OS/VS2 MVS, not only by customers in general, but for many internal IBM sites as well. Subsequent enhancements Later architectural changes primarily involved expansions in memory (central storage) – both physical memory and virtual address space – to enable larger workloads and meet client demands for more storage. This was the inevitable trend as Moore's Law eroded the unit cost of memory. As with all IBM mainframe development, preserving backward compatibility was paramount. Operating system specific assist, Extended Control Program Support (ECPS). extended facility and extension features for OS/VS1, MVS and VM. Exploiting levels of these operating systems, e.g., MVS/System Extensions (MVS/SE), reduce path length for some frequent functions. The Dual Address Space (DAS) facility allows a privileged program to move data between two address spaces without the overhead of allocating a buffer in common storage, moving the data to the buffer, scheduling an SRB in the target address space, moving the data to their final destination and freeing the buffer. IBM introduced DAS in 1981 for the 3033, but later made it available for some 43xx, 3031 and 3032 processors. MVS/System Product (MVS/SP) Version 1 exploited DAS if it was available. In October 1981, the 3033 and 3081 processors added "extended real addressing", which allowed 26-bit addressing for physical storage (but still imposed a 24-bit limit for any individual address space). This capability appeared later on other systems, such as the 4381 and 3090. 
The System/370 Extended Architecture (S/370-XA), first available in early 1983 on the 3081 and 3083 processors, provided a number of major enhancements, including expansion of virtual address spaces from 24-bits to 31-bits, expansion of real addresses from 24 or 26 bits to 31 bits, and a complete redesign of the I/O architecture. In February 1988, IBM announced the Enterprise Systems Architecture/370 (ESA/370) for enhanced (E) 3090 and 4381 models. It added sixteen 32-bit access registers, more addressing modes, and various facilities for working with multiple address spaces simultaneously. On September 5, 1990, IBM announced the Enterprise Systems Architecture/390 (ESA/390), upward compatible with ESA/370. Dual address space In 1981, IBM added the dual-address-space facility to System/370. This allows a program to have two address spaces; Control Register 1 contains the segment table origin (STO) for the primary address space and CR7 contains the STO for the secondary address space. The processor can run in primary-space mode or secondary-space mode. When in primary-space mode, instructions and data are fetched from the primary address space. When in secondary-space mode, operands whose addresses defined to be logical are fetched from the secondary address space; it is unpredictable whether instructions will be fetched from the primary or secondary address space, so code must be mapped into both address spaces in the same address ranges in both address spaces. The program can switch between primary-space and secondary-space mode with the SET ADDRESS SPACE CONTROL instruction; there are also MOVE TO PRIMARY and MOVE TO SECONDARY instructions that copy a range of bytes from an address range in one address space to an address range in the other address space. Address spaces are identified by an address-space number (ASN). The ASN contains indices into a two-level table, structured similarly to a two-level page table, with entries containing a presence bit, various fields indicating permissions granted for access to the address space, the starting address and length of the segment table for the address space, and other information. The SET SECONDARY ASN instruction makes the address space identified by a given ASN value the current secondary address space. Extended real addressing The initial System/370 architecture has a 24-bit limit on physical addresses, limiting physical memory to 16 MB. Page table entries have 12 bits of page frame address with 4 KB pages and 13 bits of page frame address with 2 KB pages, so combining a 12-bit page frame address with a 12-bit offset within the page or a 13-bit page frame address with an 11-bit offset within the page produces a 24-bit physical address. The extended real addressing feature in System/370 raises this limit to 26 bits, increasing the physical memory limit to 64 MB. Two reserved bits in the page table entry for 4 KB pages were used to extend the page frame address. The extended real addressing is only available with address translation enabled and with 4 KB pages. Series and models Models sorted by date introduced (table) The following table summarizes the major S/370 series and models. The second column lists the principal architecture associated with each series. Many models implemented more than one architecture; thus, 308x processors initially shipped as S/370 architecture, but later offered XA; and many processors, such as the 4381, had microcode that allowed customer selection between S/370 or XA (later, ESA) operation. 
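The address arithmetic behind the extended real addressing scheme described above can be illustrated directly. A minimal Python sketch (a simplification for illustration, not IBM code): with 4 KB pages the byte offset occupies 12 bits, so a 12-bit page frame number gives 24-bit real addresses (16 MB), and widening the frame number by the two formerly reserved bits gives 26-bit addresses (64 MB).

```python
PAGE_SHIFT = 12                        # 4 KB pages -> 12-bit byte offset

def real_address(page_frame, offset):
    """Compose a real storage address from a page frame number and a byte offset."""
    assert 0 <= offset < (1 << PAGE_SHIFT)
    return (page_frame << PAGE_SHIFT) | offset

# 12-bit page frame numbers (original S/370): addresses top out at 2**24 - 1 (16 MB).
print(hex(real_address(0xFFF, 0xFFF)))     # 0xffffff
# 14-bit page frame numbers (extended real addressing): 2**26 - 1 (64 MB).
print(hex(real_address(0x3FFF, 0xFFF)))    # 0x3ffffff
```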
Note also the confusing term "System/370-compatible", which appeared in IBM source documents to describe certain products. Outside IBM, this term would more often describe systems from Amdahl Corporation, Hitachi, and others, that could run the same S/370 software. This choice of terminology by IBM may have been a deliberate attempt to ignore the existence of those plug compatible manufacturers (PCMs), because they competed aggressively against IBM hardware dominance. Models grouped by Model number (detailed) IBM used the name System/370 to announce the following eleven (three-digit) offerings: System/370 Model 115 The IBM System/370 Model 115 was announced March 13, 1973 as "an ideal System/370 entry system for users of IBM's System/3, 1130 computing system and System/360 Models 20, 22 and 25." It was delivered with "a minimum of two (of IBM's newly announced) directly attached IBM 3340 disk drives." Up to four 3340s could be attached. The CPU could be configured with 65,536 (64K) or 98,304 (96K) bytes of main memory. An optional 360/20 emulator was available. The 115 was withdrawn on March 9, 1981. System/370 Model 125 The IBM System/370 Model 125 was announced Oct 4, 1972. Two, three or four directly attached IBM 3333 disk storage units provided "up to 400 million bytes online." Main memory was either 98,304 (96K) or 131,072 (128K) bytes. The 125 was withdrawn on March 9, 1981. System/370 Model 135 The IBM System/370 Model 135 was announced Mar 8, 1971. Options for the 370/135 included a choice of four main memory sizes; IBM 1400 series (1401, 1440 and 1460) emulation was also offered. A "reading device located in the Model 135 console" allowed updates and adding features to the Model 135's microcode. The 135 was withdrawn on October 16, 1979. System/370 Model 138 The IBM System/370 Model 138 which was announced Jun 30, 1976 was offered with either 524,288 (512K) or 1,048,576 (1 MB) of memory. The latter was "double the maximum capacity of the Model 135," which "can be upgraded to the new computer's internal performance levels at customer locations." The 138 was withdrawn on November 1, 1983. System/370 Model 145 The IBM System/370 Model 145 was announced Sep 23, 1970, three months after the 155 and 165 models. It first shipped in June 1971. The first System/370 to use monolithic main memory, the Model 145 was offered in six memory sizes. A portion of the main memory, the "Reloadable Control Storage" (RCS) was loaded from a prewritten disk cartridge containing microcode to implement, for example, all needed instructions, I/O channels, and optional instructions to enable the system to emulate earlier IBM machines. The 145 was withdrawn on October 16, 1979. System/370 Model 148 The IBM System/370 Model 148 had the same announcement and withdrawal dates as the Model 138. As with the option to field-upgrade a 135, a 370/145 could be field-upgraded "at customer locations" to 148-level performance. The upgraded 135 and 145 systems were "designated the Models 135-3 and 145-3." System/370 Model 155 The IBM System/370 Model 155 and the Model 165 were announced Jun 30, 1970, the first of the 370s introduced. Neither had a DAT box; they were limited to running the same non-virtual-memory operating systems available for the System/360. The 155 first shipped in January 1971. 
The OS/DOS (DOS/360 programs under OS/360), 1401/1440/1460 and 1410/7010 and 7070/7074 compatibility features were included, and the supporting integrated emulator programs could operate concurrently with standard System/370 workloads. In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 155 II, which added a DAT box. Both the 155 and the 165 were withdrawn on December 23, 1977. System/370 Model 158 The IBM System/370 Model 158 and the 370/168 were announced Aug 2, 1972. It included dynamic address translation (DAT) hardware, a prerequisite for the new virtual memory operating systems (DOS/VS, OS/VS1, OS/VS2). A tightly coupled multiprocessor (MP) model was available, as was the ability to loosely couple this system to another 360 or 370 via an optional channel-to-channel adapter. The 158 and 168 were withdrawn on September 15, 1980. System/370 Model 165 The IBM System/370 Model 165 was described by IBM as "more powerful" compared to the "medium-scale" 370/155. It first shipped in April 1971. Compatibility features included emulation for 7070/7074, 7080, and 709/7090/7094/7094 II. Some have described the 360/85's use of microcoded vs hardwired as a bridge to the 370/165. In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 165 II which added a DAT box. The 165 was withdrawn on December 23, 1977. System/370 Model 168 The IBM System/370 Model 168 included "up to eight megabytes" of main memory, double the maximum of 4 megabytes on the 370/158. It included dynamic address translation (DAT) hardware, a pre-requisite for the new virtual memory operating systems. Although the 168 served as IBM's "flagship" system, a 1975 newsbrief said that IBM boosted the power of the 370/168 again "in the wake of the Amdahl challenge... only 10 months after it introduced the improved 168-3 processor." The 370/168 was not withdrawn until September 1980. System/370 Model 195 The IBM System/370 Model 195 was announced Jun 30, 1970 and, at that time, it was "IBM's most powerful computing system." Its introduction came about 14 months after the announcement of its direct predecessor, the 360/195. Both 195 machines were withdrawn Feb. 9, 1977. System/370-compatible Beginning in 1977, IBM began to introduce new systems, using the description "A compatible member of the System/370 family." IBM 303X The first of the initial high end machines, IBM's 3033, was announced March 25, 1977 and was delivered the following March, at which time a multiprocessor version of the 3033 was announced. IBM described it as "The Big One." IBM noted about the 3033, looking back, that "When it was rolled out on March 25, 1977, the 3033 eclipsed the internal operating speed of the company's previous flagship the System/370 Model 168-3 ..." The IBM 3031 and IBM 3032 were announced Oct. 7, 1977 and withdrawn Feb. 8, 1985. IBM 308X Three systems comprised the next series of high end machines, IBM's 308X systems: The 3081 (announced Nov 12, 1980) had 2 CPUs The 3083 (announced Mar 31, 1982) had 1 CPU The 3084 (announced Sep 3, 1982) had 4 CPUs Despite the numbering, the least powerful was the 3083, which could be field-upgraded to a 3081; the 3084 was the top of the line. These models introduced IBM's Extended Architecture's 31-bit address capability and a set of backward compatible MVS/Extended Architecture (MVS/XA) software replacing previous products and part of OS/VS2 R3.8: All three 308x systems were withdrawn on August 4, 1987. 
IBM 3090 The next series of high-end machines, the IBM 3090, began with models 200 and 400. They were announced Feb. 12, 1985, and were configured with two or four CPUs respectively. IBM subsequently announced models 120, 150, 180, 300, 500 and 600 with lower, intermediate and higher capacities; the first digit of the model number gives the number of central processors. Starting with the E models, and continuing with the J and S models, IBM offered Enterprise Systems Architecture/370 (ESA/370), Processor Resource/System Manager (PR/SM) and a set of backward compatible MVS/Enterprise System Architecture (MVS/ESA) software replacing previous products: IBM's offering of an optional vector facility (VF) extension for the 3090 came at a time when Vector processing/Array processing suggested names like Cray and Control Data Corporation (CDC). The 200 and 400 were withdrawn on May 5, 1989. IBM 4300 The first pair of IBM 4300 processors were Mid/Low end systems announced Jan 30, 1979 as "compact (and).. compatible with System/370." The 4331 was subsequently withdrawn on November 18, 1981, and the 4341 on February 11, 1986. Other models were the 4321, 4361 and 4381. The 4361 has "Programmable Power-Off -- enables the user to turn off the processor under program control"; "Unit power off" is (also) part of the 4381 feature list. IBM offered many Model Groups and models of the 4300 family, ranging from the entry level 4331 to the 4381, described as "one of the most powerful and versatile intermediate system processors ever produced by IBM." The 4381 Model Group 3 was dual-CPU. IBM 9370 This low-end system, announced October 7, 1986, was "designed to satisfy the computing requirements of IBM customers who value System/370 affinity" and "small enough and quiet enough to operate in an office environment." IBM also noted its sensitivity to "entry software prices, substantial reductions in support and training requirements, and modest power consumption and maintenance costs." Furthermore, it stated its awareness of the needs of small-to-medium size businesses to be able to respond, as "computing requirements grow," adding that "the IBM 9370 system can be easily expanded by adding additional features and racks to accommodate..." This came at a time when Digital Equipment Corporation (DEC) and its VAX systems were strong competitors in both hardware and software; the media of the day carried IBM's alleged "VAX Killer" phrase, albeit often skeptically. Clones In the 360 era, a number of manufacturers had already standardized upon the IBM/360 instruction set and, to a degree, 360 architecture. Notable computer makers included Univac with the UNIVAC 9000 series, RCA with the RCA Spectra 70 series, English Electric with the English Electric System 4, and the Soviet ES EVM. These computers were not perfectly compatible, nor (except for the Russian efforts) were they intended to be. That changed in the 1970s with the introduction of the IBM/370 and Gene Amdahl's launch of his own company. About the same time, Japanese giants began eyeing the lucrative mainframe market both at home and abroad. One Japanese consortium focused upon IBM and two others from the BUNCH (Burroughs/Univac/NCR/Control Data/Honeywell) group of IBM's competitors. The latter efforts were abandoned and eventually all Japanese efforts focused on the IBM mainframe lines. 
Some of the era's clones included: Architecture details IBM documentation numbers the bits from high order to low order; the most significant (leftmost) bit is designated as bit number 0. S/370 also refers to a computer system architecture specification, and is a direct and mostly backward compatible evolution of the System/360 architecture from which it retains most aspects. This specification does not make any assumptions on the implementation itself, but rather describes the interfaces and the expected behavior of an implementation. The architecture describes mandatory interfaces that must be available on all implementations and optional interfaces which may or may not be implemented. Some of the aspects of this architecture are: Big endian byte ordering One or more processors with: 16 32-bit General purpose registers 16 32-bit Control registers 4 64-bit Floating-point registers A 64-bit Program status word (PSW) which describes (among other things) Interrupt masks Privilege states A condition code A 24-bit instruction address Timing facilities (Time of day clock, interval timer, CPU timer and clock comparator) An interruption mechanism, maskable and unmaskable interruption classes and subclasses An instruction set. Each instruction is wholly described and also defines the conditions under which an exception is recognized in the form of program interruption. A memory (called storage) subsystem with: 8 bits per byte A special processor communication area starting at address 0 Key controlled protection 24-bit addressing Manual control operations that provide: A bootstrap process (a process called Initial Program Load or IPL) Operator-initiated interrupts Resetting the system Basic debugging facilities Manual display and modifications of the system's state (memory and processor) An Input/Output mechanism, which does not describe the devices themselves Some of the optional features are: A Dynamic Address Translation (DAT) mechanism that can be used to implement a virtual memory system Floating point instructions IBM took great care to ensure that changes to the architecture would remain compatible for unprivileged (problem state) programs; some new interfaces did not break the initial interface contract for privileged (supervisor mode) programs. Some examples are ECPS:MVS, a feature to enhance performance for the MVS/370 operating systems, and ECPS:VM, a feature to enhance performance for the VM operating systems. Other changes were compatible only for unprivileged programs, although the changes for privileged programs were of limited scope and well defined. Some examples are: ECPS:VSE, a feature to enhance performance for the DOS/VSE operating system, and S/370-XA, a feature to provide a new I/O interface and to support 31-bit virtual and physical addressing. Great care was taken in order to ensure that further modifications to the architecture would remain compatible, at least as far as non-privileged programs were concerned. This philosophy predates the definition of the S/370 architecture and started with the S/360 architecture. If certain rules are adhered to, a program written for this architecture will run with the intended results on the successors of this architecture. Such an example is that the S/370 architecture specifies that the 64-bit PSW register bit number 32 has to be set to 0 and that doing otherwise leads to an exception.
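Because IBM numbers bits from the most significant end, the PSW bit 32 mentioned above sits at the position usually written as 2^31 when counting from the least significant bit. A minimal Python sketch of the conversion (the helper names are hypothetical, not IBM terminology):

```python
def msb0_mask(bit, width=64):
    """Mask for a bit numbered from the most significant end (IBM convention)."""
    return 1 << (width - 1 - bit)

def psw_bit_is_set(psw, bit):
    """Test an IBM-numbered bit of a 64-bit PSW value."""
    return bool(psw & msb0_mask(bit))

psw = msb0_mask(32)               # a PSW word with only IBM bit 32 set
print(hex(psw))                   # 0x80000000, i.e. the 2**31 position
print(psw_bit_is_set(psw, 32))    # True
```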
Subsequently, when the S/370-XA architecture was defined, it was stated that this bit would indicate whether the program was one expecting a 24-bit address architecture or a 31-bit address architecture. Thus, most programs that ran on the 24-bit architecture can still run on 31-bit systems; the 64-bit z/Architecture has an additional mode bit for 64-bit addresses, so that those programs, and programs that ran on the 31-bit architecture, can still run on 64-bit systems. However, not all of the interfaces can remain compatible. Emphasis was put on having non-control programs (called problem state programs) remain compatible. Thus, operating systems have to be ported to the new architecture because the control interfaces can be (and were) redefined in an incompatible way. For example, the I/O interface was redesigned in S/370-XA, making S/370 programs that issue I/O operations unusable as-is. S/370 replacement IBM replaced the System/370 line with the System/390 in the 1990s, and similarly extended the architecture from ESA/370 to ESA/390. This was a minor architectural change, and was upwards compatible. In 2000, the System/390 was replaced with the zSeries (now called IBM Z). The zSeries mainframes introduced the 64-bit z/Architecture, the most significant design improvement since the 31-bit transition. All have retained essential backward compatibility with the original S/360 architecture and instruction set. GCC and Linux on the S/370 The GNU Compiler Collection (GCC) had a back end for S/370, but it became obsolete over time and was finally replaced with the S/390 backend. Although the S/370 and S/390 instruction sets are essentially the same (and have been consistent since the introduction of the S/360), GCC operability on older systems has been abandoned. GCC currently works on machines that have the full instruction set of System/390 Generation 5 (G5), the hardware platform for the initial release of Linux/390. However, a separately maintained version of GCC 3.2.3 that works for the S/370 is available, known as GCCMVS. I/O evolutions I/O evolution from original S/360 to S/370 The block multiplexer channel, previously available only on the 360/85 and 360/195, was a standard part of the architecture. For compatibility it could operate as a selector channel. Block multiplexer channels were available in single byte (1.5 MB/s) and double byte (3.0 MB/s) versions. I/O evolution since original S/370 As part of the DAT announcement, IBM upgraded channels to have Indirect Data Address Lists (IDALs), a form of I/O MMU. Data streaming channels had a speed of 3.0 MB/s over a single byte interface, later upgraded to 4.5 MB/s. Channel set switching allowed one processor in a multiprocessor configuration to take over the I/O workload from the other processor if it failed or was taken offline for maintenance. System/370-XA introduced a channel subsystem that performed I/O queuing previously done by the operating system. The System/390 introduced the ESCON channel, an optical fiber, half-duplex, serial channel with a maximum distance of 43 kilometers. Originally operating at 10 Mbyte/s, it was subsequently increased to 17 Mbyte/s.
Subsequently, FICON became the standard IBM mainframe channel; FIbre CONnection (FICON) is the IBM proprietary name for the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel (FC) protocol used to map both IBM's antecedent (either ESCON or parallel Bus and Tag) channel-to-control-unit cabling infrastructure and protocol onto standard FC services and infrastructure at data rates up to 16 Gigabits/sec at distances up to 100 km. Fibre Channel Protocol (FCP) allows attaching SCSI devices using the same infrastructure as FICON. See also Hercules emulator IBM System/360 IBM System/370-XA IBM ESA/390 IBM System z PC-based IBM-compatible mainframes Notes References S370-1st S370 S370-MVS S370-VM S370-XA-1st S370-XA S370-ESA S/390-ESA SIE Further reading Chapter 4 (pp. 111–166) describes the System/370 architecture; Chapter 5 (pp. 167–206) describes the System/370 Extended Architecture. External links Hercules System/370 Emulator, a software implementation of IBM System/370 370 Computing platforms Computer-related introductions in 1970 1990s disestablishments 32-bit computers
IBM System/370
[ "Technology" ]
6,879
[ "Computing platforms" ]
59,823
https://en.wikipedia.org/wiki/Misfit%20%28short%20story%29
"Misfit" is a science fiction short story by American writer Robert A. Heinlein. It was originally titled "Cosmic Construction Corps" before being renamed by the editor John W. Campbell and published in the November 1939 issue of Astounding Science Fiction. "Misfit" was Heinlein's second published story. One of the earliest of Heinlein's Future History stories, it was later included in the collections Revolt in 2100 and The Past Through Tomorrow. Plot summary A coming-of-age story that follows Andrew Jackson Libby, a boy from Earth with extraordinary mathematical ability but meager education. Finding few opportunities on Earth, he joins the Cosmic Construction Corps, a future military-led version of the US Depression-era Civilian Conservation Corps employing out-of-work youth to construct the infrastructure needed to colonize the Solar System. With a group of other inexperienced young men he is assigned to a ship traveling to the asteroid belt, where their task is to build a base on an asteroid and then move it into a more convenient orbit between Earth and Mars. Libby comes to the Captain's attention during the process of blasting holes in the asteroid for rocket engines when Libby realizes that a mistake has been made in calculating the size of the charge, preventing a catastrophic blast. He is assigned to the ship's astrogation computer. During the move to the destination orbit, the computer malfunctions, and Libby takes over, performing all the complex calculations in his head. The asteroid is settled successfully into its final orbit. "Slipstick" Libby became one of Heinlein's recurring characters and would later appear in several works associated with Lazarus Long, among them Methuselah's Children and The Cat Who Walks Through Walls. The story includes one of the earliest uses of the term "space marines". References External links "Misfit" on the Internet Archive 1939 short stories Fiction about main-belt asteroids Mathematics fiction books Short stories by Robert A. Heinlein Science fiction short stories Works originally published in Analog Science Fiction and Fact
Misfit (short story)
[ "Mathematics" ]
421
[ "Recreational mathematics", "Mathematics fiction books" ]
59,853
https://en.wikipedia.org/wiki/Retroreflector
A retroreflector (sometimes called a retroflector or cataphote) is a device or surface that reflects radiation (usually light) back to its source with minimum scattering. This works over a wide range of angles of incidence, unlike a planar mirror, which does this only if the mirror is exactly perpendicular to the wave front, having a zero angle of incidence. Being directed, the retroreflector's reflection is brighter than that of a diffuse reflector. Corner reflectors and cat's eye reflectors are the most used kinds. Types There are several ways to obtain retroreflection: Corner reflector A set of three mutually perpendicular reflective surfaces, placed to form the internal corner of a cube, works as a retroreflector. The three corresponding normal vectors of the corner's sides form a basis in which to represent the direction of an arbitrary incoming ray, (a, b, c). When the ray reflects from the first side, say x, the ray's x-component, a, is reversed to −a, while the y- and z-components are unchanged. Therefore, as the ray reflects first from side x then side y and finally from side z, the ray direction goes from (a, b, c) to (−a, b, c) to (−a, −b, c) to (−a, −b, −c), and it leaves the corner with all three components of its direction exactly reversed. Corner reflectors occur in two varieties. In the more common form, the corner is literally the truncated corner of a cube of transparent material such as conventional optical glass. In this structure, the reflection is achieved either by total internal reflection or silvering of the outer cube surfaces. The second form uses mutually perpendicular flat mirrors bracketing an air space. These two types have similar optical properties. A large, relatively thin retroreflector can be formed by combining many small corner reflectors, using the standard hexagonal tiling. Cat's eye Another common type of retroreflector consists of refracting optical elements with a reflective surface, arranged so that the focal surface of the refractive element coincides with the reflective surface, typically a transparent sphere and (optionally) a spherical mirror. In the paraxial approximation, this effect can be achieved with lowest divergence with a single transparent sphere when the refractive index of the material is exactly twice the refractive index ni of the medium from which the radiation is incident (ni is around 1 for air). In that case, the sphere surface behaves as a concave spherical mirror with the required curvature for retroreflection. In practice, the optimal index of refraction may be lower than this ideal value due to several factors. For one, it is sometimes preferable to have an imperfect, slightly divergent retroreflection, as in the case of road signs, where the illumination and observation angles are different. Due to spherical aberration, there also exists a radius from the centerline at which incident rays are focused at the center of the rear surface of the sphere. Finally, high index materials have higher Fresnel reflection coefficients, so the efficiency of coupling of the light from the ambient into the sphere decreases as the index becomes higher. Commercial retroreflective beads thus vary in index from around 1.5 (common forms of glass) up to around 1.9 (commonly barium titanate glass). The spherical aberration problem with the spherical cat's eye can be solved in various ways, one being a spherically symmetrical index gradient within the sphere, such as in the Luneburg lens design. Practically, this can be approximated by a concentric sphere system.
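The sign-flipping argument above for the corner reflector is easy to verify numerically. A minimal Python sketch (illustrative, not from the article): each reflection from one of the three mutually perpendicular faces negates one component of the ray direction, so after all three reflections the direction is exactly reversed.

```python
import numpy as np

def reflect(direction, normal):
    """Reflect a direction vector off a plane mirror with the given unit normal."""
    direction = np.asarray(direction, dtype=float)
    normal = np.asarray(normal, dtype=float)
    return direction - 2.0 * np.dot(direction, normal) * normal

# An arbitrary incoming ray direction, reflected off the x, y and z faces in turn.
d = np.array([0.3, -0.5, 0.8])
for face_normal in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    d = reflect(d, face_normal)

print(d)   # [-0.3  0.5 -0.8]: every component of the original direction is reversed
```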
Because the back-side reflection for an uncoated sphere is imperfect, it is fairly common to add a metallic coating to the back half of retroreflective spheres to increase the reflectance, but this implies that the retroreflection only works when the sphere is oriented in a particular direction. An alternative form of the cat's eye retroreflector uses a normal lens focused onto a curved mirror rather than a transparent sphere, though this type is much more limited in the range of incident angles that it retroreflects. The term cat's eye derives from the resemblance of the cat's eye retroreflector to the optical system that produces the well-known phenomenon of "glowing eyes" or eyeshine in cats and other vertebrates (which are only reflecting light, rather than actually glowing). The combination of the eye's lens and the cornea form the refractive converging system, while the tapetum lucidum behind the retina forms the spherical concave mirror. Because the function of the eye is to form an image on the retina, an eye focused on a distant object has a focal surface that approximately follows the reflective tapetum lucidum structure, which is the condition required to form a good retroreflection. This type of retroreflector can consist of many small versions of these structures incorporated in a thin sheet or in paint. In the case of paint containing glass beads, the paint adheres the beads to the surface where retroreflection is required and the beads protrude, their diameter being about twice the thickness of the paint. Phase-conjugate mirror A third, much less common way of producing a retroreflector is to use the nonlinear optical phenomenon of phase conjugation. This technique is used in advanced optical systems such as high-power lasers and optical transmission lines. Phase-conjugate mirrors reflect an incoming wave so that the reflected wave exactly follows the path it has previously taken, and require a comparatively expensive and complex apparatus, as well as large quantities of power (as nonlinear optical processes can be efficient only at high enough intensities). However, phase-conjugate mirrors have an inherently much greater accuracy in the direction of the retroreflection, which in passive elements is limited by the mechanical accuracy of the construction. Operation Retroreflectors are devices that operate by returning light back to the light source along the same light direction. The coefficient of luminous intensity, RI, is the measure of a reflector performance, which is defined as the ratio of the strength of the reflected light (luminous intensity) to the amount of light that falls on the reflector (normal illuminance). A reflector appears brighter as its RI value increases. The RI value of the reflector is a function of the color, size, and condition of the reflector. Clear or white reflectors are the most efficient, and appear brighter than other colors. The surface area of the reflector is proportional to the RI value, which increases as the reflective surface increases. The RI value is also a function of the spatial geometry between the observer, light source, and reflector. Figures 1 and 2 show the observation angle and entrance angle between the automobile's headlights, bicycle, and driver. The observation angle is the angle formed by the light beam and the driver's line of sight. Observation angle is a function of the distance between the headlights and the driver's eye, and the distance to the reflector. 
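The dependence of the observation angle on separation and distance amounts to elementary trigonometry. A minimal Python sketch (the separation and distance figures are illustrative only, not taken from the article):

```python
import math

def observation_angle_deg(separation_m, distance_m):
    """Observation angle in degrees for a given headlight-to-eye separation and range."""
    return math.degrees(math.atan2(separation_m, distance_m))

# A small separation (passenger car) versus a large one (truck), both 250 m from the reflector.
for vehicle, separation in [("car", 0.6), ("truck", 2.0)]:
    print(f"{vehicle}: {observation_angle_deg(separation, 250.0):.2f} degrees")
# The larger separation gives a larger observation angle at the same distance.
```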
Traffic engineers use an observation angle of 0.2 degrees to simulate a reflector target about 800 feet in front of a passenger automobile. As the observation angle increases, the reflector performance decreases. For example, a truck has a large separation between the headlight and the driver's eye compared to a passenger vehicle. A bicycle reflector appears brighter to the passenger car driver than to the truck driver at the same distance from the vehicle to the reflector. The light beam and the normal axis of the reflector as shown in Figure 2 form the entrance angle. The entrance angle is a function of the orientation of the reflector to the light source. For example, the entrance angle between an automobile approaching a bicycle at an intersection 90 degrees apart is larger than the entrance angle for a bicycle directly in front of an automobile on a straight road. The reflector appears brightest to the observer when it is directly in line with the light source. The brightness of a reflector is also a function of the distance between the light source and the reflector. At a given observation angle, as the distance between the light source and the reflector decreases, the light that falls on the reflector increases. This increases the amount of light returned to the observer and the reflector appears brighter. Applications On roads Retroreflection (sometimes called retroflection) is used on road surfaces, road signs, vehicles, and clothing (large parts of the surface of special safety clothing, less on regular coats). When the headlights of a car illuminate a retroreflective surface, the reflected light is directed towards the car and its driver (rather than in all directions as with diffuse reflection). However, a pedestrian can see retroreflective surfaces in the dark only if there is a light source directly between them and the reflector (e.g., via a flashlight they carry) or directly behind them (e.g., via a car approaching from behind). "Cat's eyes" are a particular type of retroreflector embedded in the road surface and are used mostly in the UK and parts of the United States. Corner reflectors are better at sending the light back to the source over long distances, while spheres are better at sending the light to a receiver somewhat off-axis from the source, as when the light from headlights is reflected into the driver's eyes. Retroreflectors can be embedded in the road (level with the road surface), or they can be raised above the road surface. Raised reflectors are visible for very long distances (typically 0.5–1 kilometer or more), while sunken reflectors are visible only at very close ranges due to the higher angle required to properly reflect the light. Raised reflectors are generally not used in areas that regularly experience snow during winter, as passing snowplows can tear them off the roadways. Stress on roadways caused by cars running over embedded objects also contributes to accelerated wear and pothole formation. Retroreflective road paint is thus very popular in Canada and parts of the United States, as it is not affected by the passage of snowplows and does not affect the interior of the roadway. Where weather permits, embedded or raised retroreflectors are preferred as they last much longer than road paint, which is weathered by the elements, can be obscured by sediment or rain, and is ground away by the passage of vehicles. 
For signs For traffic signs and vehicle operators, the light source is a vehicle's headlights, where the light is sent to the traffic sign face and then returned to the vehicle operator. Retroreflective traffic sign faces are manufactured with glass beads or prismatic reflectors embedded in a base sheeting layer so that the face reflects light, making the sign appear brighter and more visible to the vehicle operator under darkened conditions. According to the United States National Highway Traffic Safety Administration (NHTSA), the Traffic Safety Facts 2000 publication states that the fatal crash rate is three to four times higher at night than during the day. A misconception many people have is that retroreflectivity is only important during night-time travel. However, in recent years, more states and agencies require that headlights be turned on in inclement weather such as rain and snow. According to the United States Federal Highway Administration (FHWA): Approximately 24% of all vehicle accidents occur during adverse weather (rain, sleet, snow and fog). Rain conditions account for 47% of weather-related accidents. These statistics are based on 14-year averages from 1995 to 2008. The FHWA's Manual on Uniform Traffic Control Devices requires that signs be either illuminated or made with retroreflective sheeting materials, and though most signs in the U.S. are made with retroreflective sheeting materials, they degrade over time. Until now, there has been little information available to determine how long the retroreflectivity lasts. The MUTCD now requires that agencies maintain traffic signs to a set of minimum levels but provides a variety of maintenance methods that agencies can use for compliance. The minimum retroreflectivity requirements do not imply that an agency must measure every sign. Rather, the new MUTCD language describes methods that agencies can use to maintain traffic sign retroreflectivity at or above the minimum levels. In Canada, aerodrome lighting can be replaced by appropriately colored retroreflectors, the most important of which are the white retroreflectors that delineate the runway edges; these must be visible to aircraft equipped with landing lights from up to 2 nautical miles away. Ships, boats, emergency gear Retroreflective tape is recognized and recommended by the International Convention for the Safety of Life at Sea (SOLAS) because of its high reflectivity of both light and radar signals. Application to life rafts, personal flotation devices, and other safety gear makes it easy to locate people and objects in the water at night. When applied to boat surfaces it creates a larger radar signature—particularly for fiberglass boats, which produce very little radar reflection on their own. It conforms to International Maritime Organization regulation IMO Res. A.658 (16) and meets U.S. Coast Guard specification 46 CFR Part 164, Subpart 164.018/5/0. Examples of commercially available products are 3M part numbers 3150A and 6750I, and Orafol Oralite FD1403. Surveying In surveying, a retroreflector—usually referred to as a prism—is normally attached to a surveying pole and is used as a target for distance measurement by, for example, a total station. The instrument operator or robot aims a laser beam at the retroreflector. The instrument measures the propagation time of the light and converts it to a distance. Prisms are used with survey and 3D point monitoring systems to measure changes in the horizontal and vertical position of a point.
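A minimal sketch of the time-of-flight conversion used by such instruments; real total stations apply additional corrections (prism offset, atmospheric refraction) that are omitted here, and the example timing value is illustrative:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(round_trip_time_s: float, group_refractive_index: float = 1.0) -> float:
    """One-way distance to the retroreflector from the measured round-trip time.
    The beam travels out and back, hence the division by two."""
    return (C / group_refractive_index) * round_trip_time_s / 2.0

# Illustrative reading: a round trip of about 6.67 microseconds corresponds to roughly 1 km.
print(round(distance_from_round_trip(6.67e-6), 1))  # ~999.8 m
```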
Two prisms may also serve as targets for angle measurements, using total stations or simpler theodolites; this usage, reminiscent of the heliotrope, does not involve retroreflection per se, it only requires visibility by means of any source of illumination (such as the sun) for direct sighting to the center of the target prism as seen from the optical instrument. In space On the Moon Astronauts on the Apollo 11, 14, and 15 missions left retroreflectors on the Moon as part of the Lunar Laser Ranging Experiment. The Soviet Lunokhod 1 and Lunokhod 2 rovers also carried smaller arrays. Reflected signals were initially received from Lunokhod 1, but no return signals were detected from 1971 until 2010, at least in part due to some uncertainty in its location on the Moon. In 2010, it was found in Lunar Reconnaissance Orbiter photographs and the retroreflectors have been used again. Lunokhod 2's array continues to return signals to Earth. Even under good viewing conditions, only a single reflected photon is received every few seconds. This makes the job of filtering laser-generated photons from naturally occurring photons challenging. Vikram lander of Chandrayaan-3 left Laser Retroreflector Array (LRA) instrument supplied by NASA's Goddard Space Flight Center as part of international collaboration with ISRO. On 12 December 2023, Lunar Reconnaissance Orbiter was successfully able to detect transmitted laser pulses from Vikram lander. On Mars A similar device, the Laser Retroreflector Array (LaRA), has been incorporated in the Mars Perseverance rover. The retroreflector was designed by the National Institute for Nuclear Physics of Italy, which built the instrument on behalf of the Italian Space Agency. In satellites Many artificial satellites carry retroreflectors so they can be tracked from ground stations. Some satellites were built solely for laser ranging. LAGEOS, or Laser Geodynamics Satellites, are a series of scientific research satellites designed to provide an orbiting laser ranging benchmark for geodynamical studies of the Earth. There are two LAGEOS spacecraft: LAGEOS-1 (launched in 1976), and LAGEOS-2 (launched in 1992). They use cube-corner retroreflectors made of fused silica glass. As of 2020, both LAGEOS spacecraft are still in service. Three STARSHINE satellites equipped with retroreflectors were launched beginning in 1999. The LARES satellite was launched on February 13, 2012. (See also: List of laser ranging satellites.) Other satellites include retroreflectors for orbit calibration and orbit determination, such as in satellite navigation (e.g., all Galileo satellites, most GLONASS satellites, IRNSS satellites, BeiDou, QZSS, and two GPS satellites) as well as in satellite gravimetry (GOCE) satellite altimetry (e.g., TOPEX/Poseidon, Sentinel-3). Retroreflectors can also be used for inter-satellite laser ranging instead of ground-tracking (e.g., GRACE-FO). The BLITS (Ball Lens In The Space) spherical retroreflector satellite was placed into orbit as part of a September 2009 Soyuz launch by the Federal Space Agency of Russia with the assistance of the International Laser Ranging Service, an independent body originally organized by the International Association of Geodesy, the International Astronomical Union, and international committees. The ILRS central bureau is located at the United States' Goddard Space Flight Center. The reflector, a type of Luneburg lens, was developed and manufactured by the Institute for Precision Instrument Engineering (IPIE) in Moscow. 
The mission was interrupted in 2013 after a collision with space debris. Free-space optical communication Modulated retroreflectors, in which the reflectance is changed over time by some means, are the subject of research and development for free-space optical communications networks. The basic concept of such systems is that a low-power remote system, such as a sensor mote, can receive an optical signal from a base station and reflect the modulated signal back to the base station. Since the base station supplies the optical power, this allows the remote system to communicate without excessive power consumption. Modulated retroreflectors also exist in the form of modulated phase-conjugate mirrors (PCMs). In the latter case, a "time-reversed" wave is generated by the PCM with temporal encoding of the phase-conjugate wave (see, e.g., SciAm, Oct. 1990, "The Photorefractive Effect," David M. Pepper, et al.). Inexpensive corner-aiming retroreflectors are used in user-controlled technology as optical datalink devices. Aiming is done at night, and the necessary retroreflector area depends on aiming distance and ambient lighting from street lamps. The optical receiver itself behaves as a weak retroreflector because it contains a large, precisely focused lens that detects illuminated objects in its focal plane. This allows aiming without a retroreflector for short ranges. Other uses Retroreflectors are used in the following example applications: In common (non-SLR) digital cameras, the sensor system is often retroreflective. Researchers have used this property to demonstrate a system to prevent unauthorized photographs by detecting digital cameras and beaming a highly focused beam of light into the lens. In movie screens to allow for high brilliance under dark conditions. Digital compositing programs and chroma key environments use retroreflection to replace traditional lit backdrops in composite work as they provide a more solid color without requiring that the backdrop be lit separately. In Longpath-DOAS systems retroreflectors are used to reflect the light emitted from a lightsource back into a telescope. It is then spectrally analyzed to obtain information about the trace gas content of the air between the telescope and the retro reflector. Barcode labels can be printed on retroreflective material to increase the range of scanning up to 50 feet. In a form of 3D display; where a retro-reflective sheeting and a set of projectors is used to project stereoscopic images back to user's eye. The use of mobile projectors and positional tracking mounted on user's spectacles frame allows the illusion of a hologram to be created for computer generated imagery. Flashlight fish of the family Anomalopidae have natural retroreflectors. See tapetum lucidum. History Many prey and predator animals have naturally retroreflective eyes by having a reflective layer called the Tapetum lucidum behind the retina, since this doubles the light that their retina receives. Inspired by the natural world, the inventor of road 'cat's eyes' was Percy Shaw of Boothtown, Halifax, West Yorkshire, England. When the tram-lines were removed in the nearby suburb of Ambler Thorn, he realised that he had been using the polished steel rails to navigate at night. The name "cat's eye" comes from Shaw's inspiration for the device: the eyeshine reflecting from the eyes of a cat. In 1934, he patented his invention (patents Nos. 
436,290 and 457,536), and on 15 March 1935, founded Reflecting Roadstuds Limited in Halifax to manufacture the items. The name Catseye is their trademark. The retroreflecting lens had been invented six years earlier for use in advertising signs by Richard Hollins Murray, an accountant from Herefordshire and, as Shaw acknowledged, they had contributed to his idea. See also Corner reflector Free-space optical communication GPS Block III satellite improvements Heiligenschein High-visibility clothing Optical square Opposition surge Modulating retro-reflector Reflective prisms Retroreflective sheeting and tape Safety reflector Notes References Optics Letters, Vol. 4, pp. 190–192 (1979), "Retroreflective Arrays as Approximate Phase Conjugators," by H.H. Barrett and S.F. Jacobs. Optical Engineering, Vol. 21, pp. 281–283 (March/April 1982), "Experiments with Retrodirective Arrays," by Stephen F. Jacobs. Scientific American, December 1985, "Phase Conjugation," by Vladimir Shkunov and Boris Zel'dovich. Scientific American, January 1986, "Applications of Optical Phase Conjugation," by David M. Pepper. Scientific American, April 1986, "The Amateur Scientist" ('Wonders with the Retroreflector'), by Jearl Walker. Scientific American, October 1990, "The Photorefractive Effect," by David M. Pepper, Jack Feinberg, and Nicolai V. Kukhtarev. External links Apollo 15 Laser Ranging Retroreflector Experiment Manual of Traffic Signs - Retroreflective Sheetings Used for Sign Faces Motorcycle retroreflective Sheeting Lunar retroflectors Howstuffworks article on retroreflector-based invisibility cloaks Reflective Traffic Sign Laws Optical components
Retroreflector
[ "Materials_science", "Technology", "Engineering" ]
4,711
[ "Glass engineering and science", "Optical components", "Components" ]
59,863
https://en.wikipedia.org/wiki/Correspondence%20principle
In physics, a correspondence principle is any one of several premises or assertions about the relationship between classical and quantum mechanics. The physicist Niels Bohr coined the term in 1920 during the early development of quantum theory; he used it to explain how quantized classical orbitals connect to quantum radiation. Modern sources often use the term for the idea that the behavior of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers: for large orbits and for large energies, quantum calculations must agree with classical calculations. A "generalized" correspondence principle refers to the requirement for a broad set of connections between any old and new theory. History Max Planck was the first to introduce the idea of quanta of energy, while studying black-body radiation in 1900. In 1906, he was also the first to write that quantum theory should replicate classical mechanics at some limit, particularly if the Planck constant h were taken to be infinitesimal. With this idea, he showed that Planck's law for thermal radiation leads to the Rayleigh–Jeans law, the classical prediction (valid for large wavelength). Niels Bohr used a similar idea, while developing his model of the atom. In 1913, he provided the first postulates of what is now known as old quantum theory. Using these postulates he obtained that for the hydrogen atom, the energy spectrum approaches the classical continuum for large n (a quantum number that encodes the energy of the orbit). Bohr coined the term "correspondence principle" during a lecture in 1920. Arnold Sommerfeld refined Bohr's theory leading to the Bohr-Sommerfeld quantization condition. Sommerfeld referred to the correspondence principle as Bohr's magic wand (), in 1921. Bohr's correspondence principle The seeds of Bohr's correspondence principle appeared from two sources. First Sommerfeld and Max Born developed a "quantization procedure" based on the action angle variables of classical Hamiltonian mechanics. This gave a mathematical foundation for stationary states of the Bohr-Sommerfeld model of the atom. The second seed was Albert Einstein's quantum derivation of Planck's law in 1916. Einstein developed the statistical mechanics for Bohr-model atoms interacting with electromagnetic radiation, leading to absorption and two kinds of emission, spontaneous and stimulated emission. But for Bohr the important result was the use of classical analogies and the Bohr atomic model to fix inconsistencies in Planck's derivation of the blackbody radiation formula. Bohr used the word "correspondence" in italics in lectures and writing before calling it a correspondence principle. He viewed this as a correspondence between quantum motion and radiation, not between classical and quantum theories. He writes in 1920 that there exists "a far-reaching correspondence between the various types of possible transitions between the stationary states on the one hand and the various harmonic components of the motion on the other hand." Bohr's first article containing the definition of the correspondence principle was in 1923, in a summary paper entitled (in the English translation) "On the application of quantum theory to atomic structure". In his chapter II, "The process of radiation", he defines his correspondence principle as a condition connecting harmonic components of the electron moment to the possible occurrence of a radiative transition. 
In modern terms, this condition is a selection rule, saying that a given quantum jump is possible if and only if a particular type of motion exists in the corresponding classical model. Following his definition of the correspondence principle, Bohr describes two applications. First, he shows that the frequency of emitted radiation is related to an integral which can be well approximated by a sum when the quantum numbers inside the integral are large compared with their differences. Similarly, he shows a relationship for the intensities of spectral lines and thus the rates at which quantum jumps occur. These asymptotic relationships are expressed by Bohr as consequences of his general correspondence principle. However, historically each of these applications has been called "the correspondence principle". The PhD dissertation of Hans Kramers, working in Bohr's group in Copenhagen, applied Bohr's correspondence principle to account for all of the known facts of the spectroscopic Stark effect, including some spectral components not known at the time of Kramers' work. Sommerfeld had been skeptical of the correspondence principle as it did not seem to be a consequence of a fundamental theory; Kramers' work convinced him that the principle had heuristic utility nevertheless. Other physicists picked up the concept, including work by John Van Vleck and by Kramers and Heisenberg on dispersion theory. The principle became a cornerstone of the semi-classical Bohr-Sommerfeld atomic theory; Bohr's 1922 Nobel Prize was partly awarded for his work with the correspondence principle. Despite the successes, the physical theories based on the principle faced increasing challenges in the early 1920s. Theoretical calculations by Van Vleck and by Kramers of the ionization potential of helium disagreed significantly with experimental values. Bohr, Kramers, and John C. Slater responded with a new theoretical approach, now called the BKS theory, based on the correspondence principle but disavowing conservation of energy. Einstein and Wolfgang Pauli criticized the new approach, and the Bothe–Geiger coincidence experiment showed that energy was conserved in quantum collisions. With the existing theories in conflict with observations, two new formulations of quantum mechanics arose. First, Heisenberg's 1925 Umdeutung paper on matrix mechanics was inspired by the correspondence principle, although he did not cite Bohr. Further development in collaboration with Pascual Jordan and Max Born resulted in a mathematical model without connection to the principle. Second, Schrödinger's wave mechanics in the following year similarly did not use the principle. Both pictures were later shown to be equivalent and accurate enough to replace old quantum theory. These approaches have no atomic orbits: the correspondence is more of an analogy than a principle. Dirac's correspondence Paul Dirac developed significant portions of the new quantum theory in the second half of the 1920s. While he did not apply Bohr's correspondence principle, he developed a different, more formal classical–quantum correspondence. Dirac connected the structures of classical mechanics known as Poisson brackets to analogous structures of quantum mechanics known as commutators. By this correspondence, now called canonical quantization, Dirac showed how the mathematical form of classical mechanics could be recast as a basis for the new mathematics of quantum mechanics.
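In conventional modern notation, the correspondence Dirac identified between the classical Poisson bracket and the quantum commutator is usually written as follows (a standard textbook form, not a quotation from Dirac):

```latex
\{A, B\}_{\text{classical}} \;\longleftrightarrow\; \frac{1}{i\hbar}\,[\hat{A}, \hat{B}]
```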
Dirac developed these connections by studying the work of Heisenberg and Kramers on dispersion, work that was directly built on Bohr's correspondence principle; the Dirac approach provides a mathematically sound path towards Bohr's goal of a connection between classical and quantum mechanics. While Dirac did not call this correspondence a "principle", physics textbooks refer to his connections as a "correspondence principle". The classical limit of wave mechanics The outstanding success of classical mechanics in the description of natural phenomena up to the 20th century means that quantum mechanics must do as well in similar circumstances. One way to quantitatively define this concept is to require quantum mechanical theories to produce classical mechanics results as the quantum of action goes to zero. This transition can be accomplished in two different ways. First, the particle can be approximated by a wave packet, and the indefinite spread of the packet with time can be ignored. In 1927, Paul Ehrenfest proved his namesake theorem, which showed that Newton's laws of motion hold on average in quantum mechanics: the quantum statistical expectation values of position and momentum obey Newton's laws. Second, the individual particle view can be replaced with a statistical mixture of classical particles with a density matching the quantum probability density. This approach led to the concept of semiclassical physics, beginning with the development of the WKB approximation, used for example in descriptions of quantum tunneling. Modern view While Bohr viewed "correspondence" as a principle aiding his description of quantum phenomena, fundamental differences between the mathematical structure of quantum and of classical mechanics prevent correspondence in many cases. Rather than a principle, "there may be in some situations an approximate correspondence between classical and quantum concepts," as physicist Asher Peres put it. Since quantum mechanics operates in a discrete space and classical mechanics in a continuous one, any correspondence will be necessarily fuzzy and elusive. Introductory quantum mechanics textbooks suggest that quantum mechanics goes over to classical theory in the limit of high quantum numbers or in a limit where the Planck constant in the quantum formula is reduced to zero. However, such correspondence is not always possible. For example, classical systems can exhibit chaotic orbits which diverge, but quantum time evolution is unitary and preserves the overlap between states. Generalized correspondence principle The term "generalized correspondence principle" has been used in the study of the history of science to mean the reduction of a new scientific theory to an earlier scientific theory in appropriate circumstances. This requires that the new theory explain all the phenomena under circumstances for which the preceding theory was known to be valid; it also means that the new theory will retain large parts of the older theory. The generalized principle applies correspondence across aspects of a complete theory, not just a single formula as in the classical limit correspondence. For example, Albert Einstein in his 1905 work on relativity noted that classical mechanics relied on Galilean relativity while electromagnetism did not, and yet both work well. He produced a new theory that combined them in a way that reduced to these separate theories in approximations.
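As a small worked example of the kind of reduction described above, the relativistic kinetic energy expands into the Newtonian expression when the speed is small compared with the speed of light. The following sketch uses SymPy; the expansion order is an arbitrary choice:

```python
from sympy import symbols, sqrt, series

m, v, c = symbols('m v c', positive=True)

# Relativistic kinetic energy: (gamma - 1) * m * c**2
kinetic = m * c**2 * (1 / sqrt(1 - v**2 / c**2) - 1)

# For v << c the leading term is the Newtonian m*v**2/2, with small corrections.
print(series(kinetic, v, 0, 6))
# -> m*v**2/2 + 3*m*v**4/(8*c**2) + O(v**6)
```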
Ironically the singular failure of this "generalized correspondence principle" concept of scientific theories is the replacement of classical mechanics with quantum mechanics. See also Quantum decoherence Classical limit Classical probability density Leggett–Garg inequality References Quantum mechanics Theory of relativity Philosophy of physics Principles Metatheory
Correspondence principle
[ "Physics" ]
1,970
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Theoretical physics", "Quantum mechanics", "Theory of relativity" ]
59,874
https://en.wikipedia.org/wiki/Schr%C3%B6dinger%20equation
The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system. Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, an Austrian physicist, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933. Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations. The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics". The Klein-Gordon equation is a wave equation which is the relativistic version of the Schrödinger equation. The Schrödinger equation is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space & time are not on equal footing. Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. The second-derivative PDE of the Klein-Gordon equation led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square-root of the Klein-Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein-Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles. Definition Preliminaries Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension: Here, is a wave function, a function that assigns a complex number to each point at each time . The parameter is the mass of the particle, and is the potential that represents the environment in which the particle exists. The constant is the imaginary unit, and is the reduced Planck constant, which has units of action (energy multiplied by time). Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector belonging to a separable complex Hilbert space . 
This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys . The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable functions , while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space with the usual inner product. Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue is non-degenerate and the probability is given by , where is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by , where is the projector onto its associated eigenspace. A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates, composed of elements outside the Hilbert space, as "generalized eigenvectors". These are used for calculational convenience and do not represent physical states. Thus, a position-space wave function as used above can be written as the inner product of a time-dependent state vector with unphysical but convenient "position eigenstates" : Time-dependent equation The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time: where is time, is the state vector of the quantum system ( being the Greek letter psi), and is an observable, the Hamiltonian operator. The term "Schrödinger equation" can refer to both the general equation, or the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory). To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function. For example, given a wave function in position space as above, we have Time-independent equation The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. 
These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation. where is the energy of the system. This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) . Properties Linearity The Schrödinger equation is a linear differential equation, meaning that if two state vectors and are solutions, then so is any linear combination of the two state vectors where and are any complex numbers. Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. In this basis, a time-dependent state vector can be written as the linear combination where are complex numbers and the vectors are solutions of the time-independent equation . Unitarity Holding the Hamiltonian constant, the Schrödinger equation has the solution The operator is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is , then the state at a later time will be given by for some unitary operator . Conversely, suppose that is a continuous family of unitary operators parameterized by . Without loss of generality, the parameterization can be chosen so that is the identity operator and that for any . Then depends upon the parameter in such a way that for some self-adjoint operator , called the generator of the family . A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units). To see that the generator is Hermitian, note that with , we have so is unitary only if, to first order, its derivative is Hermitian. Changes of basis The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle. 
The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term: Writing for a three-dimensional position vector and for a three-dimensional momentum vector, the position-space Schrödinger equation is The momentum-space counterpart involves the Fourier transforms of the wave function and the potential: The functions and are derived from by where and do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space. When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables and are promoted to self-adjoint operators and that satisfy the canonical commutation relation This implies that so the action of the momentum operator in the position-space representation is . Thus, becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian . The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform. In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples with for only discrete reciprocal lattice vectors . This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone. Probability current The Schrödinger equation is consistent with local probability conservation. It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. In contrast to, for example, the Klein Gordon equation, although a redefined inner product of a wavefunction can be time independent, the total volume integral of modulus square of the wavefunction need not be time independent. The continuity equation for probability in non relativistic quantum mechanics is stated as: where is the probability current or probability flux (flow per unit area). If the wavefunction is represented as where is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as:Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Although the term appears to play the role of velocity, it does not represent velocity at a point since simultaneous measurement of position and velocity violates uncertainty principle. Separation of variables If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads: The operator on the left side depends only on time; the one on the right side depends only on space. Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts where is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and is a function of time only. 
Substituting this expression for into the time dependent left hand side shows that is a phase factor: A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule. The spatial part of the full wave function solves: where the energy appears in the phase factor. This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix. Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated, or radial and angular coordinates might be separated: Examples Particle in a box The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside. For the one-dimensional case in the direction, the time-independent Schrödinger equation may be written With the differential operator defined by the previous equation is evocative of the classic kinetic energy analogue, with state in this case having energy coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are or, from Euler's formula, The infinite potential walls of the box determine the values of and at and where must be zero. Thus, at , and . At , in which cannot be zero as this would conflict with the postulate that has norm 1. Therefore, since , must be an integer multiple of , This constraint on implies a constraint on the energy levels, yielding A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Harmonic oscillator The Schrödinger equation for this situation is where is the displacement and the angular frequency. 
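As a minimal numerical sketch of this oscillator (in natural units with ℏ = m = ω = 1; the interval and grid size are arbitrary choices), diagonalizing a finite-difference Hamiltonian shows the evenly spaced bound-state energies that the analytic solution given below provides in closed form:

```python
import numpy as np

# Natural units: hbar = m = omega = 1. Discretize a large interval so that the
# low-lying bound states are unaffected by the artificial boundaries.
N, x_max = 1000, 10.0
x = np.linspace(-x_max, x_max, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) x^2, with a second-order finite-difference Laplacian.
kinetic = (np.diag(np.full(N, 1.0 / dx**2))
           + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
           + np.diag(np.full(N - 1, -0.5 / dx**2), -1))
potential = np.diag(0.5 * x**2)

energies = np.linalg.eigvalsh(kinetic + potential)[:4]
print(np.round(energies, 3))  # approximately [0.5, 1.5, 2.5, 3.5]: evenly spaced levels
```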
Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics. The solutions in position space are where , and the functions are the Hermite polynomials of order . The solution set may be generated by The eigenvalues are The case is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian. The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized. Hydrogen atom The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is where is the electron charge, is the position of the electron relative to the nucleus, is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein is the permittivity of free space and is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass and the electron of mass . The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass. The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus, where are radial functions and are spherical harmonics of degree and order . This is the only atom for which the Schrödinger equation has been solved for exactly. Multi-electron atoms require approximate methods. The family of solutions are: where is the Bohr radius, are the generalized Laguerre polynomials of degree , are the principal, azimuthal, and magnetic quantum numbers respectively, which take the values Approximate solutions It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. Accordingly, approximate solutions are obtained using techniques like variational methods and WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory. Semiclassical limit One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential , the Ehrenfest theorem says Although the first of these equations is consistent with the classical behavior, the second is not: If the pair were to satisfy Newton's second law, the right-hand side of the second equation would have to be which is typically not the same as . For a general , therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. 
In the case of the quantum harmonic oscillator, however, is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories. For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point , then and will be almost the same, since both will be approximately equal to . In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position. The Schrödinger equation in its general form is closely related to the Hamilton–Jacobi equation (HJE) where is the classical action and is the Hamiltonian function (not operator). Here the generalized coordinates for (used in the context of the HJE) can be set to the position in Cartesian coordinates as . Substituting where is the probability density, into the Schrödinger equation and then taking the limit in the resulting equation yield the Hamilton–Jacobi equation. Density matrices Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead. A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written The density-matrix analogue of the Schrödinger equation for wave functions is where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices. If the Hamiltonian is time-independent, this equation can be easily solved to yield More generally, if the unitary operator describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by Unitary evolution of a density matrix conserves its von Neumann entropy. Relativistic quantum physics and quantum field theory The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. For one reason, it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method. 
Klein–Gordon and Dirac equations Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation, was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices . Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass and electric charge in an electromagnetic field (described by the electromagnetic potentials and ) is: in which the and are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all particles, and the solutions to the equation are spinor fields with two components corresponding to the particle and the other two for the antiparticle. For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass). In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin , are complex-valued spinor fields. Fock space As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways. 
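For reference, commonly quoted free-particle forms of the Klein–Gordon and Dirac equations discussed above are shown below; conventions vary between textbooks, and the Dirac equation is given in its Hamiltonian form, with the alpha and beta matrices built from the gamma matrices:

```latex
\frac{1}{c^{2}}\frac{\partial^{2}\psi}{\partial t^{2}} \;-\; \nabla^{2}\psi \;+\; \frac{m^{2}c^{2}}{\hbar^{2}}\,\psi \;=\; 0
\qquad \text{(Klein–Gordon)}

i\hbar\,\frac{\partial\psi}{\partial t} \;=\; \bigl(c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} \;+\; \beta\, m c^{2}\bigr)\,\psi
\qquad \text{(free Dirac equation)}
```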
History Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum of a photon is inversely proportional to its wavelength , or proportional to its wave number : where is the Planck constant and is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed. These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum according to According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit: This approach essentially confined the electron wave in one dimension, along a circular orbit of radius . In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the Physical Review, according to Kamen. Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action. The equation he found is By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units): He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925. While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl) Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926. 
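As a small numerical check of the hydrogen energies referred to above, the Bohr-model levels, which the Schrödinger solution reproduces, can be computed from standard constants. The sketch below uses the electron mass rather than the reduced mass, so the values are approximate at roughly the 0.05% level:

```python
# Standard CODATA values in SI units
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h = 6.62607015e-34        # Planck constant, J*s

def hydrogen_level_eV(n: int) -> float:
    """Bound-state energy E_n = -m e^4 / (8 eps0^2 h^2 n^2), converted from joules to eV."""
    return -m_e * e**4 / (8.0 * eps0**2 * h**2 * n**2) / e

for n in (1, 2, 3):
    print(n, round(hydrogen_level_eV(n), 3))
# approximately -13.6, -3.4 and -1.5 eV, matching the spacing of the hydrogen spectrum
```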
Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave , moving in a potential well , created by the proton. This computation accurately reproduced the energy levels of the Bohr model. The Schrödinger equation details the behavior of but says nothing of its nature. Schrödinger tried to interpret the real part of as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted as the probability amplitude, whose modulus squared is equal to probability density. Later, Schrödinger himself explained this interpretation as follows: Interpretation The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts. In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort. Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful. 
Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation. See also Eckhaus equation Fokker–Planck equation Interpretations of quantum mechanics List of things named after Erwin Schrödinger Logarithmic Schrödinger equation Nonlinear Schrödinger equation Pauli equation Quantum channel Relation between Schrödinger's equation and the path integral formulation of quantum mechanics Schrödinger picture Wigner quasiprobability distribution Notes References External links Quantum Cook Book (PDF) and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware The Modern Revolution in Physics – an online textbook. Quantum Physics I at MIT OpenCourseWare Partial differential equations Wave mechanics Functions of space and time
Schrödinger equation
[ "Physics" ]
7,091
[ "Physical phenomena", "Equations of physics", "Functions of space and time", "Eponymous equations of physics", "Classical mechanics", "Quantum mechanics", "Waves", "Wave mechanics", "Schrödinger equation", "Spacetime" ]
59,877
https://en.wikipedia.org/wiki/Gas%20constant
The molar gas constant (also known as the gas constant, universal gas constant, or ideal gas constant) is denoted by the symbol or . It is the molar equivalent to the Boltzmann constant, expressed in units of energy per temperature increment per amount of substance, rather than energy per temperature increment per particle. The constant is also a combination of the constants from Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. It is a physical constant that is featured in many fundamental equations in the physical sciences, such as the ideal gas law, the Arrhenius equation, and the Nernst equation. The gas constant is the constant of proportionality that relates the energy scale in physics to the temperature scale and the scale used for amount of substance. Thus, the value of the gas constant ultimately derives from historical decisions and accidents in the setting of units of energy, temperature and amount of substance. The Boltzmann constant and the Avogadro constant were similarly determined, which separately relate energy to temperature and particle count to amount of substance. The gas constant R is defined as the Avogadro constant NA multiplied by the Boltzmann constant k (or kB): = × = Since the 2019 revision of the SI, both NA and k are defined with exact numerical values when expressed in SI units. As a consequence, the SI value of the molar gas constant is exact. Some have suggested that it might be appropriate to name the symbol R the Regnault constant in honour of the French chemist Henri Victor Regnault, whose accurate experimental data were used to calculate the early value of the constant. However, the origin of the letter R to represent the constant is elusive. The universal gas constant was apparently introduced independently by Clausius' student, A.F. Horstmann (1873) and Dmitri Mendeleev who reported it first on 12 September 1874. Using his extensive measurements of the properties of gases, Mendeleev also calculated it with high precision, within 0.3% of its modern value. The gas constant occurs in the ideal gas law: where P is the absolute pressure, V is the volume of gas, n is the amount of substance, m is the mass, and T is the thermodynamic temperature. Rspecific is the mass-specific gas constant. The gas constant is expressed in the same unit as molar heat. Dimensions From the ideal gas law PV = nRT we get: where P is pressure, V is volume, n is number of moles of a given substance, and T is temperature. As pressure is defined as force per area of measurement, the gas equation can also be written as: Area and volume are (length)2 and (length)3 respectively. Therefore: Since force × length = work: The physical significance of R is work per mole per degree. It may be expressed in any set of units representing work or energy (such as joules), units representing degrees of temperature on an absolute scale (such as kelvin or rankine), and any system of units designating a mole or a similar pure number that allows an equation of macroscopic mass and fundamental particle numbers in a system, such as an ideal gas (see Avogadro constant). Instead of a mole the constant can be expressed by considering the normal cubic metre. Otherwise, we can also say that: Therefore, we can write R as: And so, in terms of SI base units: R = . 
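Because both defining constants have exact values in the revised SI, the numerical value of R elided in the text above can be reproduced directly; a minimal sketch in Python:

# Exact values fixed by the 2019 SI redefinition
N_A = 6.02214076e23    # Avogadro constant, mol^-1
k_B = 1.380649e-23     # Boltzmann constant, J/K

R = N_A * k_B          # molar gas constant, J/(mol*K)
print(R)               # 8.31446261815324

The product is exact by definition, which is why the molar gas constant no longer carries a measurement uncertainty.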
Relationship with the Boltzmann constant The Boltzmann constant kB (alternatively k) may be used in place of the molar gas constant by working in pure particle count, N, rather than amount of substance, n, since: where NA is the Avogadro constant. For example, the ideal gas law in terms of the Boltzmann constant is: where N is the number of particles (molecules in this case), or to generalize to an inhomogeneous system the local form holds: where ρN = N/V is the number density. Measurement and replacement with defined value As of 2006, the most precise measurement of R had been obtained by measuring the speed of sound ca(P, T) in argon at the temperature T of the triple point of water at different pressures P, and extrapolating to the zero-pressure limit ca(0, T). The value of R is then obtained from the relation: where: γ0 is the heat capacity ratio ( for monatomic gases such as argon); T is the temperature, TTPW = 273.16 K by the definition of the kelvin at that time; Ar(Ar) is the relative atomic mass of argon and Mu =  as defined at the time. However, following the 2019 revision of the SI, R now has an exact value defined in terms of other exactly defined physical constants. Specific gas constant The specific gas constant of a gas or a mixture of gases (Rspecific) is given by the molar gas constant divided by the molar mass (M) of the gas or mixture: Just as the molar gas constant can be related to the Boltzmann constant, so can the specific gas constant by dividing the Boltzmann constant by the molecular mass of the gas: Another important relationship comes from thermodynamics. Mayer's relation relates the specific gas constant to the specific heat capacities for a calorically perfect gas and a thermally perfect gas: where cp is the specific heat capacity for a constant pressure and cv is the specific heat capacity for a constant volume. It is common, especially in engineering applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol such as to distinguish it. In any case, the context and/or unit of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to. In case of air, using the perfect gas law and the standard sea-level conditions (SSL) (air density ρ0 = 1.225 kg/m3, temperature T0 = 288.15 K and pressure p0 = ), we have that Rair = P0/(ρ0T0) = . Then the molar mass of air is computed by M0 = R/Rair = . U.S. Standard Atmosphere The U.S. Standard Atmosphere, 1976 (USSA1976) defines the gas constant R∗ as: R∗ = = . Note the use of the kilomole, with the resulting factor of in the constant. The USSA1976 acknowledges that this value is not consistent with the cited values for the Avogadro constant and the Boltzmann constant. This disparity is not a significant departure from accuracy, and USSA1976 uses this value of R∗ for all the calculations of the standard atmosphere. When using the ISO value of R, the calculated pressure increases by only 0.62 pascal at 11 kilometres (the equivalent of a difference of only 17.4 centimetres or 6.8 inches) and 0.292 Pa at 20 km (the equivalent of a difference of only 33.8 cm or 13.2 in). Also note that this was well before the 2019 SI redefinition, through which the constant was given an exact value. References External links Ideal gas calculator – Ideal gas calculator provides the correct information for the moles of gas involved. 
Individual Gas Constants and the Universal Gas Constant – Engineering Toolbox Ideal gas Physical constants Amount of substance Statistical mechanics Thermodynamics
Gas constant
[ "Physics", "Chemistry", "Mathematics" ]
1,571
[ "Thermodynamic systems", "Scalar physical quantities", "Physical quantities", "Wikipedia categories named after physical quantities", "Quantity", "Intensive quantities", "Chemical quantities", "Amount of substance", "Physical systems", "Physical constants", "Thermodynamics", "Statistical mecha...
59,881
https://en.wikipedia.org/wiki/Ideal%20gas%20law
The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Benoît Paul Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written in an empirical form: where , and are the pressure, volume and temperature respectively; is the amount of substance; and is the ideal gas constant. It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 and Rudolf Clausius in 1857. Equation The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin. Common forms The most frequently introduced forms are:where: is the absolute pressure of the gas, is the volume of the gas, is the amount of substance of gas (also known as number of moles), is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant, is the Boltzmann constant, is the Avogadro constant, is the absolute temperature of the gas, is the number of particles (usually atoms or molecules) of the gas. In SI units, p is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0 K = −273.15 °C, the lowest possible temperature). R has for value 8.314 J/(mol·K) = 1.989 ≈ 2 cal/(mol·K), or 0.0821 L⋅atm/(mol⋅K). Molar form How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount, n (in moles), is equal to total mass of the gas (m) (in kilograms) divided by the molar mass, M (in kilograms per mole): By replacing n with m/M and subsequently introducing density ρ = m/V, we get: Defining the specific gas constant Rspecific as the ratio R/M, This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as It is common, especially in engineering and meteorological applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol such as or to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being used. Statistical mechanics In statistical mechanics, the following molecular equation is derived from first principles where is the absolute pressure of the gas, is the number density of the molecules (given by the ratio , in contrast to the previous formulation in which is the number of moles), is the absolute temperature, and is the Boltzmann constant relating temperature and energy, given by: where is the Avogadro constant. 
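As a quick numerical illustration of the common form of the law stated above, the following sketch computes the volume of one mole of an ideal gas; the temperature and pressure used are illustrative choices (0 °C and 1 atm), not values taken from the text:

# V = n*R*T / p for one mole of an ideal gas
R = 8.314462618    # molar gas constant, J/(mol*K)
n = 1.0            # amount of substance, mol
T = 273.15         # temperature, K (illustrative)
p = 101325.0       # pressure, Pa (illustrative)

V = n * R * T / p
print(V * 1000)    # about 22.41 litres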
From this we notice that for a gas of mass , with an average particle mass of times the atomic mass constant, , (i.e., the mass is Da) the number of molecules will be given by and since , we find that the ideal gas law can be rewritten as In SI units, is measured in pascals, in cubic metres, in kelvins, and in SI units. Combined gas law Combining the laws of Charles, Boyle and Gay-Lussac gives the combined gas law, which takes the same functional form as the ideal gas law except that the number of moles is unspecified, and the ratio of to is simply taken as a constant: where is the pressure of the gas, is the volume of the gas, is the absolute temperature of the gas, and is a constant. When comparing the same substance under two different sets of conditions, the law can be written as Energy associated with a gas According to the assumptions of the kinetic theory of ideal gases, one can consider that there are no intermolecular attractions between the molecules, or atoms, of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is the kinetic energy of the molecules, or atoms, of the gas. This corresponds to the kinetic energy of n moles of a monoatomic gas having 3 degrees of freedom: x, y, z. The table here below gives this relationship for different amounts of a monoatomic gas. Applications to thermodynamic processes The table below essentially simplifies the ideal gas equation for a particular process, making the equation easier to solve using numerical methods. A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by a subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, S, or H) is constant throughout the process. For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (which are listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation). In the final three columns, the properties (p, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed. a. In an isentropic process, system entropy (S) is constant. Under these conditions, p1V1γ = p2V2γ, where γ is defined as the heat capacity ratio, which is constant for a calorically perfect gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2) (and air, which is 99% diatomic). Also γ is typically 1.6 for monoatomic gases like the noble gases helium (He), and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on the constituent gases and temperature. b. In an isenthalpic process, system enthalpy (H) is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant. For real gases, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule–Thomson effect. For reference, the Joule–Thomson coefficient μJT for air at room temperature and sea level is 0.22 °C/bar. 
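Footnote a above can be made concrete with a small numerical sketch; the initial state below is an arbitrary assumption, chosen only to show how the isentropic relation fixes state 2:

# Isentropic compression of a diatomic ideal gas (gamma = 1.4, as noted above)
gamma = 1.4
p1, V1 = 100000.0, 1.0    # Pa, m^3 (assumed initial state)
V2 = 0.5                  # m^3, compressed to half the initial volume

p2 = p1 * (V1 / V2) ** gamma        # from p1 * V1**gamma = p2 * V2**gamma
T2_over_T1 = (p2 * V2) / (p1 * V1)  # ideal gas law with n held constant
print(p2)            # about 263900 Pa
print(T2_over_T1)    # about 1.32, i.e. the gas heats up on compression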
Deviations from ideal behavior of real gases The equation of state given here (PV = nRT) applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces. Derivations Empirical The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only 2 state variables of the gas and kept every other one constant. All the possible gas laws that could have been discovered with this kind of setup are: Boyle's law () Charles's law () Avogadro's law () Gay-Lussac's law () where P stands for pressure, V for volume, N for number of particles in the gas and T for temperature; where are constants in this context because of each equation requiring only the parameters explicitly noted in them changing. To derive the ideal gas law one does not need to know all 6 formulas, one can just know 3 and with those derive the rest or just one more to be able to get the ideal gas law, which needs 4. Since each formula only holds when only the state variables involved in said formula change while the others (which are a property of the gas but are not explicitly noted in said formula) remain constant, we cannot simply use algebra and directly combine them all. This is why: Boyle did his experiments while keeping N and T constant and this must be taken into account (in this same way, every experiment kept some parameter as constant and this must be taken into account for the derivation). Keeping this in mind, to carry the derivation on correctly, one must imagine the gas being altered by one process at a time (as it was done in the experiments). The derivation using 4 formulas can look like this: at first the gas has parameters Say, starting to change only pressure and volume, according to Boyle's law (), then: After this process, the gas has parameters Using then equation () to change the number of particles in the gas and the temperature, After this process, the gas has parameters Using then equation () to change the pressure and the number of particles, After this process, the gas has parameters Using then Charles's law (equation 2) to change the volume and temperature of the gas, After this process, the gas has parameters Using simple algebra on equations (), (), () and () yields the result: or where stands for the Boltzmann constant. Another equivalent result, using the fact that , where n is the number of moles in the gas and R is the universal gas constant, is: which is known as the ideal gas law. If three of the six equations are known, it may be possible to derive the remaining three using the same method. However, because each formula has two variables, this is possible only for certain groups of three. 
For example, if you were to have equations (), () and () you would not be able to get any more because combining any two of them will only give you the third. However, if you had equations (), () and () you would be able to get all six equations because combining () and () will yield (), then () and () will yield (), then () and () will yield (), as well as would the combination of () and () as is explained in the following visual relation: where the numbers represent the gas laws numbered above. If you were to use the same method used above on 2 of the 3 laws on the vertices of one triangle that has a "O" inside it, you would get the third. For example: Change only pressure and volume first: then only volume and temperature: then as we can choose any value for , if we set , equation () becomes: combining equations () and () yields , which is equation (), of which we had no prior knowledge until this derivation. Theoretical Kinetic theory The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved. First we show that the fundamental assumptions of the kinetic theory of gases imply that Consider a container in the Cartesian coordinate system. For simplicity, we assume that a third of the molecules moves parallel to the -axis, a third moves parallel to the -axis and a third moves parallel to the -axis. If all molecules move with the same velocity , denote the corresponding pressure by . We choose an area on a wall of the container, perpendicular to the -axis. When time elapses, all molecules in the volume moving in the positive direction of the -axis will hit the area. There are molecules in a part of volume of the container, but only one sixth (i.e. a half of a third) of them moves in the positive direction of the -axis. Therefore, the number of molecules that will hit the area when the time elapses is . When a molecule bounces off the wall of the container, it changes its momentum to . Hence the magnitude of change of the momentum of one molecule is . The magnitude of the change of momentum of all molecules that bounce off the area when time elapses is then . From and we get We considered a situation where all molecules move with the same velocity . Now we consider a situation where they can move with different velocities, so we apply an "averaging transformation" to the above equation, effectively replacing by a new pressure and by the arithmetic mean of all squares of all velocities of the molecules, i.e. by Therefore which gives the desired formula. Using the Maxwell–Boltzmann distribution, the fraction of molecules that have a speed in the range to is , where and denotes the Boltzmann constant. The root-mean-square speed can be calculated by Using the integration formula it follows that from which we get the ideal gas law: Statistical mechanics Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then (two times) the time-averaged kinetic energy of the particle is: where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. 
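In the usual notation, the kinetic-theory argument sketched above (before the statistical-mechanics treatment that continues below) can be summarized as:

P = \frac{1}{3}\,\frac{N}{V}\, m\, \overline{v^2}, \qquad \tfrac{1}{2}\, m\, \overline{v^2} = \tfrac{3}{2}\, k_\mathrm{B} T \;\Longrightarrow\; PV = N k_\mathrm{B} T = nRT

where N is the number of molecules, m the molecular mass, and \overline{v^2} the mean square speed obtained from the Maxwell–Boltzmann distribution.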
Summing over a system of N particles yields By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is the divergence theorem implies that where dV is an infinitesimal volume within the container and V is the total volume of the container. Putting these equalities together yields which immediately implies the ideal gas law for N particles: where n = N/NA is the number of moles of gas and R = NAkB is the gas constant. Other dimensions For a d-dimensional system, the ideal gas pressure is: where is the volume of the d-dimensional domain in which the gas exists. The dimensions of the pressure changes with dimensionality. See also Gas laws References Further reading External links Configuration integral (statistical mechanics) where an alternative statistical mechanics derivation of the ideal-gas law, using the relationship between the Helmholtz free energy and the partition function, but without using the equipartition theorem, is provided. Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. this wiki site is down; see this article in the web archive on 2012 April 28. Gas equations in detail Gas laws Ideal gas Equations of state 1834 introductions
Ideal gas law
[ "Physics", "Chemistry" ]
3,343
[ "Thermodynamic systems", "Equations of physics", "Physical systems", "Gas laws", "Statistical mechanics", "Equations of state", "Ideal gas" ]
59,886
https://en.wikipedia.org/wiki/Blast%20beat
A blast beat is a type of drum beat that originated in hardcore punk and grindcore, and is often associated with certain styles of extreme metal, namely black metal and death metal, and occasionally in metalcore. In Adam MacGregor's definition, "the blast-beat generally comprises a repeated, sixteenth-note figure played at a very fast tempo, and divided uniformly among the bass drum, snare, and ride, crash, or hi-hat cymbal." Blast beats have been described by PopMatters contributor Whitney Strub as, "maniacal percussive explosions, less about rhythm per se than sheer sonic violence". Napalm Death is said to have coined the term, though this style of drumming had previously been used by others for its characteristically chaotic sound. History Antecedents in jazz and rock Although most commonly associated with hardcore punk and extreme metal, the earliest forms of what would later become the blast beat are noted to have appeared in jazz music. A commonly cited early example that somewhat resembles the modern technique is a brief section of Sam Woodyard's drum solo during a 1962 rendition of "Kinda Dukish" with the Duke Ellington orchestra. A clip of the performance under the title "The first blast beat in the world" garnered almost one million views on YouTube. Woodyard's example, however, lacks the modern inclusion of kick drum and cymbal work into the beat. Another early instance can be heard in Sunny Murray's 1966 or '67 performance on a live recording "Holy Ghost" with saxophonist Albert Ayler, although this did not receive an official release until the 1998 reissue of Albert Ayler in Greenwich Village. Prior to these two examples resurfacing and receiving the attention in the 2010s, AllMusic contributor Thom Jurek credited Tony Williams as the "true inventor of the blastbeat" for his frenetic performance on "Dark Prince" for Trio of Doom in 1979, officially released only in 2007. Some early antecedents of blast beats have also been identified in rock music. An early example of a proto-blast beat can be found in the Tielman Brothers' 1959 single, "Rock Little Baby of Mine" during the instrumental break. Drummer Steve Ross of the band Coven also plays an "attempt" at a blast beat in the track "Dignitaries of Hell" off the group's 1969 album, Witchcraft Destroys Minds & Reaps Souls. Four early examples of blast beats were performed in 1970: King Crimson's "The Devil's Triangle" off their sophomore release In the Wake of Poseidon includes proto-blastbeats in the later half of the song; Mike Fouracre of Marsupilami performs many blast beats throughout their self-titled album, most notably on "And the Eagle Chased the Dove to Its Ruin"; Emerson, Lake & Palmer's track "The Barbarian" contains a very brief blast beat in the outro; Bill Ward, drummer of pioneering heavy metal band Black Sabbath, played a few blast beats on a live performance of their song "War Pigs" (e.g. at timestamps 3:52 and 6:38). Modern hardcore and metal blast beats The blast beat as it is known today originated in the hardcore punk and grindcore scenes of the 1980s. Contrary to popular belief, blast beats originated from punk and hardcore music, not metal music. In the UK punk and hardcore scene of the early 1980s there were many bands attempting to play as fast as possible. English band Napalm Death coined the term "blast beat", although this style of drumming had previously been practiced by others. Daniel Ekeroth argues that the [hardcore] blast beat was first performed by the Swedish group Asocial on their 1982 demo. 
D.R.I. (1983, "No Sense"), Beastie Boys (1982, track 5, "Riot Fight"), Sepultura (1985, track 11, "Antichrist"), S.O.D. (1985, track 11, "Milk"), Sarcófago (1986, track 10, "Satanas"), and Repulsion also included the technique prior to Napalm Death's emergence. Rockdetector contributor Garry Sharpe-Young credits D.R.I.'s Eric Brecht as the first on their 1983 debut but credits Napalm Death with making it better known. In 1985, Napalm Death, then an emerging grindcore band, replaced their former drummer Miles "Rat" Ratledge with Mick Harris, who brought to the band a whole new level of speed. Harris is credited with developing the term "blast beat", describing the fast notes played on the kick and snare. Harris started using the blast beat as a fundamental aspect of Napalm Death's early musical compositions. It was finally with Napalm Death's first full-length album Scum (1987) that blast beat started to evolve into a distinct musical expression of its own. Blast beats became popular in extreme music from the mid to late 1980s . The blast beat evolved into its modern form as it was developed in the American death metal and grindcore scene of the late 1980s and early 1990s. Pete Sandoval, drummer of Terrorizer (1986–1989) and later Morbid Angel (1984–2013), purportedly was the first to use blast beats in metronomic time (and not as arhythmic or non-metric white noise) and thus gave it a more useful musical characteristic for timekeeping. Blast beats eventually appeared in commercially successful metal music, beginning with Fear Factory's album Demanufacture (1995) and Slipknot's album Iowa (2001). Characteristics A blast beat is traditionally played as an alternating single-stroke roll broken up between the kick drum and the snare drum. Blast beats are counted in 32nd or 16th notes. In a modern musical context blast beats are usually regarded as such when played at a minimum of above 90 beats per minute 32nd notes, or 180 bpm 16th notes. Early blast beats were generally quite slow and less precise compared to today's standards. Nowadays, a blast beat is normally played from 180 bpm 16th notes up to such high tempos as in the range of 250-280 bpm 16th notes (or even higher). There is also the "gravity blast", not to be confused with the one-handed gravity roll (see below). This technique uses the rim of the snare drum as a fulcrum, allowing two snare hits with one downward motion (essentially doing the work of two hands with only one). Typical blast beats consist of 8th-note patterns between both the bass and snare drum alternately, with the hi-hat or the ride synced. Variations exist such as displacing hi-hat/ride, snare and bass drum hits and/or using other cymbals such as splashes, crashes, chinas and even tambourines for accenting, for example when using odd time or playing progressively. While playing 8th or 8th note triplets some drummers choose to play in sync with one foot while others split the 8th notes between both feet. In blast beats in general, the notes on the kick drum can be played either with one foot only or by alternating both feet, referred to as a "two-foot" or "economy" blast. Variations As blast beats have evolved, different types and interpretations have emerged. There are four main variations of the blast beat: the traditional blast, the bomb blast, the hammer blast and the freehand blast. The traditional blast beat is a single-stroke roll alternating between the snare drum and kick drum. The ride hand is usually playing in unison with the kick drum. 
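The tempo figures quoted above can be translated into absolute note rates; a small illustrative calculation in Python (the tempos are those mentioned in the text):

# Sixteenth-note rate for a blast beat: bpm quarter notes per minute,
# four sixteenth notes per quarter note.
def sixteenths_per_second(bpm):
    return bpm / 60.0 * 4

for bpm in (180, 250, 280):
    print(bpm, round(sixteenths_per_second(bpm), 1))
# 180 bpm -> 12.0 notes per second; 250 -> 16.7; 280 -> 18.7

At 180 bpm in sixteenth notes the drummer is therefore striking twelve times per second, the same rate as 90 bpm counted in thirty-second notes.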
The traditional blast beat is structurally very similar to the skank beat, which can be regarded as a predecessor and a half time variation of the traditional blast beat. The skank beat originated in the early punk and thrash metal scene as a drum beat for extreme music. The skank beat is similar to the blast beat as it alternates between the kick and the snare, with the difference that the ride hand plays notes in unison with both kick and snare. A skank beat is in other words a sped up 2/4 rock or polka beat. In the US the skank beat was early on also referred to as the "Slayer" or "thrash" beat due to its popularity among thrash metal bands such as Slayer. The bomb blast is essentially a combination of blast beat and double bass drumming. When measured in 16th notes a bomb blast consists of 8th notes on the snare played above a 16th notes kick drum line. Most drummers play this beat by leading with the snare, while the traditional blast beat is usually led with the kick. The bomb blast became popular among 1990s death metal bands such as Cannibal Corpse, which is why the bomb blast is also referred to as the "Cannibal" blast. The hammer blast is played with the kick and snare in unison. Instead of playing 8th notes kick and snare in alternation and thus creating a 16th notes roll, the hammer blast is played as a straight 8th notes roll on the kick and snare simultaneously. The advantage of the hammer blast is that only one fast hand is needed, which usually is the drummer's leading hand (right for right-handed and left for left-handed). If the weaker hand can't keep up with the 8th notes snare line, it can play quarter notes. The kick drum line can be played with one foot as well as a two-footed economy blast. When played at an extremely fast tempo, the hammer blast can be referred to as a "hyper blast". The hammer blast became popular in death metal music of the early 1990s. The freehand blast, also known as the gravity blast, utilizes the gravity roll technique in a blast beat context. Of all the main blast beat variations, this one is the most recent to have emerged. The snare line is played as a 16th notes single stroke roll, also known as a gravity roll or single handed roll. The roll is played with an up and down motion in which you push and pull the drumstick on and off the snare drum. By using the snare rim as a fulcrum you create a stroke each time you push and pull the drumstick up and down. In this way, the player can double the output of notes to match the amount of notes produced by two feet on the bass drum. It usually presents similarly to a unison hammer blast, but at double the tempo of what would be possible with normal techniques. One drawback is that this blast has a limited volume. The concept behind the gravity roll is not new, but is noted for being brought into modern music by drummer Johhny Rabb. Rabb has published the book The Official Freehand Technique, which covers the gravity roll technique. The term "gravity roll" or "gravity blast," while common and accepted usage, is less correct than "freehand roll" or "fulcrum roll" in that the technique does not rely on gravity and can be played sideways, inverted, or in a zero gravity environment. A combination of the gravity blast and the bomb blast (i.e. both the kick and the snare is playing 16th notes in unison) is called a gravity bomb. 
Examples Examples of the four main blast beat variations in drum tab:
Example 1:
C- x-x-x-x-x-x-x-x-|
S- o-o-o-o-o-o-o-o-|
B- o-o-o-o-o-o-o-o-|
Example 2:
C- x-x-x-x-x-x-x-x-|
S- -o-o-o-o-o-o-o-o|
B- o-o-o-o-o-o-o-o-|
Example 3:
C- x-x-x-x-x-x-x-x-|
S- o-o-o-o-o-o-o-o-|
B- oooooooooooooooo|
Example 4:
C- x-x-x-x-x-x-x-x-|
S- oooooooooooooooo|
B- o-o-o-o-o-o-o-o-|
The first example is a hammer blast. The second example shows a traditional blast beat - essentially a skank beat played at a high tempo (this particular one leads with the bass drum, but the snare can lead as well). Example #3 shows a blast beat with double bass, known as a bomb blast. Example #4 illustrates a freehand blast, also known as a gravity blast and is the only one that showcases the proper speed of a modern blast beat. See also Freehand roll References External links Flo Mounier's Extreme Metal DVD Johnny Rabb's Home Page Drum rudiments Drum patterns Percussion performance techniques Heavy metal performance techniques Rhythm and meter
Blast beat
[ "Physics" ]
2,688
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
59,920
https://en.wikipedia.org/wiki/Alfred%20Tarski
Alfred Tarski (, born Alfred Teitelbaum; January 14, 1901 – October 26, 1983) was a Polish-American logician and mathematician. A prolific author best known for his work on model theory, metamathematics, and algebraic logic, he also contributed to abstract algebra, topology, geometry, measure theory, mathematical logic, set theory, and analytic philosophy. Educated in Poland at the University of Warsaw, and a member of the Lwów–Warsaw school of logic and the Warsaw school of mathematics, he immigrated to the United States in 1939 where he became a naturalized citizen in 1945. Tarski taught and carried out research in mathematics at the University of California, Berkeley, from 1942 until his death in 1983. His biographers Anita Burdman Feferman and Solomon Feferman state that, "Along with his contemporary, Kurt Gödel, he changed the face of logic in the twentieth century, especially through his work on the concept of truth and the theory of models." Life Early life and education Alfred Tarski was born Alfred Teitelbaum (Polish spelling: "Tajtelbaum"), to parents who were Polish Jews in comfortable circumstances. He first manifested his mathematical abilities while in secondary school, at Warsaw's Szkoła Mazowiecka. Nevertheless, he entered the University of Warsaw in 1918 intending to study biology. After Poland regained independence in 1918, Warsaw University came under the leadership of Jan Łukasiewicz, Stanisław Leśniewski and Wacław Sierpiński and quickly became a world-leading research institution in logic, foundational mathematics, and the philosophy of mathematics. Leśniewski recognized Tarski's potential as a mathematician and encouraged him to abandon biology. Henceforth Tarski attended courses taught by Łukasiewicz, Sierpiński, Stefan Mazurkiewicz and Tadeusz Kotarbiński, and in 1924 became the only person ever to complete a doctorate under Leśniewski's supervision. His thesis was entitled O wyrazie pierwotnym logistyki (On the Primitive Term of Logistic; published 1923). Tarski and Leśniewski soon grew cool to each other, mainly due to the latter's increasing anti-semitism. However, in later life, Tarski reserved his warmest praise for Kotarbiński, which was reciprocated. In 1923, Alfred Teitelbaum and his brother Wacław changed their surname to "Tarski". The Tarski brothers also converted to Roman Catholicism, Poland's dominant religion. Alfred did so even though he was an avowed atheist. Career After becoming the youngest person ever to complete a doctorate at Warsaw University, Tarski taught logic at the Polish Pedagogical Institute, mathematics and logic at the university, and served as Łukasiewicz's assistant. Because these positions were poorly paid, Tarski also taught mathematics at the Third Boys’ Gimnazjum of the Trade Union of Polish Secondary-School Teachers (later the Stefan Żeromski Gimnazjum), a Warsaw secondary school, beginning in 1925. Before World War II, it was not uncommon for European intellectuals of research caliber to teach high school. Hence until his departure for the United States in 1939, Tarski not only wrote several textbooks and many papers, a number of them ground-breaking, but also did so while supporting himself primarily by teaching high-school mathematics. In 1929 Tarski married fellow teacher Maria Witkowska, a Pole of Catholic background. She had worked as a courier for the army in the Polish–Soviet War. They had two children; a son Jan Tarski, who became a physicist, and a daughter Ina, who married the mathematician Andrzej Ehrenfeucht. 
Tarski applied for a chair of philosophy at Lwów University, but on Bertrand Russell's recommendation it was awarded to Leon Chwistek. In 1930, Tarski visited the University of Vienna, lectured to Karl Menger's colloquium, and met Kurt Gödel. Thanks to a fellowship, he was able to return to Vienna during the first half of 1935 to work with Menger's research group. From Vienna he traveled to Paris to present his ideas on truth at the first meeting of the Unity of Science movement, an outgrowth of the Vienna Circle. Tarski's academic career in Poland was strongly and repeatedly impacted by his heritage. For example, in 1937, Tarski applied for a chair at Poznań University but the chair was abolished to avoid assigning it to Tarski (who was undisputedly the strongest applicant) because he was a Jew. Tarski's ties to the Unity of Science movement likely saved his life, because they resulted in his being invited to address the Unity of Science Congress held in September 1939 at Harvard University. Thus he left Poland in August 1939, on the last ship to sail from Poland for the United States before the German and Soviet invasion of Poland and the outbreak of World War II. Tarski left reluctantly, because Leśniewski had died a few months before, creating a vacancy which Tarski hoped to fill. Oblivious to the Nazi threat, he left his wife and children in Warsaw. He did not see them again until 1946. During the war, nearly all his Jewish extended family were murdered at the hands of the German occupying authorities. Once in the United States, Tarski held a number of temporary teaching and research positions: Harvard University (1939), City College of New York (1940), and thanks to a Guggenheim Fellowship, the Institute for Advanced Study in Princeton (1942), where he again met Gödel. In 1942, Tarski joined the Mathematics Department at the University of California, Berkeley, where he spent the rest of his career. Tarski became an American citizen in 1945. Although emeritus from 1968, he taught until 1973 and supervised Ph.D. candidates until his death. At Berkeley, Tarski acquired a reputation as an astounding and demanding teacher, a fact noted by many observers: Tarski supervised twenty-four Ph.D. dissertations including (in chronological order) those of Andrzej Mostowski, Bjarni Jónsson, Julia Robinson, Robert Vaught, Solomon Feferman, Richard Montague, James Donald Monk, Haim Gaifman, Donald Pigozzi, and Roger Maddux, as well as Chen Chung Chang and Jerome Keisler, authors of Model Theory (1973), a classic text in the field. He also strongly influenced the dissertations of Adolf Lindenbaum, Dana Scott, and Steven Givant. Five of Tarski's students were women, a remarkable fact given that men represented an overwhelming majority of graduate students at the time. However, he had extra-marital affairs with at least two of these students. After he showed another of his female student's work to a male colleague, the colleague published it himself, leading her to leave the graduate study and later move to a different university and a different advisor. Tarski lectured at University College, London (1950, 1966), the Institut Henri Poincaré in Paris (1955), the Miller Institute for Basic Research in Science in Berkeley (1958–60), the University of California at Los Angeles (1967), and the Pontifical Catholic University of Chile (1974–75). 
Among many distinctions garnered over the course of his career, Tarski was elected to the United States National Academy of Sciences, the British Academy and the Royal Netherlands Academy of Arts and Sciences in 1958, received honorary degrees from the Pontifical Catholic University of Chile in 1975, from Marseilles' Paul Cézanne University in 1977 and from the University of Calgary, as well as the Berkeley Citation in 1981. Tarski presided over the Association for Symbolic Logic, 1944–46, and the International Union for the History and Philosophy of Science, 1956–57. He was also an honorary editor of Algebra Universalis. Work in mathematics Tarski's mathematical interests were exceptionally broad. His collected papers run to about 2,500 pages, most of them on mathematics, not logic. For a concise survey of Tarski's mathematical and logical accomplishments by his former student Solomon Feferman, see "Interludes I–VI" in Feferman and Feferman. Tarski's first paper, published when he was 19 years old, was on set theory, a subject to which he returned throughout his life. In 1924, he and Stefan Banach proved that, if one accepts the Axiom of Choice, a ball can be cut into a finite number of pieces, and then reassembled into a ball of larger size, or alternatively it can be reassembled into two balls whose sizes each equal that of the original one. This result is now called the Banach–Tarski paradox. In A decision method for elementary algebra and geometry, Tarski showed, by the method of quantifier elimination, that the first-order theory of the real numbers under addition and multiplication is decidable. (While this result appeared only in 1948, it dates back to 1930 and was mentioned in Tarski (1931).) This is a very curious result, because Alonzo Church proved in 1936 that Peano arithmetic (the theory of natural numbers) is not decidable. Peano arithmetic is also incomplete by Gödel's incompleteness theorem. In his 1953 Undecidable theories, Tarski et al. showed that many mathematical systems, including lattice theory, abstract projective geometry, and closure algebras, are all undecidable. The theory of Abelian groups is decidable, but that of non-Abelian groups is not. While teaching at the Stefan Żeromski Gimnazjum in the 1920s and 30s, Tarski often taught geometry. Using some ideas of Mario Pieri, in 1926 Tarski devised an original axiomatization for plane Euclidean geometry, one considerably more concise than Hilbert's. Tarski's axioms form a first-order theory devoid of set theory, whose individuals are points, and having only two primitive relations. In 1930, he proved this theory decidable because it can be mapped into another theory he had already proved decidable, namely his first-order theory of the real numbers. In 1929 he showed that much of Euclidean solid geometry could be recast as a second-order theory whose individuals are spheres (a primitive notion), a single primitive binary relation "is contained in", and two axioms that, among other things, imply that containment partially orders the spheres. Relaxing the requirement that all individuals be spheres yields a formalization of mereology far easier to exposit than Lesniewski's variant. Near the end of his life, Tarski wrote a very long letter, published as Tarski and Givant (1999), summarizing his work on geometry. Cardinal Algebras studied algebras whose models include the arithmetic of cardinal numbers. Ordinal Algebras sets out an algebra for the additive theory of order types. 
Cardinal, but not ordinal, addition commutes. In 1941, Tarski published an important paper on binary relations, which began the work on relation algebra and its metamathematics that occupied Tarski and his students for much of the balance of his life. While that exploration (and the closely related work of Roger Lyndon) uncovered some important limitations of relation algebra, Tarski also showed (Tarski and Givant 1987) that relation algebra can express most axiomatic set theory and Peano arithmetic. For an introduction to relation algebra, see Maddux (2006). In the late 1940s, Tarski and his students devised cylindric algebras, which are to first-order logic what the two-element Boolean algebra is to classical sentential logic. This work culminated in the two monographs by Tarski, Henkin, and Monk (1971, 1985). Work in logic Tarski's student, Robert Lawson Vaught, has ranked Tarski as one of the four greatest logicians of all time — along with Aristotle, Gottlob Frege, and Kurt Gödel. However, Tarski often expressed great admiration for Charles Sanders Peirce, particularly for his pioneering work in the logic of relations. Tarski produced axioms for logical consequence and worked on deductive systems, the algebra of logic, and the theory of definability. His semantic methods, which culminated in the model theory he and a number of his Berkeley students developed in the 1950s and 60s, radically transformed Hilbert's proof-theoretic metamathematics. Around 1930, Tarski developed an abstract theory of logical deductions that models some properties of logical calculi. Mathematically, what he described is just a finitary closure operator on a set (the set of sentences). In abstract algebraic logic, finitary closure operators are still studied under the name consequence operator, which was coined by Tarski. The set S represents a set of sentences, a subset T of S a theory, and cl(T) is the set of all sentences that follow from the theory. This abstract approach was applied to fuzzy logic (see Gerla 2000). Tarski's 1936 article "On the concept of logical consequence" argued that the conclusion of an argument will follow logically from its premises if and only if every model of the premises is a model of the conclusion. In 1937, he published a paper presenting clearly his views on the nature and purpose of the deductive method, and the role of logic in scientific studies. His high school and undergraduate teaching on logic and axiomatics culminated in a classic short text, published first in Polish, then in German translation, and finally in a 1941 English translation as Introduction to Logic and to the Methodology of Deductive Sciences. Tarski's 1969 "Truth and proof" considered both Gödel's incompleteness theorems and Tarski's undefinability theorem, and mulled over their consequences for the axiomatic method in mathematics. Truth in formalized languages In 1933, Tarski published a very long paper in Polish, titled "Pojęcie prawdy w językach nauk dedukcyjnych", "Setting out a mathematical definition of truth for formal languages." The 1935 German translation was titled "Der Wahrheitsbegriff in den formalisierten Sprachen", "The concept of truth in formalized languages", sometimes shortened to "Wahrheitsbegriff". An English translation appeared in the 1956 first edition of the volume Logic, Semantics, Metamathematics. This collection of papers from 1923 to 1938 is an event in 20th-century analytic philosophy, a contribution to symbolic logic, semantics, and the philosophy of language. 
For a brief discussion of its content, see Convention T (and also T-schema). A philosophical debate examines the extent to which Tarski's theory of truth for formalized languages can be seen as a correspondence theory of truth. The debate centers on how to read Tarski's condition of material adequacy for a true definition. That condition requires that the truth theory have the following as theorems for all sentences p of the language for which truth is being defined: "p" is true if and only if p. (where p is the proposition expressed by "p") The debate amounts to whether to read sentences of this form, such as as expressing merely a deflationary theory of truth or as embodying truth as a more substantial property (see Kirkham 1992). Logical consequence In 1936, Tarski published Polish and German versions of a lecture, “On the Concept of Following Logically", he had given the preceding year at the International Congress of Scientific Philosophy in Paris. A new English translation of this paper, Tarski (2002), highlights the many differences between the German and Polish versions of the paper and corrects a number of mistranslations in Tarski (1983). This publication set out the modern model-theoretic definition of (semantic) logical consequence, or at least the basis for it. Whether Tarski's notion was entirely the modern one turns on whether he intended to admit models with varying domains (and in particular, models with domains of different cardinalities). This question is a matter of some debate in the philosophical literature. John Etchemendy stimulated much of the discussion about Tarski's treatment of varying domains. Tarski ends by pointing out that his definition of logical consequence depends upon a division of terms into the logical and the extra-logical and he expresses some skepticism that any such objective division will be forthcoming. "What are Logical Notions?" can thus be viewed as continuing "On the Concept of Logical Consequence". Logical notions Tarski's "What are Logical Notions?" (Tarski 1986) is the published version of a talk that he gave originally in 1966 in London and later in 1973 in Buffalo; it was edited without his direct involvement by John Corcoran. It became the most cited paper in the journal History and Philosophy of Logic. In the talk, Tarski proposed demarcation of logical operations (which he calls "notions") from non-logical. The suggested criteria were derived from the Erlangen program of the 19th-century German mathematician Felix Klein. Mautner (in 1946), and possibly an article by the Portuguese mathematician José Sebastião e Silva, anticipated Tarski in applying the Erlangen Program to logic. The Erlangen program classified the various types of geometry (Euclidean geometry, affine geometry, topology, etc.) by the type of one-one transformation of space onto itself that left the objects of that geometrical theory invariant. (A one-to-one transformation is a functional map of the space onto itself so that every point of the space is associated with or mapped to one other point of the space. So, "rotate 30 degrees" and "magnify by a factor of 2" are intuitive descriptions of simple uniform one-one transformations.) Continuous transformations give rise to the objects of topology, similarity transformations to those of Euclidean geometry, and so on. As the range of permissible transformations becomes broader, the range of objects one is able to distinguish as preserved by the application of the transformations becomes narrower. 
Similarity transformations are fairly narrow (they preserve the relative distance between points) and thus allow us to distinguish relatively many things (e.g., equilateral triangles from non-equilateral triangles). Continuous transformations (which can intuitively be thought of as transformations which allow non-uniform stretching, compression, bending, and twisting, but no ripping or glueing) allow us to distinguish a polygon from an annulus (ring with a hole in the centre), but do not allow us to distinguish two polygons from each other. Tarski's proposal was to demarcate the logical notions by considering all possible one-to-one transformations (automorphisms) of a domain onto itself. By domain is meant the universe of discourse of a model for the semantic theory of logic. If one identifies the truth value True with the domain set and the truth-value False with the empty set, then the following operations are counted as logical under the proposal: Truth-functions: All truth-functions are admitted by the proposal. This includes, but is not limited to, all n-ary truth-functions for finite n. (It also admits of truth-functions with any infinite number of places.) Individuals: No individuals, provided the domain has at least two members. Predicates: the one-place total and null predicates, the former having all members of the domain in its extension and the latter having no members of the domain in its extension two-place total and null predicates, the former having the set of all ordered pairs of domain members as its extension and the latter with the empty set as extension the two-place identity predicate, with the set of all order-pairs <a,a> in its extension, where a is a member of the domain the two-place diversity predicate, with the set of all order pairs <a,b> where a and b are distinct members of the domain n-ary predicates in general: all predicates definable from the identity predicate together with conjunction, disjunction and negation (up to any ordinality, finite or infinite) Quantifiers: Tarski explicitly discusses only monadic quantifiers and points out that all such numerical quantifiers are admitted under his proposal. These include the standard universal and existential quantifiers as well as numerical quantifiers such as "Exactly four", "Finitely many", "Uncountably many", and "Between four and 9 million", for example. While Tarski does not enter into the issue, it is also clear that polyadic quantifiers are admitted under the proposal. These are quantifiers like, given two predicates Fx and Gy, "More(x, y)", which says "More things have F than have G." Set-Theoretic relations: Relations such as inclusion, intersection and union applied to subsets of the domain are logical in the present sense. Set membership: Tarski ended his lecture with a discussion of whether the set membership relation counted as logical in his sense. (Given the reduction of (most of) mathematics to set theory, this was, in effect, the question of whether most or all of mathematics is a part of logic.) He pointed out that set membership is logical if set theory is developed along the lines of type theory, but is extralogical if set theory is set out axiomatically, as in the canonical Zermelo–Fraenkel set theory. Logical notions of higher order: While Tarski confined his discussion to operations of first-order logic, there is nothing about his proposal that necessarily restricts it to first-order logic. 
(Tarski likely restricted his attention to first-order notions as the talk was given to a non-technical audience.) So, higher-order quantifiers and predicates are admitted as well. In some ways the present proposal is the obverse of that of Lindenbaum and Tarski (1936), who proved that all the logical operations of Bertrand Russell's and Whitehead's Principia Mathematica are invariant under one-to-one transformations of the domain onto itself. The present proposal is also employed in Tarski and Givant (1987). Solomon Feferman and Vann McGee further discussed Tarski's proposal in work published after his death. Feferman (1999) raises problems for the proposal and suggests a cure: replacing Tarski's preservation by automorphisms with preservation by arbitrary homomorphisms. In essence, this suggestion circumvents the difficulty Tarski's proposal has in dealing with a sameness of logical operation across distinct domains of a given cardinality and across domains of distinct cardinalities. Feferman's proposal results in a radical restriction of logical terms as compared to Tarski's original proposal. In particular, it ends up counting as logical only those operators of standard first-order logic without identity. Vann McGee (1996) provides a precise account of what operations are logical in the sense of Tarski's proposal in terms of expressibility in a language that extends first-order logic by allowing arbitrarily long conjunctions and disjunctions, and quantification over arbitrarily many variables. "Arbitrarily" includes a countable infinity. Selected publications Anthologies and collections 1986. The Collected Papers of Alfred Tarski, 4 vols. Givant, S. R., and McKenzie, R. N., eds. Birkhäuser. 1983 (1956). Logic, Semantics, Metamathematics: Papers from 1923 to 1938 by Alfred Tarski, Corcoran, J., ed. Hackett. 1st edition edited and translated by J. H. Woodger, Oxford Uni. Press. This collection contains translations from Polish of some of Tarski's most important papers of his early career, including The Concept of Truth in Formalized Languages and On the Concept of Logical Consequence discussed above. Original publications of Tarski 1930 Une contribution à la théorie de la mesure. Fund Math 15 (1930), 42–50. 1930. (with Jan Łukasiewicz). "Untersuchungen uber den Aussagenkalkul" ["Investigations into the Sentential Calculus"], Comptes Rendus des seances de la Societe des Sciences et des Lettres de Varsovie, Vol, 23 (1930) Cl. III, pp. 31–32 in Tarski (1983): 38–59. 1931. "Sur les ensembles définissables de nombres réels I", Fundamenta Mathematicae 17: 210–239 in Tarski (1983): 110–142. 1936. "Grundlegung der wissenschaftlichen Semantik", Actes du Congrès international de philosophie scientifique, Sorbonne, Paris 1935, vol. III, Language et pseudo-problèmes, Paris, Hermann, 1936, pp. 1–8 in Tarski (1983): 401–408. 1936. "Über den Begriff der logischen Folgerung", Actes du Congrès international de philosophie scientifique, Sorbonne, Paris 1935, vol. VII, Logique, Paris: Hermann, pp. 1–11 in Tarski (1983): 409–420. 1936 (with Adolf Lindenbaum). "On the Limitations of Deductive Theories" in Tarski (1983): 384–92. 1937. Einführung in die Mathematische Logik und in die Methodologie der Mathematik. Springer, Wien (Vienna). 1994 (1941). Introduction to Logic and to the Methodology of Deductive Sciences. Dover. 1941. "On the calculus of relations", Journal of Symbolic Logic 6: 73–89. 1944. 
"The Semantical Concept of Truth and the Foundations of Semantics," Philosophy and Phenomenological Research 4: 341–75. 1948. A decision method for elementary algebra and geometry. Santa Monica CA: RAND Corp. 1949. Cardinal Algebras. Oxford Univ. Press. 1953 (with Mostowski and Raphael Robinson). Undecidable theories. North Holland. 1956. Ordinal algebras. North-Holland. 1965. "A simplified formalization of predicate logic with identity", Archiv für Mathematische Logik und Grundlagenforschung 7: 61-79 1969. "Truth and Proof", Scientific American 220: 63–77. 1971 (with Leon Henkin and Donald Monk). Cylindric Algebras: Part I. North-Holland. 1985 (with Leon Henkin and Donald Monk). Cylindric Algebras: Part II. North-Holland. 1986. "What are Logical Notions?", Corcoran, J., ed., History and Philosophy of Logic 7: 143–54. 1987 (with Steven Givant). A Formalization of Set Theory Without Variables. Vol.41 of American Mathematical Society colloquium publications. Providence RI: American Mathematical Society. . Review 1999 (with Steven Givant). "Tarski's system of geometry", Bulletin of Symbolic Logic 5: 175–214. 2002. "On the Concept of Following Logically" (Magda Stroińska and David Hitchcock, trans.) History and Philosophy of Logic 23: 155–196. See also History of philosophy in Poland Cylindric algebra Interpretability Weak interpretability List of things named after Alfred Tarski Timeline of Polish science and technology References Further reading Biographical references Patterson, Douglas. Alfred Tarski: Philosophy of Language and Logic (Palgrave Macmillan; 2012) 262 pages; biography focused on his work from the late-1920s to the mid-1930s, with particular attention to influences from his teachers Stanislaw Lesniewski and Tadeusz Kotarbinski. Logic literature The December 1986 issue of the Journal of Symbolic Logic surveys Tarski's work on model theory (Robert Vaught), algebra (Jonsson), undecidable theories (McNulty), algebraic logic (Donald Monk), and geometry (Szczerba). The March 1988 issue of the same journal surveys his work on axiomatic set theory (Azriel Levy), real closed fields (Lou Van Den Dries), decidable theory (Doner and Wilfrid Hodges), metamathematics (Blok and Pigozzi), truth and logical consequence (John Etchemendy), and general philosophy (Patrick Suppes). Blok, W. J.; Pigozzi, Don, "Alfred Tarski's Work on General Metamathematics", The Journal of Symbolic Logic, Vol. 53, No. 1 (Mar., 1988), pp. 36–50 Chang, C.C., and Keisler, H.J., 1973. Model Theory. North-Holland, Amsterdam. American Elsevier, New York. Corcoran, John, and Sagüillo, José Miguel, 2011. "The Absence of Multiple Universes of Discourse in the 1936 Tarski Consequence-Definition Paper", History and Philosophy of Logic 32: 359–80. Corcoran, John, and Weber, Leonardo, 2015. "Tarski's convention T: condition beta", South American Journal of Logic. 1, 3–32. Etchemendy, John, 1999. The Concept of Logical Consequence. Stanford CA: CSLI Publications. Gerla, G. (2000) Fuzzy Logic: Mathematical Tools for Approximate Reasoning. Kluwer Academic Publishers. Grattan-Guinness, Ivor, 2000. The Search for Mathematical Roots 1870-1940. Princeton Uni. Press. Kirkham, Richard, 1992. Theories of Truth. MIT Press. Maddux, Roger D., 2006. Relation Algebras, vol. 150 in "Studies in Logic and the Foundations of Mathematics", Elsevier Science. Popper, Karl R., 1972, Rev. Ed. 1979, "Philosophical Comments on Tarski's Theory of Truth", with Addendum, Objective Knowledge, Oxford: 319–340. Smith, James T., 2010. 
"Definitions and Nondefinability in Geometry", American Mathematical Monthly 117:475–89. Wolenski, Jan, 1989. Logic and Philosophy in the Lvov–Warsaw School. Reidel/Kluwer. External links Stanford Encyclopedia of Philosophy: Tarski's Truth Definitions by Wilfred Hodges. Alfred Tarski by Mario Gómez-Torrente. Algebraic Propositional Logic by Ramon Jansana. Includes a fairly detailed discussion of Tarski's work on these topics. Tarski's Semantic Theory on the Internet Encyclopedia of Philosophy. 1901 births 1983 deaths 20th-century American mathematicians 20th-century American philosophers 20th-century American essayists 20th-century Polish mathematicians Jewish American atheists American logicians American male essayists American male non-fiction writers Analytic philosophers Converts to Roman Catholicism from Judaism Computability theorists Jewish American academics Jewish philosophers Linguistic turn Members of the Polish Academy of Sciences Members of the Royal Netherlands Academy of Arts and Sciences Members of the United States National Academy of Sciences Model theorists People from Warsaw Governorate Philosophers of language Philosophers of logic Philosophers of mathematics Philosophers of science Polish atheists Polish emigrants to the United States Polish essayists Polish logicians Polish male non-fiction writers Polish people of Jewish descent 20th-century Polish philosophers Polish set theorists Scientists from Warsaw University of California, Berkeley faculty University of California, Berkeley people University of California, Berkeley staff University of Warsaw alumni 20th-century American male writers Corresponding fellows of the British Academy
Alfred Tarski
[ "Mathematics" ]
6,593
[ "Philosophers of mathematics", "Model theorists", "Model theory" ]
59,953
https://en.wikipedia.org/wiki/Van%20Allen%20radiation%20belt
The Van Allen radiation belt is a zone of energetic charged particles, most of which originate from the solar wind, that are captured by and held around a planet by that planet's magnetosphere. Earth has two such belts, and sometimes others may be temporarily created. The belts are named after James Van Allen, who published an article describing the belts in 1958. Earth's two main belts extend from an altitude of about above the surface, in which region radiation levels vary. The belts are in the inner region of Earth's magnetic field. They trap energetic electrons and protons. Other nuclei, such as alpha particles, are less prevalent. Most of the particles that form the belts are thought to come from the solar wind while others arrive as cosmic rays. By trapping the solar wind, the magnetic field deflects those energetic particles and protects the atmosphere from destruction. The belts endanger satellites, which must have their sensitive components protected with adequate shielding if they spend significant time near that zone. Apollo astronauts going through the Van Allen belts received a very low and harmless dose of radiation. In 2013, the Van Allen Probes detected a transient, third radiation belt, which persisted for four weeks. Discovery Kristian Birkeland, Carl Størmer, Nicholas Christofilos, and Enrico Medi had investigated the possibility of trapped charged particles in 1895, forming a theoretical basis for the formation of radiation belts. The second Soviet satellite Sputnik 2 which had detectors designed by Sergei Vernov, followed by the US satellites Explorer 1 and Explorer 3, confirmed the existence of the belt in early 1958, later named after James Van Allen from the University of Iowa. The trapped radiation was first mapped by Explorer 4, Pioneer 3, and Luna 1. The term Van Allen belts refers specifically to the radiation belts surrounding Earth; however, similar radiation belts have been discovered around other planets. The Sun does not support long-term radiation belts, as it lacks a stable, global dipole field. The Earth's atmosphere limits the belts' particles to regions above 200–1,000 km, (124–620 miles) while the belts do not extend past 8 Earth radii RE. The belts are confined to a volume which extends about 65° on either side of the celestial equator. Research The NASA Van Allen Probes mission aims at understanding (to the point of predictability) how populations of relativistic electrons and ions in space form or change in response to changes in solar activity and the solar wind. NASA Institute for Advanced Concepts–funded studies have proposed magnetic scoops to collect antimatter that naturally occurs in the Van Allen belts of Earth, although only about 10 micrograms of antiprotons are estimated to exist in the entire belt. The Van Allen Probes mission successfully launched on August 30, 2012. The primary mission was scheduled to last two years with expendables expected to last four. The probes were deactivated in 2019 after running out of fuel and are expected to deorbit during the 2030s. NASA's Goddard Space Flight Center manages the Living With a Star program—of which the Van Allen Probes were a project, along with Solar Dynamics Observatory (SDO). The Applied Physics Laboratory was responsible for the implementation and instrument management for the Van Allen Probes. Radiation belts exist around other planets and moons in the solar system that have magnetic fields powerful and stable enough to sustain them. 
Radiation belts have been detected at Jupiter, Saturn, Uranus and Neptune through in-situ observations, such as by Galileo (spacecraft) and Juno (spacecraft) at Jupiter, Cassini–Huygens at Saturn and fly-bys from the Voyager program and Pioneer program. Observations of radio emissions from highly energetic particles that are trapped in a planets magnetic field have also been used to remotely detect radiation belts, including at Jupiter and at the ultracool dwarf LSR J1835+3259. It is possible that Mercury (planet) may be able to trap charged particles in its magnetic field, although its highly dynamic magnetosphere (which varies on the order of minutes ) may not be able to sustain stable radiation belts. Venus and Mars do not have radiation belts, as their magnetospheric configurations do not trap energetic charged particles in orbit around the planet. Geomagnetic storms can cause electron density to increase or decrease relatively quickly (i.e., approximately one day or less). Longer-timescale processes determine the overall configuration of the belts. After electron injection increases electron density, electron density is often observed to decay exponentially. Those decay time constants are called "lifetimes." Measurements from the Van Allen Probe B's Magnetic Electron Ion Spectrometer (MagEIS) show long electron lifetimes (i.e., longer than 100 days) in the inner belt; short electron lifetimes of around one or two days are observed in the "slot" between the belts; and energy-dependent electron lifetimes of roughly five to 20 days are found in the outer belt. Inner belt The inner Van Allen Belt extends typically from an altitude of 0.2 to 2 Earth radii (L values of 1.2 to 3) or to above the Earth. In certain cases, when solar activity is stronger or in geographical areas such as the South Atlantic Anomaly, the inner boundary may decline to roughly 200 km above the Earth's surface. The inner belt contains high concentrations of electrons in the range of hundreds of keV and energetic protons with energies exceeding 100 MeV—trapped by the relatively strong magnetic fields in the region (as compared to the outer belt). It is thought that proton energies exceeding 50 MeV in the lower belts at lower altitudes are the result of the beta decay of neutrons created by cosmic ray collisions with nuclei of the upper atmosphere. The source of lower energy protons is believed to be proton diffusion, due to changes in the magnetic field during geomagnetic storms. Due to the slight offset of the belts from Earth's geometric center, the inner Van Allen belt makes its closest approach to the surface at the South Atlantic Anomaly. In March 2014, a pattern resembling "zebra stripes" was observed in the radiation belts by the Radiation Belt Storm Probes Ion Composition Experiment (RBSPICE) onboard Van Allen Probes. The initial theory proposed in 2014 was that—due to the tilt in Earth's magnetic field axis—the planet's rotation generated an oscillating, weak electric field that permeates through the entire inner radiation belt. A 2016 study instead concluded that the zebra stripes were an imprint of ionospheric winds on radiation belts. Outer belt The outer belt consists mainly of high-energy (0.1–10 MeV) electrons trapped by the Earth's magnetosphere. It is more variable than the inner belt, as it is more easily influenced by solar activity. It is almost toroidal in shape, beginning at an altitude of 3 Earth radii and extending to 10 Earth radii (RE)— above the Earth's surface. 
Its greatest intensity is usually around 4 to 5 RE. The outer electron radiation belt is mostly produced by inward radial diffusion and local acceleration due to transfer of energy from whistler-mode plasma waves to radiation belt electrons. Radiation belt electrons are also constantly removed by collisions with Earth's atmosphere, losses to the magnetopause, and their outward radial diffusion. The gyroradii of energetic protons would be large enough to bring them into contact with the Earth's atmosphere. Within this belt, the electrons have a high flux, and at the outer edge (close to the magnetopause), where geomagnetic field lines open into the geomagnetic "tail", the flux of energetic electrons can drop to the low interplanetary levels within about —a decrease by a factor of 1,000. In 2014, it was discovered that the inner edge of the outer belt is characterized by a very sharp transition, below which highly relativistic electrons (>5 MeV) cannot penetrate. The reason for this shield-like behavior is not well understood. The trapped particle population of the outer belt is varied, containing electrons and various ions. Most of the ions are in the form of energetic protons, but a certain percentage are alpha particles and O+ oxygen ions—similar to those in the ionosphere but much more energetic. This mixture of ions suggests that ring current particles probably originate from more than one source. The outer belt is larger than the inner belt, and its particle population fluctuates widely. Energetic (radiation) particle fluxes can increase and decrease dramatically in response to geomagnetic storms, which are themselves triggered by magnetic field and plasma disturbances produced by the Sun. The increases are due to storm-related injections and acceleration of particles from the tail of the magnetosphere. Another cause of variability of the outer belt particle populations is wave–particle interactions with various plasma waves in a broad range of frequencies. On February 28, 2013, a third radiation belt—consisting of high-energy ultrarelativistic charged particles—was reported to be discovered. In a news conference by NASA's Van Allen Probe team, it was stated that this third belt is a product of a coronal mass ejection from the Sun. It has been represented as a separate creation which splits the Outer Belt, like a knife, on its outer side, and exists separately as a storage container of particles for a month's time, before merging once again with the Outer Belt. The unusual stability of this third, transient belt has been explained as due to a 'trapping' by the Earth's magnetic field of ultrarelativistic particles as they are lost from the second, traditional outer belt. While the outer zone, which forms and disappears over a day, is highly variable due to interactions with the atmosphere, the ultrarelativistic particles of the third belt are thought not to scatter into the atmosphere, as they are too energetic to interact with atmospheric waves at low latitudes. This absence of scattering and the trapping allows them to persist for a long time, finally only being destroyed by an unusual event, such as the shock wave from the Sun. Flux values In the belts, at a given point, the flux of particles of a given energy decreases sharply with energy. At the magnetic equator, electrons of energies exceeding 500 keV (resp. 5 MeV) have omnidirectional fluxes ranging from 1.2×10^6 (resp. 3.7×10^4) up to 9.4×10^9 (resp. 2×10^7) particles per square centimeter per second.
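The electron "lifetimes" measured by MagEIS (quoted earlier in this article) can be combined with a simple exponential-decay model to estimate how long an injected electron population persists in each region. The short Python sketch below is only an arithmetic illustration under that assumption—the lifetime values are the rough figures quoted above, the initial flux is an arbitrary illustrative number, and real belt dynamics also involve the injection, diffusion, and loss processes described in this section:

```python
import math

# Approximate electron e-folding "lifetimes" quoted above (Van Allen Probe B / MagEIS):
# more than 100 days in the inner belt, ~1-2 days in the slot, ~5-20 days in the outer belt.
lifetimes_days = {"inner belt": 100.0, "slot region": 1.5, "outer belt": 10.0}

def flux_after(n0, tau_days, t_days):
    """Remaining flux after t_days, assuming pure exponential decay n(t) = n0 * exp(-t / tau)."""
    return n0 * math.exp(-t_days / tau_days)

n0 = 1.0e6  # arbitrary initial flux, particles per cm^2 per s (illustrative only)
for region, tau in lifetimes_days.items():
    ten_day_flux = flux_after(n0, tau, 10.0)
    thousandfold_days = tau * math.log(1000)  # time for the population to fall 1,000-fold
    print(f"{region:>12}: after 10 days {ten_day_flux:9.2e} /cm^2/s; "
          f"1,000-fold decay in ~{thousandfold_days:5.1f} days")
```

Under this toy model the slot region empties within days of an injection while the inner belt barely changes, which is consistent with the qualitative picture given above.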
The proton belts contain protons with kinetic energies ranging from about 100 keV, which can penetrate 0.6 μm of lead, to over 400 MeV, which can penetrate 143 mm of lead. Most published flux values for the inner and outer belts may not show the maximum probable flux densities that are possible in the belts. There is a reason for this discrepancy: the flux density and the location of the peak flux is variable, depending primarily on solar activity, and the number of spacecraft with instruments observing the belt in real time has been limited. The Earth has not yet experienced a solar storm of Carrington event intensity while spacecraft with the proper instruments have been available to observe the event. Radiation levels in the belts would be dangerous to humans if they were exposed for an extended period of time. The Apollo missions minimised hazards for astronauts by sending spacecraft at high speeds through the thinner areas of the upper belts, bypassing inner belts completely, except for the Apollo 14 mission where the spacecraft traveled through the heart of the trapped radiation belts. Antimatter confinement In 2011, a study confirmed earlier speculation that the Van Allen belt could confine antiparticles. The Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) experiment detected levels of antiprotons orders of magnitude higher than are expected from normal particle decays while passing through the South Atlantic Anomaly. This suggests the Van Allen belts confine a significant flux of antiprotons produced by the interaction of the Earth's upper atmosphere with cosmic rays. The energy of the antiprotons has been measured in the range from 60 to 750 MeV. The very high energy released in antimatter annihilation has led to proposals to harness these antiprotons for spacecraft propulsion. The concept relies on the development of antimatter collectors and containers. Implications for space travel Spacecraft travelling beyond low Earth orbit enter the zone of radiation of the Van Allen belts. Beyond the belts, they face additional hazards from cosmic rays and solar particle events. A region between the inner and outer Van Allen belts lies at 2 to 4 Earth radii and is sometimes referred to as the "safe zone". Solar cells, integrated circuits, and sensors can be damaged by radiation. Geomagnetic storms occasionally damage electronic components on spacecraft. Miniaturization and digitization of electronics and logic circuits have made satellites more vulnerable to radiation, as the total electric charge in these circuits is now small enough so as to be comparable with the charge of incoming ions. Electronics on satellites must be hardened against radiation to operate reliably. The Hubble Space Telescope, among other satellites, often has its sensors turned off when passing through regions of intense radiation. A satellite shielded by 3 mm of aluminium in an elliptic orbit () passing the radiation belts will receive about 2,500 rem (25 Sv) per year. (For comparison, a full-body dose of 5 Sv is deadly.) Almost all radiation will be received while passing the inner belt. The Apollo missions marked the first event where humans traveled through the Van Allen belts, which was one of several radiation hazards known by mission planners. The astronauts had low exposure in the Van Allen belts due to the short period of time spent flying through them. Astronauts' overall exposure was actually dominated by solar particles once outside Earth's magnetic field. 
The total radiation received by the astronauts varied from mission-to-mission but was measured to be between 0.16 and 1.14 rads (1.6 and 11.4 mGy), much less than the standard of 5 rem (50 mSv) per year set by the United States Atomic Energy Commission for people who work with radioactivity. Causes It is generally understood that the inner and outer Van Allen belts result from different processes. The inner belt is mainly composed of energetic protons produced from the decay of so-called neutrons, which are themselves the result of cosmic ray collisions in the upper atmosphere. The outer Van Allen belt consists mainly of electrons. They are injected from the geomagnetic tail following geomagnetic storms, and are subsequently energized through wave-particle interactions. In the inner belt, particles that originate from the Sun are trapped in the Earth's magnetic field. Particles spiral along the magnetic lines of flux as they move "latitudinally" along those lines. As particles move toward the poles, the magnetic field line density increases, and their "latitudinal" velocity is slowed and can be reversed, deflecting the particles back towards the equatorial region, causing them to bounce back and forth between the Earth's poles. In addition to both spiralling around and moving along the flux lines, the electrons drift slowly in an eastward direction, while the protons drift westward. The gap between the inner and outer Van Allen belts is sometimes called the "safe zone" or "safe slot", and is the location of medium Earth orbits. The gap is caused by the VLF radio waves, which scatter particles in pitch angle, which adds new ions to the atmosphere. Solar outbursts can also dump particles into the gap, but those drain out in a matter of days. The VLF radio waves were previously thought to be generated by turbulence in the radiation belts, but recent work by J.L. Green of the Goddard Space Flight Center compared maps of lightning activity collected by the Microlab 1 spacecraft with data on radio waves in the radiation-belt gap from the IMAGE spacecraft; the results suggest that the radio waves are actually generated by lightning within Earth's atmosphere. The generated radio waves strike the ionosphere at the correct angle to pass through only at high latitudes, where the lower ends of the gap approach the upper atmosphere. These results are still being debated in the scientific community. Proposed removal Draining the charged particles from the Van Allen belts would open up new orbits for satellites and make travel safer for astronauts. High Voltage Orbiting Long Tether, or HiVOLT, is a concept proposed by Russian physicist V. V. Danilov and further refined by Robert P. Hoyt and Robert L. Forward for draining and removing the radiation fields of the Van Allen radiation belts that surround the Earth. Another proposal for draining the Van Allen belts involves beaming very-low-frequency (VLF) radio waves from the ground into the Van Allen belts. Draining radiation belts around other planets has also been proposed, for example, before exploring Europa, which orbits within Jupiter's radiation belt. As of 2024, it remains uncertain if there are any negative unintended consequences to removing these radiation belts. See also Dipole model of the Earth's magnetic field L-shell List of artificial radiation belts Space weather Paramagnetism Explanatory notes Citations Additional sources Part I: Radial transport, pp. 1679–1693, ; Part II: Local acceleration and loss, pp. 1694–1713, . 
External links An explanation of the belts by David P. Stern and Mauricio Peredo Background: Trapped particle radiation models—Introduction to the trapped radiation belts by SPENVIS SPENVIS—Space Environment, Effects, and Education System—Gateway to the SPENVIS orbital dose calculation software The Van Allen Probes Web Site Johns Hopkins University Applied Physics Laboratory 1958 in science Articles containing video clips Geomagnetism Space physics Space plasmas
Van Allen radiation belt
[ "Physics", "Astronomy" ]
3,669
[ "Space plasmas", "Outer space", "Astrophysics", "Space physics" ]
59,958
https://en.wikipedia.org/wiki/Power%20series
In mathematics, a power series (in one variable) is an infinite series of the form where represents the coefficient of the nth term and c is a constant called the center of the series. Power series are useful in mathematical analysis, where they arise as Taylor series of infinitely differentiable functions. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function. In many situations, the center c is equal to zero, for instance for Maclaurin series. In such cases, the power series takes the simpler form The partial sums of a power series are polynomials, the partial sums of the Taylor series of an analytic function are a sequence of converging polynomial approximations to the function at the center, and a converging power series can be seen as a kind of generalized polynomial with infinitely many terms. Conversely, every polynomial is a power series with only finitely many non-zero terms. Beyond their role in mathematical analysis, power series also occur in combinatorics as generating functions (a kind of formal power series) and in electronic engineering (under the name of the Z-transform). The familiar decimal notation for real numbers can also be viewed as an example of a power series, with integer coefficients, but with the argument x fixed at . In number theory, the concept of p-adic numbers is also closely related to that of a power series. Examples Polynomial Every polynomial of degree can be expressed as a power series around any center , where all terms of degree higher than have a coefficient of zero. For instance, the polynomial can be written as a power series around the center as or around the center as One can view power series as being like "polynomials of infinite degree", although power series are not polynomials in the strict sense. Geometric series, exponential function and sine The geometric series formula which is valid for , is one of the most important examples of a power series, as are the exponential function formula and the sine formula valid for all real x. These power series are examples of Taylor series (or, more specifically, of Maclaurin series). On the set of exponents Negative powers are not permitted in an ordinary power series; for instance, is not considered a power series (although it is a Laurent series). Similarly, fractional powers such as are not permitted; fractional powers arise in Puiseux series. The coefficients must not depend on thus for instance is not a power series. Radius of convergence A power series is convergent for some values of the variable , which will always include since and the sum of the series is thus for . The series may diverge for other values of , possibly all of them. If is not the only point of convergence, then there is always a number with such that the series converges whenever and diverges whenever . The number is called the radius of convergence of the power series; in general it is given as or, equivalently, This is the Cauchy–Hadamard theorem; see limit superior and limit inferior for an explanation of the notation. The relation is also satisfied, if this limit exists. The set of the complex numbers such that is called the disc of convergence of the series. The series converges absolutely inside its disc of convergence and it converges uniformly on every compact subset of the disc of convergence. For , there is no general statement on the convergence of the series. 
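As a worked illustration of the convergence criterion just stated (standard results, included only to make the definition concrete), the Cauchy–Hadamard expression can be evaluated directly for the geometric and exponential series:

```latex
% Cauchy–Hadamard: 1/r = limsup_{n -> infinity} |a_n|^{1/n}
% Geometric series, a_n = 1 for all n:
\frac{1}{r} \;=\; \limsup_{n\to\infty} |1|^{1/n} \;=\; 1
\qquad\Longrightarrow\qquad r = 1 .
% Exponential series, a_n = 1/n!:
\frac{1}{r} \;=\; \limsup_{n\to\infty} \left(\tfrac{1}{n!}\right)^{1/n} \;=\; 0
\qquad\Longrightarrow\qquad r = \infty .
```

On the boundary of the disc of convergence itself these formulas give no information, which is the case addressed by Abel's theorem below.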
However, Abel's theorem states that if the series is convergent for some value such that , then the sum of the series for is the limit of the sum of the series for where is a real variable less than that tends to . Operations on power series Addition and subtraction When two functions f and g are decomposed into power series around the same center c, the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if and then The sum of two power series will have a radius of convergence of at least the smaller of the two radii of convergence of the two series, but possibly larger than either of the two. For instance it is not true that if two power series and have the same radius of convergence, then also has this radius of convergence: if and , for instance, then both series have the same radius of convergence of 1, but the series has a radius of convergence of 3. Multiplication and division With the same definitions for and , the power series of the product and quotient of the functions can be obtained as follows: The sequence is known as the Cauchy product of the sequences and For division, if one defines the sequence by then and one can solve recursively for the terms by comparing coefficients. Solving the corresponding equations yields the formulae based on determinants of certain matrices of the coefficients of and Differentiation and integration Once a function is given as a power series as above, it is differentiable on the interior of the domain of convergence. It can be differentiated and integrated by treating every term separately since both differentiation and integration are linear transformations of functions: Both of these series have the same radius of convergence as the original series. Analytic functions A function f defined on some open subset U of R or C is called analytic if it is locally given by a convergent power series. This means that every a ∈ U has an open neighborhood V ⊆ U, such that there exists a power series with center a that converges to f(x) for every x ∈ V. Every power series with a positive radius of convergence is analytic on the interior of its region of convergence. All holomorphic functions are complex-analytic. Sums and products of analytic functions are analytic, as are quotients as long as the denominator is non-zero. If a function is analytic, then it is infinitely differentiable, but in the real case the converse is not generally true. For an analytic function, the coefficients an can be computed as where denotes the nth derivative of f at c, and . This means that every analytic function is locally represented by its Taylor series. The global form of an analytic function is completely determined by its local behavior in the following sense: if f and g are two analytic functions defined on the same connected open set U, and if there exists an element such that for all , then for all . If a power series with radius of convergence r is given, one can consider analytic continuations of the series, that is, analytic functions f which are defined on larger sets than and agree with the given power series on this set. The number r is maximal in the following sense: there always exists a complex number with such that no analytic continuation of the series can be defined at . The power series expansion of the inverse function of an analytic function can be determined using the Lagrange inversion theorem. 
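The termwise operations described in this section can be imitated directly with truncated coefficient lists. The following Python sketch is illustrative only—the helper names are invented for this example and the series are cut off at a fixed number of terms—but it shows the Cauchy product and termwise differentiation in action on the Maclaurin coefficients of the exponential and sine functions:

```python
import math

def cauchy_product(a, b):
    """Coefficients of the product series: c_k = sum_{i=0}^{k} a_i * b_{k-i}."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def derivative(a):
    """Termwise derivative: d/dx sum a_n x^n = sum (n+1) a_{n+1} x^n."""
    return [(n + 1) * a[n + 1] for n in range(len(a) - 1)]

def evaluate(a, x):
    """Evaluate the truncated series sum a_n x^n by Horner's scheme."""
    total = 0.0
    for coeff in reversed(a):
        total = total * x + coeff
    return total

N = 12
exp_coeffs = [1 / math.factorial(n) for n in range(N)]                    # e^x
sin_coeffs = [0 if n % 2 == 0 else (-1) ** ((n - 1) // 2) / math.factorial(n)
              for n in range(N)]                                          # sin x

prod = cauchy_product(exp_coeffs, sin_coeffs)
x = 0.1
print(evaluate(prod, x))                                  # ~ e^0.1 * sin(0.1)
print(math.exp(x) * math.sin(x))                          # reference value
print(evaluate(derivative(sin_coeffs), x), math.cos(x))   # termwise derivative ~ cos(0.1)
```

Evaluating the product series near the center reproduces exp(x)·sin(x) up to the truncation error, and the termwise derivative of the sine coefficients reproduces cos(x), matching the statement that differentiation preserves the radius of convergence.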
Behavior near the boundary The sum of a power series with a positive radius of convergence is an analytic function at every point in the interior of the disc of convergence. However, different behavior can occur at points on the boundary of that disc. For example: Divergence while the sum extends to an analytic function: has radius of convergence equal to and diverges at every point of . Nevertheless, the sum in is , which is analytic at every point of the plane except for . Convergent at some points divergent at others: has radius of convergence . It converges for , while it diverges for . Absolute convergence at every point of the boundary: has radius of convergence , while it converges absolutely, and uniformly, at every point of due to Weierstrass M-test applied with the hyper-harmonic convergent series . Convergent on the closure of the disc of convergence but not continuous sum: Sierpiński gave an example of a power series with radius of convergence , convergent at all points with , but the sum is an unbounded function and, in particular, discontinuous. A sufficient condition for one-sided continuity at a boundary point is given by Abel's theorem. Formal power series In abstract algebra, one attempts to capture the essence of power series without being restricted to the fields of real and complex numbers, and without the need to talk about convergence. This leads to the concept of formal power series, a concept of great utility in algebraic combinatorics. Power series in several variables An extension of the theory is necessary for the purposes of multivariable calculus. A power series is here defined to be an infinite series of the form where is a vector of natural numbers, the coefficients are usually real or complex numbers, and the center and argument are usually real or complex vectors. The symbol is the product symbol, denoting multiplication. In the more convenient multi-index notation this can be written where is the set of natural numbers, and so is the set of ordered n-tuples of natural numbers. The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series is absolutely convergent in the set between two hyperbolas. (This is an example of a log-convex set, in the sense that the set of points , where lies in the above region, is a convex set. More generally, one can show that when c=0, the interior of the region of absolute convergence is always a log-convex set in this sense.) On the other hand, in the interior of this region of convergence one may differentiate and integrate under the series sign, just as one may with ordinary power series. Order of a power series Let be a multi-index for a power series . The order of the power series f is defined to be the least value such that there is aα ≠ 0 with , or if f ≡ 0. In particular, for a power series f(x) in a single variable x, the order of f is the smallest power of x with a nonzero coefficient. This definition readily extends to Laurent series. Notes References External links Powers of Complex Numbers by Michael Schreiber, Wolfram Demonstrations Project. Real analysis Complex analysis Multivariable calculus Mathematical series
Power series
[ "Mathematics" ]
2,053
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus", "Multivariable calculus" ]
59,988
https://en.wikipedia.org/wiki/Aconitum
Aconitum (), also known as aconite, monkshood, wolfsbane, leopard's bane, devil's helmet, or blue rocket, is a genus of over 250 species of flowering plants belonging to the family Ranunculaceae. These herbaceous perennial plants are chiefly native to the mountainous parts of the Northern Hemisphere in North America, Europe, and Asia, growing in the moisture-retentive but well-draining soils of mountain meadows. Most Aconitum species are extremely poisonous and must be handled very carefully. Several Aconitum hybrids, such as the Arendsii form of Aconitum carmichaelii, have won gardening awards—such as the Royal Horticultural Society's Award of Garden Merit. Some are used by florists. Etymology The name aconitum comes from the Greek word , which may derive from the Greek akon for dart or javelin, the tips of which were poisoned with the substance, or from akonae, because of the rocky ground on which the plant was thought to grow. The Greek name lycoctonum, which translates literally to "wolf's bane", is thought to indicate the use of its juice to poison arrows or baits used to kill wolves. The English name monkshood refers to the cylindrical helmet, called the galea, distinguishing the flower. Description The dark green leaves of Aconitum species lack stipules. They are palmate or deeply palmately lobed with five to seven segments. Each segment again is trilobed with coarse sharp teeth. The leaves have a spiral (alternate) arrangement. The lower leaves have long petioles. The tall, erect stem is crowned by racemes of large blue, purple, white, yellow, or pink zygomorphic flowers with numerous stamens. They are distinguishable by having one of the five petaloid sepals (the posterior one), called the galea, in the form of a cylindrical helmet, hence the English name monkshood. Two to 10 petals are present. The two upper petals are large and are placed under the hood of the calyx and are supported on long stalks. They have a hollow spur at their apex, containing the nectar. The other petals are small and scale-like or nonforming. The three to five carpels are partially fused at the base. The fruit is an aggregate of follicles, a follicle being a dry, many-seeded structure. Unlike with many species from genera (and their hybrids) in Ranunculaceae (and the related Papaveroideae subfamily), there are no double-flowered forms. Color range A medium to dark semi-saturated blue-purple is the typical flower color for Aconitum species. Aconitum species tend to be variable enough in form and color in the wild to cause debate and confusion among experts when it comes to species classification boundaries. The overall color range of the genus is rather limited, although the palette has been extended a small amount with hybridization. In the wild, some Aconitum blue-purple shades can be very dark. In cultivation the shades do not reach this level of depth. Aside from blue-purple—white, very pale greenish-white, creamy white, and pale greenish-yellow are also somewhat common in nature. Wine red (or red-purple) occurs in a hybrid of the climber Aconitum hemsleyanum. There is a pale semi-saturated pink produced by cultivation as well as bicolor hybrids (e.g. white centers with blue-purple edges). Purplish shades range from very dark blue-purple to a very pale lavender that is quite greyish. The latter occurs in the "Stainless Steel" hybrid. 
Neutral blue (rather than purplish or greenish), greenish-blue, and intense blues, available in some related Delphinium plants—particularly Delphinium grandiflorum—do not occur in this genus. Aconitum plants that have purplish-blue flowers are often inaccurately referred to as having blue flowers, even though the purple tone dominates. If there are species with true (neutral) blue or greenish-blue flowers they are rare and do not occur in cultivation. Also unlike the genus Delphinium, there are no bright red nor intense pink Aconitum flowers, as none known are pollinated by hummingbirds. There are no orange-flowered varieties nor any that are green. Aconitum is typically more intense in color than Helleborus but less intense than Delphinium. There are no blackish flowers in Aconitum, unlike with Helleborus. Monkshood (Aconitum napellus) produces light indigo-blue flowers, while Wolf's Bane (Aconitum vulparia) produces whitish or straw-yellow flowers. Horticultural trade morphology The lack of double-flowered forms in the horticultural trade stands in contrast with the other genera of Ranunculaceae used regularly in gardens. This includes one major genus that is known solely by most gardeners for a double-flowered form of one species—Ranunculus asiaticus, known colloquially in the trade as "Ranunculus". The Ranunculus genus contains approximately 500 species. One other species of Ranunculus has seen minor use in gardens, the 'Flore Pleno' (doubled) form of Ranunculus acris. Doubled forms of Consolida and Delphinium dominate the horticultural trade while single forms of Anemone, Aquilegia, Clematis, Helleborus, Pulsatilla—and the related Papaver—retain some popularity. No doubled forms of Aconitum are known. Ecology Aconitum species have been recorded as food plant of the caterpillars of several moths. The yellow tiger moth Arctia flavia, and the purple-shaded gem Euchalcia variabilis are at home on A. vulparia. The engrailed Ectropis crepuscularia, yellow-tail Euproctis similis, mouse moth Amphipyra tragopoginis, pease blossom Periphanes delphinii, and Mniotype bathensis, have been observed feeding on A. napellus. The purple-lined sallow Pyrrhia exprimens, and Blepharita amica were found eating from A. septentrionale. The dot moth Melanchra persicariae occurs both on A. septentrionale and A. intermedium. The golden plusia Polychrysia moneta is hosted by A. vulparia, A. napellus, A. septentrionale, and A. intermedium. Other moths associated with Aconitum species include the wormwood pug Eupithecia absinthiata, satyr pug E. satyrata, Aterpia charpentierana, and A. corticana. It is also the primary food source for the Old World bumblebees Bombus consobrinus and Bombus gerstaeckeri. Aconitum flowers are pollinated by long-tongued bumblebees. Bumblebees have the strength to open the flowers and reach the single nectary at the top of the flower on its inside. Some short-tongued bees will bore holes into the tops of the flowers to steal nectar. However, alkaloids in the nectar function as a deterrent for species unsuited to pollination. The effect is greater in certain species, such as Aconitum napellus, than in others, such as Aconitum lycoctonum. Unlike the species with blue-purple flowers such as A. napellus, A. lycoctonum—which has off-white to pale yellow flowers, has been found to be a nectar source for butterflies. 
This is likely due to the nectary flowers of the latter being more easily reachable by the butterflies; however, the differing alkaloid character of the two plants may also play a significant role or be the primary influence. Cultivation The species typically utilized by gardeners fare well in well-drained evenly moist "humus-rich" garden soils like many in the related Helleborus and Delphinium genera, and can grow in the partial shade. Species not used in gardens tend to require more exacting conditions (e.g. Aconitum noveboracense). Most Aconitum species prefer to have their roots cool and moist, with the majority of the leaves exposed to sun, like the related Clematis. Aconitum species can be propagated by divisions of the root or by seeds, with care taken to avoid leaving pieces of the root where livestock might be poisoned. All parts of these plants should be handled while wearing protective disposable gloves. Aconitum plants are typically much longer-lived than the closely related delphinium plants, putting less energy into floral reproduction. As a result, they are not described as being "heavy feeders" (needing a higher quantity of fertilizer versus most other flowering plants)—unlike gardeners' delphiniums. As with most in the Ranunculaceae and Papaveraceae families, they dislike root disturbance. As with most in Ranunculaceae, seeds that are not planted soon after harvesting should be stored moist-packed in vermiculite to avoid dormancy and viability issues. The German seed company Jelitto offers "Gold Nugget" seeds that are advertised as utilizing a coating that enables the seed to germinate immediately, bypassing the double dormancy defect (from a typical gardener's point of view) Aconitum—and many other species in Ranunculaceae genera—use as a reproductive strategy. By contrast, seeds that are not immediately planted or moist-packed are described as perhaps taking as long as two years to germinate, being prone to very erratic germination (in terms of time required per seed), and comparatively quick seed viability loss (e.g. Adonis). These issues are typical for many species in Ranunculaceae, such as Pulsatilla (pasqueflower). Award-winning hybrids In the UK, the following have gained the Royal Horticultural Society's Award of Garden Merit: A. × cammarum 'Bicolor' A. carmichaelii 'Arendsii' A. carmichaelii 'Kelmscott' A. 'Bressingham Spire' A. 'Spark's Variety' A. 'Stainless Steel' Toxicology Monkshood and other members of the genus Aconitum contain substantial amounts of the highly toxic aconitine and related alkaloids, especially in their roots and tubers. As little as 2 mg of aconitine or 1 g of plant may cause death from respiratory paralysis or heart failure. Aconitine is a potent neurotoxin and cardiotoxin that causes persistent depolarization of neuronal sodium channels in tetrodotoxin-sensitive tissues. The influx of sodium through these channels and the delay in their repolarization increases their excitability and may lead to diarrhea, convulsions, ventricular arrhythmia, and death. Marked symptoms may appear almost immediately, usually not later than one hour, and "with large doses death is almost instantaneous". Death usually occurs within two to six hours in fatal poisoning (20 to 40 ml of tincture may prove fatal). The initial signs are gastrointestinal, including nausea, vomiting, and diarrhea. This is followed by a sensation of burning, tingling, and numbness in the mouth and face, and of burning in the abdomen. 
In severe poisonings, pronounced motor weakness occurs and cutaneous sensations of tingling and numbness spread to the limbs. Cardiovascular features include hypotension, sinus bradycardia, and ventricular arrhythmias. Other features may include sweating, dizziness, difficulty in breathing, headache, and confusion. The main causes of death are ventricular arrhythmias and asystole, or paralysis of the heart or respiratory center. The only post mortem signs are those of asphyxia. Treatment of poisoning is mainly supportive. All patients require close monitoring of blood pressure and cardiac rhythm. Gastrointestinal decontamination with activated charcoal can be used if given within one hour of ingestion. The major physiological antidote is atropine, which is used to treat bradycardia. Other drugs used for ventricular arrhythmia include lidocaine, amiodarone, bretylium, flecainide, procainamide, and mexiletine. Cardiopulmonary bypass is used if symptoms are refractory to treatment with these drugs. Successful use of charcoal hemoperfusion has been claimed in patients with severe aconitine poisoning. Mild toxicity (headache, nausea and palpitations) as well as severe toxicity may be experienced from skin contact. Paraesthesia, including tingling and feelings of coldness in the face and extremities, is common in reports of toxicity. Uses Folk medicine Aconite was described in Greek and Roman folk medicine by Theophrastus, Dioscorides, and Pliny the Elder, Folk medicinal use of Aconitum species is practiced in some parts of Slovenia. Aconitum chasmanthum is listed as critically endangered, Aconitum heterophyllum as endangered, and Aconitum violaceum as vulnerable due to overcollection for use as an herbal medicine. A producer of Yunnan Baiyao, a traditional Chinese medicine remedy, has disclosed the remedy contains aconite. As a poison The roots of A. ferox supply the Nepalese poison called bikh, bish, or nabee. It contains large quantities of the alkaloid pseudaconitine, which is a deadly poison. The root of A. luridum, of the Himalaya, is said to be as poisonous as that of A. ferox or A. napellus. Several species of Aconitum have been used as arrow poisons. The Minaro in Ladakh use A. napellus on their arrows to hunt ibex, while the Ainu in Japan used a species of Aconitum to hunt bear as did the Matagi hunters of the same region before their adoption of firearms. The Chinese also used Aconitum poisons both for hunting and for warfare. Aconitum poisons were used by the Aleuts of Alaska's Aleutian Islands for hunting whales. Usually, one man in a kayak armed with a poison-tipped lance would hunt the whale, paralyzing it with the poison and causing it to drown. Aconitum tipped arrows are also described in the Rig Veda. It has, albeit rarely, been hypothesized that Socrates was executed via an extract from an Aconitum species, such as Aconitum napellus, rather than via hemlock, Conium maculatum. Aconitum was commonly used by the ancient Greeks as an arrow poison but can be used for other forms of poisoning. It has been hypothesized that Alexander the Great and Ptolemy XIV Philopator were murdered via aconite. 
In a review of Alisha Rankin's The Poison Trials, Alison Abbott, writing in Nature, reports Rankin's proposal that a 1524 experiment was the first human trial with a control arm: the book describes a 16th-century source in which Pope Clement VII had two prisoners poisoned with aconite-laced marzipan, an antidote was tested on one prisoner, who survived, while the untreated prisoner was left to suffer a painful death. In April 2021, the president of Kyrgyzstan, Sadyr Japarov, promoted aconite root as a treatment for COVID-19. Subsequently, at least four people were admitted to hospital suffering from poisoning. Facebook had previously removed the President's posts advocating use of the substance, saying "We've removed this post as we do not allow anyone, including elected officials, to share misinformation that could lead to imminent physical harm or spread false claims about how to cure or prevent COVID-19". Taxonomy Genetic analysis suggests that Aconitum as it was delineated before the 21st century is nested within Delphinium sensu lato, which also includes Aconitella, Consolida, Delphinium staphisagria, D. requini, and D. pictum. Further genetic analysis has shown that the only species of the subgenus Aconitum (Gymnaconitum), A. gymnandrum, is sister to the group that consists of Delphinium (Delphinium), Delphinium (Delphinastrum), and Consolida plus Aconitella. To make Aconitum monophyletic, A. gymnandrum has now been reassigned to a new genus, Gymnaconitum. To make Delphinium monophyletic, the new genus Staphisagria was erected containing S. staphisagria, S. requini, and S. pictum. Selected species Aconitum anthora (yellow monkshood) Aconitum anthoroideum Aconitum bucovinense Aconitum carmichaelii (Carmichael's monkshood) Aconitum columbianum (western monkshood) Aconitum coreanum Aconitum degenii Aconitum delphinifolium (larkspurleaf monkshood) Aconitum ferox (Indian aconite) Aconitum firmum Aconitum fischeri (Fischer monkshood) Aconitum flavum (Fluff iron hammer) Aconitum hemsleyanum (climbing monkshood) Aconitum henryi (Sparks variety monkshood) Aconitum heterophyllum Aconitum infectum (Arizona monkshood) Aconitum jacquinii (synonym of A. anthora) Aconitum koreanum Aconitum kusnezoffii (Kusnezoff monkshood) Aconitum lamarckii Aconitum lasiostomum Aconitum lycoctonum (northern wolfsbane) Aconitum maximum (Kamchatka aconite) Aconitum napellus Aconitum noveboracense (northern blue monkshood) Aconitum plicatum Aconitum reclinatum (trailing white monkshood) Aconitum rogoviczii Aconitum septentrionale Aconitum soongaricum Aconitum sukaczevii Aconitum tauricum Aconitum uncinatum (southern blue monkshood) Aconitum variegatum Aconitum violaceum Aconitum vulparia (wolf's bane) In literature and popular culture Aconite and wolfsbane have been understood to be poisonous from ancient times, and are frequently represented as such in literature. In Greek mythology, the goddess Hecate is said to have invented aconite, which Athena used to transform Arachne into a spider. Medea is also said to have attempted to poison Theseus with a cup of wine poisoned with wolf's bane. In the poem Metamorphoses, Ovid tells of the herb coming from the slavering mouth of Cerberus, the three-headed dog that guarded the gates of Hades. In his Natural History, Pliny the Elder supports the legend that aconite came from the saliva of the dog Cerberus when Hercules dragged him from the underworld.
As the veterinary historian John Blaisdell has noted, symptoms of aconite poisoning in humans bear similarity to those of rabies: frothy saliva, impaired vision, vertigo, and finally, coma; thus, ancient Greeks could have believed that this poison, mythically born of Cerberus's lips, was literally the same as found inside the mouth of a rabid dog. In popular culture Early examples As a well-known poison from ancient times, aconite (including as wolfsbane, in its various spellings) often found place in historical fiction. In I, Claudius, Livia, wife of Augustus, was portrayed discussing the merits, antidotes, and use of aconite with a poisoner. It is the poison used by a murderer in the third of the Cadfael Chronicles, Monk's Hood by Ellis Peters, published in 1980 and set in 1138 in Shrewsbury, England. The kyōgen (traditional Japanese comedy) play , which is well-known and frequently taught in Japan, is centered on dried aconite root used for traditional Chinese medicine. Taken from Shasekishu, a 13th-century anthology collected by Mujū, the story describes servants who decide that the dried aconite root is really sugar, and suffer unpleasant though nonlethal symptoms after eating it. In the 16th century, Shakespeare, writing in Henry IV Part II Act 4 Scene 4, refers to aconite, alongside rash gunpowder, working as strongly as the "venom of suggestion" to break up close relationships. 20th century and later And an overdose of aconite was the method by which Rudolph Bloom, father of Leopold Bloom in James Joyce's Ulysses, died by suicide. In the 1931 classic horror film Dracula starring Bela Lugosi as Count Dracula and Helen Chandler as Mina Seward, reference is made to wolf's bane (aconitum); towards the end of the film, "Van Helsing holds up a sprig of wolf's bane". Van Helsing educates the nurse protecting Mina from Count Dracula to place sprigs of wolf's bane around Mina's neck for protection, instructing that wolf's bane, a plant that grows in Central Europe, is used by those dwelling there to protect themselves against vampires. In the 1941 film The Wolf Man starring Lon Chaney Jr. and Claude Rains, the following poem is recited several times:Even a man who is pure in heart and says his prayers by night, may become a wolf when the wolf-bane blooms and the autumn moon is bright. In the 1943 French novel Our Lady of the Flowers, the boy Culafroy eats "Napel aconite", so that the "Renaissance would take possession of the child through the mouth." Aconite and wolfsbane have also appeared in a references in modern settings. In the early 1980s, famed Spanish horror film star Paul Naschy named his production company "Aconito Films", an in-joke relating to the large number of werewolf movies he produced. In the 2003 Korean television series Dae Jang Geum, set in the 15th and 16th centuries, Choi put "wolf's bane" in the previous queen's food. In the 1980 novel Monk's-Hood, third in Ellis Peters' series The Cadfael Chronicles and set in 1138, a wealthy donator to Shrewsbury Abbey, Gervase Bonel, is murdered with stolen Monks-hood liniment prepared by the Abbey's herbalist Brother Cadfael, who needs to identify the true culprit to exonerate Bonel's stepson Edwin. In the Harry Potter series by J.K. Rowling, describing aconitum is one of three questions that Professor Snape asks Harry Potter during his first Potions class in the first novel. Snape's preparations of the drug as a treatment for lycanthropy are also an important plot point in the third novel. 
This family of poisons makes a showing in S. M. Stirling's 2000 science fiction novel, On the Oceans of Eternity, where a renegade warlord is poisoned with aconite-laced food by his own chief of internal security. In the 2000s television show Merlin, the titular character attempts to poison Arthur with aconite while under a spell. In the 2010s TV series Forever, Dr. Henry Morgan identifies the plants in the villain's greenhouse as specifically Aconitum variegatum, which he has used to create a poison to release into the ventilation system of Grand Central Terminal. In the television series Game of Thrones (2011-2019), a Tywin Lannister's commander is assassinated by a dart, identified by Tywin as "Wolf's Bane" due to its scent. In the second season of the BBC drama Shakespeare and Hatherway, episode 9, a tennis player is poisoned through the skin of his palm by aconite smeared on the handle of his racquet. In the 2024 Netflix thriller Carry-On, the Traveller (played by Jason Bateman) murders some of his targets by poisoning them with aconitum. In mysticism Wolf's bane is used as an analogy for the power of divine communion in Liber 65 1:13–16, one of Aleister Crowley's Holy Books of Thelema. Wolf's bane is mentioned in one verse of Lady Gwen Thompson's 1974 poem "Rede of the Wiccae", a long version of the Wiccan Rede: "Widdershins go when Moon doth wane, And the werewolves howl by the dread wolfsbane." Gallery See also Rufus T. Bush, industrial tycoon who died of accidental aconite poisoning References External links James Grout: Aconite Poisoning, part of the Encyclopædia Romana Photographs of Aconite plants Jepson Eflora entry for Aconitum Neurotoxins Plant toxins Ranunculaceae genera
Aconitum
[ "Chemistry" ]
5,273
[ "Neurochemistry", "Neurotoxins", "Chemical ecology", "Plant toxins" ]
59,990
https://en.wikipedia.org/wiki/Aconitine
Aconitine is an alkaloid toxin produced by various plant species belonging to the genus Aconitum (family Ranunculaceae), commonly known by the names wolfsbane and monkshood. Aconitine is notorious for its toxic properties. Structure and reactivity Biologically active isolates from Aconitum and Delphinium plants are classified as norditerpenoid alkaloids, which are further subdivided based on the presence or absence of the C18 carbon. Aconitine is classified as a C19-norditerpenoid because this C18 carbon is present. It is barely soluble in water, but very soluble in organic solvents such as chloroform or diethyl ether. Aconitine is also soluble in mixtures of alcohol and water if the concentration of alcohol is high enough. Like many other alkaloids, the basic nitrogen atom in one of the six-membered rings of aconitine can easily form salts and ions, giving it affinity for both polar and lipophilic structures (such as cell membranes and receptors) and making it possible for the molecule to pass the blood–brain barrier. The acetoxyl group at the C8 position can readily be replaced by a methoxy group, by heating aconitine in methanol, to produce an 8-deacetyl-8-O-methyl derivative. If aconitine is heated in its dry state, it undergoes pyrolysis to form pyroaconitine ((1α,3α,6α,14α,16β)-20-ethyl-3,13-dihydroxy-1,6,16-trimethoxy-4-(methoxymethyl)-15-oxoaconitan-14-yl benzoate) with the chemical formula C32H43NO9. Mechanism of action Aconitine can interact with the voltage-dependent sodium-ion channels, which are proteins in the cell membranes of excitable tissues, such as cardiac and skeletal muscles and neurons. These proteins are highly selective for sodium ions. They open very quickly to depolarize the cell membrane potential, causing the upstroke of an action potential. Normally, the sodium channels close very rapidly, but the depolarization of the membrane potential causes the opening (activation) of potassium channels and potassium efflux, which results in repolarization of the membrane potential. Aconitine binds to the channel at the neurotoxin binding site 2 on the alpha subunit (the same site bound by batrachotoxin, veratridine, and grayanotoxin). This binding results in a sodium-ion channel that stays open longer. Aconitine suppresses the conformational change in the sodium-ion channel from the active state to the inactive state. The membrane stays depolarized due to the constant sodium influx (which is 10–1000-fold greater than the potassium efflux). As a result, the membrane cannot be repolarized. The binding of aconitine also causes the channel to change conformation from the inactive state to the active state at a more negative voltage. In neurons, aconitine increases the permeability of the membrane for sodium ions, resulting in a huge sodium influx in the axon terminal. As a result, the membrane depolarizes rapidly. Due to the strong depolarization, the permeability of the membrane for potassium ions increases rapidly, resulting in a potassium efflux that releases positive charge from the cell. The permeability for calcium ions, as well as that for potassium ions, also increases as a result of the depolarization, and a calcium influx takes place. The increase of the calcium concentration in the cell stimulates the release of the neurotransmitter acetylcholine into the synaptic cleft. Acetylcholine binds to acetylcholine receptors at the postsynaptic membrane to open the sodium channels there, generating a new action potential.
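The qualitative effect described above—a sodium conductance that fails to inactivate keeps the membrane depolarized—can be illustrated with the chord-conductance (parallel-conductance) approximation for the steady-state membrane potential. The Python sketch below is a toy illustration, not a model of real channel kinetics: the conductance values and the "aconitine-like" increase in persistent sodium conductance are invented for this example, and only the reversal potentials are typical textbook values.

```python
# Steady-state membrane potential from the chord-conductance equation:
#   V = (gK*EK + gNa*ENa + gL*EL) / (gK + gNa + gL)
# Reversal potentials (mV) are typical textbook values; conductances are
# arbitrary illustrative numbers (relative units), not measured data.
E_K, E_Na, E_L = -90.0, 60.0, -65.0

def steady_state_potential(g_K, g_Na, g_L=0.1):
    return (g_K * E_K + g_Na * E_Na + g_L * E_L) / (g_K + g_Na + g_L)

# Resting membrane: potassium conductance dominates, persistent Na conductance is tiny.
v_rest = steady_state_potential(g_K=1.0, g_Na=0.02)

# "Aconitine-like" condition: a fraction of Na channels stays open (no inactivation),
# so the persistent Na conductance is much larger and the membrane sits depolarized.
v_aconitine = steady_state_potential(g_K=1.0, g_Na=0.5)

print(f"resting potential       ~ {v_rest:.1f} mV")
print(f"with persistent Na leak ~ {v_aconitine:.1f} mV (depolarized, cannot repolarize)")
```

With these illustrative numbers the membrane settles near −85 mV at rest but near −40 mV once a persistent sodium leak is added, mirroring the sustained depolarization described above.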
Research with mouse nerve–hemidiaphragm muscle preparations indicates that at low concentrations (<0.1 μM) aconitine increases the electrically evoked acetylcholine release, causing an increase in muscle tension. Action potentials are generated more often at this concentration. At higher concentrations (0.3–3 μM), aconitine decreases the electrically evoked acetylcholine release, resulting in a decrease in muscle tension. At these higher concentrations, the sodium-ion channels are constantly activated and transmission of action potentials is suppressed, leading to non-excitable target cells or paralysis. Biosynthesis and total synthesis of related alkaloids Aconitine is biosynthesized by the monkshood plant via the terpenoid biosynthesis pathway (MEP chloroplast pathway). Approximately 700 naturally occurring C19-diterpenoid alkaloids have been isolated and identified, but the biosynthesis of only a few of these alkaloids is well understood. Likewise, only a few alkaloids of the aconitine family have been synthesized in the laboratory. In particular, despite over one hundred years having elapsed since its isolation, the prototypical member of its family of norditerpenoid alkaloids, aconitine itself, represents a rare example of a well-known natural product that has yet to succumb to efforts towards its total synthesis. The challenge that aconitine poses to synthetic organic chemists is due to both the intricate interlocking hexacyclic ring system that makes up its core and the elaborate collection of oxygenated functional groups at its periphery. A handful of simpler members of the aconitine alkaloids, however, have been prepared synthetically. In 1971, the Wiesner group achieved the total synthesis of talatisamine (a C19-norditerpenoid). In the subsequent years, they also achieved the total syntheses of other C19-norditerpenoids, such as chasmanine and 13-deoxydelphonine. The total synthesis of napelline (Scheme a) begins with aldehyde 100. In a 7-step process, the A-ring of napelline is formed (104). It takes another 10 steps to form the lactone ring in the pentacyclic structure of napelline (106). An additional 9 steps create the enone-aldehyde 107. Heating in methanol with potassium hydroxide causes an aldol condensation to close the sixth and final ring in napelline (14). Oxidation then gives rise to diketone 108, which was converted to (±)-napelline (14) in 10 steps. A similar process is demonstrated in Wiesner's synthesis of 13-desoxydelphinone (Scheme c). The first step of this synthesis is the generation of a conjugated dienone 112 from 111 in 4 steps. This is followed by the addition of a benzyl vinyl ether to produce 113. In 11 steps, this compound is converted to ketal 114. The addition of heat, DMSO and o-xylene rearranges this ketol (115), and after 5 more steps (±)-13-desoxydelphinone (15) is formed. Lastly, talatisamine (Scheme d) is synthesized from diene 116 and nitrile 117. The first step is to form tricycle 118 in 16 steps. After another 6 steps, this compound is converted to enone 120. Subsequently, an allene is added to produce photoadduct 121. This adduct is cleaved, and rearrangement gives rise to compound 122. In 7 steps, this compound forms 123, which is then rearranged, in a similar manner to compound 114, to form the aconitine-like skeleton in 124. A racemic relay synthesis is completed to produce talatisamine (13). More recently, the laboratory of the late David Y. Gin completed the total syntheses of the aconitine alkaloids nominine and neofinaconitine. 
Metabolism Aconitine is metabolized by cytochrome P450 isozymes (CYPs). In 2011, researchers in China investigated in depth the CYPs involved in aconitine metabolism in human liver microsomes. It has been estimated that more than 90 percent of the metabolism of currently available human drugs can be attributed to eight main enzymes (CYP 1A2, 2C9, 2C8, 2C19, 2D6, 2E1, 3A4, 3A5). The researchers used recombinants of these eight different CYPs and incubated them with aconitine. The presence of NADPH was needed to initiate the metabolic pathway. Six CYP-mediated metabolites (M1–M6) were found by liquid chromatography; these six metabolites were characterized by mass spectrometry. The six metabolites and the involved enzymes are summarized in the following table: Selective inhibitors were used to determine the CYPs involved in aconitine metabolism. The results indicate that aconitine was mainly metabolized by CYP3A4, 3A5 and 2D6. CYP2C8 and 2C9 played a minor role in aconitine metabolism, whereas CYP1A2, 2E1 and 2C19 did not produce any aconitine metabolites at all. The proposed metabolic pathways of aconitine in human liver microsomes and the CYPs involved are summarized in the table above. Uses Aconitine was previously used as an antipyretic and analgesic and still has some limited application in herbal medicine, although the narrow therapeutic index makes calculating appropriate dosage difficult. Aconitine is also present in Yunnan Baiyao, a proprietary traditional Chinese medicine. Toxicity Consuming as little as 2 milligrams of pure aconitine or 1 gram of the plant itself may cause death by paralyzing respiratory or heart functions. Toxicity may occur through the skin; even touching the flowers can numb the fingertips. The toxic effects of aconitine have been tested in a variety of animals, including mammals (dog, cat, guinea pig, mouse, rat and rabbit), frogs and pigeons. Depending on the route of exposure, the observed toxic effects were a local anesthetic effect, diarrhea, convulsions, arrhythmias or death. According to a review of different reports of aconite poisoning in humans, the following clinical features were observed: Neurological: paresthesia and numbness of face, perioral area and four limbs; muscle weakness in four limbs Cardiovascular: hypotension, palpitations, chest pain, bradycardia, sinus tachycardia, ventricular ectopics and other arrhythmias, ventricular arrhythmias, and junctional rhythm Gastrointestinal: nausea, vomiting, abdominal pain, and diarrhea Others: dizziness, hyperventilation, sweating, difficulty breathing, confusion, headache, and lacrimation Progression of symptoms: the first symptoms of aconitine poisoning appear approximately 20 minutes to 2 hours after oral intake and include paresthesia, sweating and nausea. This leads to severe vomiting, colicky diarrhea, intense pain and then paralysis of the skeletal muscles. Following the onset of life-threatening arrhythmia, including ventricular tachycardia and ventricular fibrillation, death finally occurs as a result of respiratory paralysis or cardiac arrest. LD50 values for mice are 1 mg/kg orally, 0.100 mg/kg intravenously, 0.270 mg/kg intraperitoneally and 0.270 mg/kg subcutaneously. The lowest published lethal dose (LDLo) for mice is 1 mg/kg orally and 0.100 mg/kg intraperitoneally. The lowest published toxic dose (TDLo) for mice is 0.0549 mg/kg subcutaneously. The LD50 value for rats is 0.064 mg/kg intravenously. The LDLo for rats is 0.040 mg/kg intravenously and 0.250 mg/kg intraperitoneally. 
The TDLo for rats is 0.040 mg/kg parenterally. For an overview of more test animal results (LD50, LDLo and TDLo) see the following table. Note that LD50 means lethal dose, 50 percent kill; LDLo means lowest published lethal dose; TDLo means lowest published toxic dose. For humans, the lowest published oral lethal dose of 28 μg/kg was reported in 1969. Diagnosis and treatment For the analysis of the Aconitum alkaloids in biological specimens such as blood, serum and urine, several GC-MS methods have been described. These employ a variety of extraction procedures followed by derivatisation to their trimethylsilyl derivatives. New sensitive HPLC-MS methods have been developed as well, usually preceded by SPE purification of the sample. The antiarrhythmic drug lidocaine has been reported to be an effective treatment for aconitine poisoning in a patient. Considering that aconitine acts as an agonist of the sodium channel receptor, antiarrhythmic agents which block the sodium channel (Vaughan-Williams' classification I) might be the first choice for the therapy of aconitine-induced arrhythmias. Animal experiments have shown that mortality from aconitine poisoning is lowered by tetrodotoxin. The toxic effects of aconitine were attenuated by tetrodotoxin, probably due to their mutual antagonistic effect on excitable membranes. Also paeoniflorin seems to have a detoxifying effect on the acute toxicity of aconitine in test animals. This may result from alterations of the pharmacokinetic behavior of aconitine in the animals due to the pharmacokinetic interaction between aconitine and paeoniflorin. In addition, in emergencies, one can wash the stomach using either tannic acid or powdered charcoal. Heart stimulants such as strong coffee or caffeine may also help until professional help is available. Famous poisonings During the Indian Rebellion of 1857, a British detachment was the target of attempted poisoning with aconitine by the Indian regimental cooks. The plot was thwarted by John Nicholson, who detected it and interrupted the British officers just as they were about to consume the poisoned meal. The chefs refused to taste their own preparation, whereupon it was force-fed to a monkey who "expired on the spot". The cooks were hanged. Aconitine was the poison used by George Henry Lamson in 1881 to murder his brother-in-law in order to secure an inheritance. Lamson had learned about aconitine as a medical student from professor Robert Christison, who had taught that it was undetectable, but forensic science had improved since Lamson's student days. Rufus T. Bush, American industrialist and yachtsman, died on September 15, 1890, after accidentally taking a fatal dose of aconite. In 1953, aconitine was used by a Soviet biochemist and poison developer, Grigory Mairanovsky, in experiments with prisoners in the secret NKVD laboratory in Moscow. He admitted killing around 10 people using the poison. In 2004, Canadian actor Andre Noble died from aconitine poisoning. He accidentally ate some monkshood while he was on a hike with his aunt in Newfoundland. In 2009, Lakhvir Singh of Feltham, west London, used aconitine to poison the food of her ex-lover Lakhvinder Cheema (who died as a result of the poisoning) and his current fiancée Gurjeet Choongh. Singh received a life sentence with a 23-year minimum for the murder on February 10, 2010. In 2022, twelve diners at a restaurant in York Region became acutely ill following a meal. 
Four of them were admitted to the intensive care unit after the suspected poisoning. In popular culture Aconitine was a favorite poison in the ancient world. The poet Ovid, referring to the proverbial dislike of stepmothers for their step-children, writes: Lurida terribiles miscent aconita novercae. Fearsome stepmothers mix lurid aconites. Aconitine was also made famous by its use in Oscar Wilde's 1891 story "Lord Arthur Savile's Crime". Aconite also plays a prominent role in James Joyce's Ulysses, in which the father of protagonist Leopold Bloom used pastilles of the chemical to commit suicide. Aconitine poisoning plays a key role in the murder mystery Breakdown by Jonathan Kellerman (2016). In Twin Peaks season 3 part 13, aconitine is suggested as a means to poison the main character. Monk's Hood is the name of the third Cadfael novel written in 1980 by Ellis Peters. The novel was made into an episode of the television series Cadfael starring Derek Jacobi. In the third season of the Netflix series You, two of the main characters poison each other with aconitine. One survives (due to a lower dose and an antidote), and the other is killed. Hannah McKay (Yvonne Strahovski), a serial killer in the Showtime series Dexter, uses aconite on at least three occasions to poison her victims. In season 2, episode 16 of the series Person of Interest, a syringe of aconitine is stuck into the character Shaw (Sarah Shahi) and nearly injected, which would have killed her, before she is rescued by Reese (Jim Caviezel). In a 2017 episode of The Doctor Blake Mysteries, fight manager Gus Jansons (Steve Adams) murdered his boxer, Mickey Ellis (Trey Coward), during a match by mixing aconitine into petroleum jelly and applying it to a cut over the boxer's eye. He feared being blackmailed over a murder he helped cover up. He had made the poison from wolfsbane he had seen in a local garden. Aconitine poisoning is used by Villanelle to kill the Ukrainian gangster Rinat Yevtukh in Killing Eve: No Tomorrow by Luke Jennings (2018). See also Pseudaconitine References External links Diterpene alkaloids Ion channel toxins Non-protein ion channel toxins Neurotoxins Acetate esters Benzoate esters Secondary alcohols Tertiary alcohols Nitrogen heterocycles Sodium channel openers Plant toxins Heterocyclic compounds with 6 rings Methoxy compounds
Aconitine
[ "Chemistry" ]
3,916
[ "Neurochemistry", "Neurotoxins", "Chemical ecology", "Plant toxins" ]
60,012
https://en.wikipedia.org/wiki/Formal%20power%20series
In mathematics, a formal series is an infinite sum that is considered independently from any notion of convergence, and can be manipulated with the usual algebraic operations on series (addition, subtraction, multiplication, division, partial sums, etc.). A formal power series is a special kind of formal series, of the form where the called coefficients, are numbers or, more generally, elements of some ring, and the are formal powers of the symbol that is called an indeterminate or, commonly, a variable. Hence, power series can be viewed as a generalization of polynomials where the number of terms is allowed to be infinite, and differ from usual power series by the absence of convergence requirements, which implies that a power series may not represent a function of its variable. Formal power series are in one-to-one correspondence with their sequences of coefficients, but the two concepts must not be confused, since the operations that can be applied are different. A formal power series with coefficients in a ring is called a formal power series over The formal power series over a ring form a ring, commonly denoted by (It can be seen as the -adic completion of the polynomial ring in the same way as the -adic integers are the -adic completion of the ring of the integers.) Formal power series in several indeterminates are defined similarly by replacing the powers of a single indeterminate by monomials in several indeterminates. Formal power series are widely used in combinatorics for representing sequences of integers as generating functions. In this context, a recurrence relation between the elements of a sequence may often be interpreted as a differential equation that the generating function satisfies. This allows using methods of complex analysis for combinatorial problems (see analytic combinatorics). Introduction A formal power series can be loosely thought of as an object that is like a polynomial, but with infinitely many terms. Alternatively, for those familiar with power series (or Taylor series), one may think of a formal power series as a power series in which we ignore questions of convergence by not assuming that the variable X denotes any numerical value (not even an unknown value). For example, consider the series If we studied this as a power series, its properties would include, for example, that its radius of convergence is 1 by the Cauchy–Hadamard theorem. However, as a formal power series, we may ignore this completely; all that is relevant is the sequence of coefficients [1, −3, 5, −7, 9, −11, ...]. In other words, a formal power series is an object that just records a sequence of coefficients. It is perfectly acceptable to consider a formal power series with the factorials [1, 1, 2, 6, 24, 120, 720, 5040, ... ] as coefficients, even though the corresponding power series diverges for any nonzero value of X. Algebra on formal power series is carried out by simply pretending that the series are polynomials. For example, if then we add A and B term by term: We can multiply formal power series, again just by treating them as polynomials (see in particular Cauchy product): Notice that each coefficient in the product AB only depends on a finite number of coefficients of A and B. For example, the X5 term is given by For this reason, one may multiply formal power series without worrying about the usual questions of absolute, conditional and uniform convergence which arise in dealing with power series in the setting of analysis. 
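To make this bookkeeping concrete, here is a minimal Python sketch of the term-by-term addition and Cauchy-product multiplication just described. The truncation order N and all names are illustrative assumptions rather than anything fixed by the article: a genuine formal power series has infinitely many coefficients, but any single coefficient of a sum or product depends on only finitely many of them, which is exactly what the truncated lists exploit.

```python
# Minimal sketch (assumed conventions): a formal power series truncated to its
# first N coefficients is stored as a plain list [a0, a1, ..., a_{N-1}].
N = 8  # assumed truncation order: we only track coefficients of X^0 .. X^7

def add(A, B):
    """Term-by-term addition of two coefficient lists of equal length."""
    return [a + b for a, b in zip(A, B)]

def mul(A, B):
    """Cauchy product: the X^n coefficient is sum_{k=0..n} A[k] * B[n-k]."""
    return [sum(A[k] * B[n - k] for k in range(n + 1)) for n in range(len(A))]

# A has the coefficients [1, -3, 5, -7, ...] mentioned in the text;
# B is the geometric series 1 + X + X^2 + ...
A = [(-1) ** n * (2 * n + 1) for n in range(N)]
B = [1] * N
print(add(A, B))  # [2, -2, 6, -6, 10, -10, 14, -14]
print(mul(A, B))  # [1, -2, 3, -4, 5, -6, 7, -8]: each entry uses finitely many inputs
```

No question of convergence arises anywhere in this computation; the lists are manipulated exactly as polynomial coefficient arrays would be.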
Once we have defined multiplication for formal power series, we can define multiplicative inverses as follows. The multiplicative inverse of a formal power series A is a formal power series C such that AC = 1, provided that such a formal power series exists. It turns out that if A has a multiplicative inverse, it is unique, and we denote it by A−1. Now we can define division of formal power series by defining B/A to be the product BA−1, provided that the inverse of A exists. For example, one can use the definition of multiplication above to verify the familiar formula An important operation on formal power series is coefficient extraction. In its most basic form, the coefficient extraction operator applied to a formal power series in one variable extracts the coefficient of the th power of the variable, so that and . Other examples include Similarly, many other operations that are carried out on polynomials can be extended to the formal power series setting, as explained below. The ring of formal power series If one considers the set of all formal power series in X with coefficients in a commutative ring R, the elements of this set collectively constitute another ring which is written and called the ring of formal power series in the variable X over R. Definition of the formal power series ring One can characterize abstractly as the completion of the polynomial ring equipped with a particular metric. This automatically gives the structure of a topological ring (and even of a complete metric space). But the general construction of a completion of a metric space is more involved than what is needed here, and would make formal power series seem more complicated than they are. It is possible to describe more explicitly, and define the ring structure and topological structure separately, as follows. Ring structure As a set, can be constructed as the set of all infinite sequences of elements of , indexed by the natural numbers (taken to include 0). Designating a sequence whose term at index is by , one defines addition of two such sequences by and multiplication by This type of product is called the Cauchy product of the two sequences of coefficients, and is a sort of discrete convolution. With these operations, becomes a commutative ring with zero element and multiplicative identity . The product is in fact the same one used to define the product of polynomials in one indeterminate, which suggests using a similar notation. One embeds into by sending any (constant) to the sequence and designates the sequence by ; then using the above definitions every sequence with only finitely many nonzero terms can be expressed in terms of these special elements as these are precisely the polynomials in . Given this, it is quite natural and convenient to designate a general sequence by the formal expression , even though the latter is not an expression formed by the operations of addition and multiplication defined above (from which only finite sums can be constructed). This notational convention allows reformulation of the above definitions as and which is quite convenient, but one must be aware of the distinction between formal summation (a mere convention) and actual addition. Topological structure Having stipulated conventionally that one would like to interpret the right hand side as a well-defined infinite summation. To that end, a notion of convergence in is defined and a topology on is constructed. There are several equivalent ways to define the desired topology. 
We may give the product topology, where each copy of is given the discrete topology. We may give the I-adic topology, where is the ideal generated by , which consists of all sequences whose first term is zero. The desired topology could also be derived from the following metric. The distance between distinct sequences is defined to be where is the smallest natural number such that ; the distance between two equal sequences is of course zero. Informally, two sequences and become closer and closer if and only if more and more of their terms agree exactly. Formally, the sequence of partial sums of some infinite summation converges if for every fixed power of the coefficient stabilizes: there is a point beyond which all further partial sums have the same coefficient. This is clearly the case for the right hand side of (), regardless of the values , since inclusion of the term for gives the last (and in fact only) change to the coefficient of . It is also obvious that the limit of the sequence of partial sums is equal to the left hand side. This topological structure, together with the ring operations described above, form a topological ring. This is called the ring of formal power series over and is denoted by . The topology has the useful property that an infinite summation converges if and only if the sequence of its terms converges to 0, which just means that any fixed power of occurs in only finitely many terms. The topological structure allows much more flexible usage of infinite summations. For instance the rule for multiplication can be restated simply as since only finitely many terms on the right affect any fixed . Infinite products are also defined by the topological structure; it can be seen that an infinite product converges if and only if the sequence of its factors converges to 1 (in which case the product is nonzero) or infinitely many factors have no constant term (in which case the product is zero). Alternative topologies The above topology is the finest topology for which always converges as a summation to the formal power series designated by the same expression, and it often suffices to give a meaning to infinite sums and products, or other kinds of limits that one wishes to use to designate particular formal power series. It can however happen occasionally that one wishes to use a coarser topology, so that certain expressions become convergent that would otherwise diverge. This applies in particular when the base ring already comes with a topology other than the discrete one, for instance if it is also a ring of formal power series. In the ring of formal power series , the topology of above construction only relates to the indeterminate , since the topology that was put on has been replaced by the discrete topology when defining the topology of the whole ring. So converges (and its sum can be written as ); however would be considered to be divergent, since every term affects the coefficient of . This asymmetry disappears if the power series ring in is given the product topology where each copy of is given its topology as a ring of formal power series rather than the discrete topology. With this topology, a sequence of elements of converges if the coefficient of each power of converges to a formal power series in , a weaker condition than stabilizing entirely. For instance, with this topology, in the second example given above, the coefficient of converges to , so the whole summation converges to . 
This way of defining the topology is in fact the standard one for repeated constructions of rings of formal power series, and gives the same topology as one would get by taking formal power series in all indeterminates at once. In the above example that would mean constructing and here a sequence converges if and only if the coefficient of every monomial stabilizes. This topology, which is also the -adic topology, where is the ideal generated by and , still enjoys the property that a summation converges if and only if its terms tend to 0. The same principle could be used to make other divergent limits converge. For instance in the limit does not exist, so in particular it does not converge to This is because for the coefficient of does not stabilize as . It does however converge in the usual topology of , and in fact to the coefficient of . Therefore, if one would give the product topology of where the topology of is the usual topology rather than the discrete one, then the above limit would converge to . This more permissive approach is not however the standard when considering formal power series, as it would lead to convergence considerations that are as subtle as they are in analysis, while the philosophy of formal power series is on the contrary to make convergence questions as trivial as they can possibly be. With this topology it would not be the case that a summation converges if and only if its terms tend to 0. Universal property The ring may be characterized by the following universal property. If is a commutative associative algebra over , if is an ideal of such that the -adic topology on is complete, and if is an element of , then there is a unique with the following properties: is an -algebra homomorphism is continuous . Operations on formal power series One can perform algebraic operations on power series to generate new power series. Besides the ring structure operations defined above, we have the following. Power series raised to powers For any natural number , the th power of a formal power series is defined recursively by If and are invertible in the ring of coefficients, one can prove where In the case of formal power series with complex coefficients, the complex powers are well defined for series with constant term equal to . In this case, can be defined either by composition with the binomial series , or by composition with the exponential and the logarithmic series, or as the solution of the differential equation (in terms of series) with constant term 1; the three definitions are equivalent. The rules of calculus and easily follow. Multiplicative inverse The series is invertible in if and only if its constant coefficient is invertible in . This condition is necessary, for the following reason: if we suppose that has an inverse then the constant term of is the constant term of the identity series, i.e. it is 1. This condition is also sufficient; we may compute the coefficients of the inverse series via the explicit recursive formula An important special case is that the geometric series formula is valid in : If is a field, then a series is invertible if and only if the constant term is non-zero, i.e. if and only if the series is not divisible by . This means that is a discrete valuation ring with uniformizing parameter . 
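The recursive formula for the multiplicative inverse can be written out directly. The following Python sketch is an illustration under assumed conventions (a truncation order, rational coefficients via fractions.Fraction); it computes the first coefficients of the inverse when the constant term is invertible and reproduces the geometric series formula for the inverse of 1 − X.

```python
from fractions import Fraction

def inverse(A, N):
    """First N coefficients of the multiplicative inverse of A = sum A[k] X^k,
    assuming the constant term A[0] is invertible (here: a nonzero rational).
    Recursion: c_0 = 1/a_0 and c_n = -(1/a_0) * sum_{k=1..n} a_k * c_{n-k}."""
    a = [Fraction(x) for x in A] + [Fraction(0)] * N  # pad with zero coefficients
    C = [1 / a[0]]
    for n in range(1, N):
        C.append(-sum(a[k] * C[n - k] for k in range(1, n + 1)) / a[0])
    return C

print(inverse([1, -1], 8))     # inverse of 1 - X: [1, 1, 1, ...], the geometric series
print(inverse([1, -2, 1], 8))  # inverse of (1 - X)^2: [1, 2, 3, 4, ...]
```

A series whose constant term is zero is rejected by this recursion (division by zero), mirroring the invertibility criterion stated above.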
Division The computation of a quotient assuming the denominator is invertible (that is, is invertible in the ring of scalars), can be performed as a product and the inverse of , or directly equating the coefficients in : Extracting coefficients The coefficient extraction operator applied to a formal power series in X is written and extracts the coefficient of Xm, so that Composition Given two formal power series such that one may form the composition where the coefficients cn are determined by "expanding out" the powers of f(X): Here the sum is extended over all (k, j) with and with Since one must have and for every This implies that the above sum is finite and that the coefficient is the coefficient of in the polynomial , where and are the polynomials obtained by truncating the series at that is, by removing all terms involving a power of higher than A more explicit description of these coefficients is provided by Faà di Bruno's formula, at least in the case where the coefficient ring is a field of characteristic 0. Composition is only valid when has no constant term, so that each depends on only a finite number of coefficients of and . In other words, the series for converges in the topology of . Example Assume that the ring has characteristic 0 and the nonzero integers are invertible in . If one denotes by the formal power series then the equality makes perfect sense as a formal power series, since the constant coefficient of is zero. Composition inverse Whenever a formal series has f0 = 0 and f1 being an invertible element of R, there exists a series that is the composition inverse of , meaning that composing with gives the series representing the identity function . The coefficients of may be found recursively by using the above formula for the coefficients of a composition, equating them with those of the composition identity X (that is 1 at degree 1 and 0 at every degree greater than 1). In the case when the coefficient ring is a field of characteristic 0, the Lagrange inversion formula (discussed below) provides a powerful tool to compute the coefficients of g, as well as the coefficients of the (multiplicative) powers of g. Formal differentiation Given a formal power series we define its formal derivative, denoted Df or f ′, by The symbol D is called the formal differentiation operator. This definition simply mimics term-by-term differentiation of a polynomial. This operation is R-linear: for any a, b in R and any f, g in Additionally, the formal derivative has many of the properties of the usual derivative of calculus. For example, the product rule is valid: and the chain rule works as well: whenever the appropriate compositions of series are defined (see above under composition of series). Thus, in these respects formal power series behave like Taylor series. Indeed, for the f defined above, we find that where Dk denotes the kth formal derivative (that is, the result of formally differentiating k times). Formal antidifferentiation If is a ring with characteristic zero and the nonzero integers are invertible in , then given a formal power series we define its formal antiderivative or formal indefinite integral by for any constant . This operation is R-linear: for any a, b in R and any f, g in Additionally, the formal antiderivative has many of the properties of the usual antiderivative of calculus. For example, the formal antiderivative is the right inverse of the formal derivative: for any . 
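As a sketch of the composition and formal differentiation operations just described (the exponential series, the inner series 2X, and the truncation order are my own illustrative assumptions), the following Python code composes two truncated series, requiring the inner series to have zero constant term, and applies the term-by-term formal derivative.

```python
from fractions import Fraction
from math import factorial

N = 8  # assumed truncation order

def mul(A, B):
    return [sum(A[k] * B[n - k] for k in range(n + 1)) for n in range(N)]

def compose(F, G):
    """F(G(X)) truncated at order N; requires G[0] == 0 so that each output
    coefficient depends on only finitely many coefficients of F and G."""
    assert G[0] == 0
    result = [Fraction(0)] * N
    power = [Fraction(1)] + [Fraction(0)] * (N - 1)  # current power G^k, starting at G^0 = 1
    for k in range(N):
        result = [r + F[k] * p for r, p in zip(result, power)]
        power = mul(power, G)
    return result

def derivative(F):
    """Formal derivative: D(sum a_n X^n) = sum n * a_n * X^(n-1)."""
    return [n * F[n] for n in range(1, N)] + [Fraction(0)]

exp = [Fraction(1, factorial(n)) for n in range(N)]            # 1 + X + X^2/2! + ...
double = [Fraction(0), Fraction(2)] + [Fraction(0)] * (N - 2)  # the series 2X
print(compose(exp, double))  # coefficients 2^n / n!, i.e. the series for exp(2X)
print(derivative(exp))       # first N-1 entries equal exp again: D(exp) = exp
```

The assertion on the constant term of G is the computational face of the condition, stated above, under which composition of formal power series is valid.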
Properties Algebraic properties of the formal power series ring is an associative algebra over which contains the ring of polynomials over ; the polynomials correspond to the sequences which end in zeros. The Jacobson radical of is the ideal generated by and the Jacobson radical of ; this is implied by the element invertibility criterion discussed above. The maximal ideals of all arise from those in in the following manner: an ideal of is maximal if and only if is a maximal ideal of and is generated as an ideal by and . Several algebraic properties of are inherited by : if is a local ring, then so is (with the set of non units the unique maximal ideal), if is Noetherian, then so is (a version of the Hilbert basis theorem), if is an integral domain, then so is , and if is a field, then is a discrete valuation ring. Topological properties of the formal power series ring The metric space is complete. The ring is compact if and only if R is finite. This follows from Tychonoff's theorem and the characterisation of the topology on as a product topology. Weierstrass preparation The ring of formal power series with coefficients in a complete local ring satisfies the Weierstrass preparation theorem. Applications Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed form expression for the Fibonacci numbers, see the article on Examples of generating functions. One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting. Consider for instance the following elements of : Then one can show that The last one being valid in the ring For K a field, the ring is often used as the "standard, most general" complete local ring over K in algebra. Interpreting formal power series as functions In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let and suppose is a commutative associative algebra over , is an ideal in such that the I-adic topology on is complete, and is an element of . Define: This series is guaranteed to converge in given the above assumptions on . Furthermore, we have and Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved. Since the topology on is the -adic topology and is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal ): , and are all well defined for any formal power series With this formalism, we can give an explicit formula for the multiplicative inverse of a power series whose constant coefficient is invertible in : If the formal power series with is given implicitly by the equation where is a known power series with , then the coefficients of can be explicitly computed using the Lagrange inversion formula. Generalizations Formal Laurent series The formal Laurent series over a ring are defined in a similar way to a formal power series, except that we also allow finitely many terms of negative degree. That is, they are the series that can be written as for some integer , so that there are only finitely many negative with . (This is different from the classical Laurent series of complex analysis.) 
For a non-zero formal Laurent series, the minimal integer such that is called the order of and is denoted (The order ord(0) of the zero series is .) Multiplication of such series can be defined. Indeed, similarly to the definition for formal power series, the coefficient of of two series with respective sequences of coefficients and is This sum has only finitely many nonzero terms because of the assumed vanishing of coefficients at sufficiently negative indices. The formal Laurent series form the ring of formal Laurent series over , denoted by . It is equal to the localization of the ring of formal power series with respect to the set of positive powers of . If is a field, then is in fact a field, which may alternatively be obtained as the field of fractions of the integral domain . As with , the ring of formal Laurent series may be endowed with the structure of a topological ring by introducing the metric (In particular, implies that One may define formal differentiation for formal Laurent series in the natural (term-by-term) way. Precisely, the formal derivative of the formal Laurent series above is which is again a formal Laurent series. If is a non-constant formal Laurent series and with coefficients in a field of characteristic 0, then one has However, in general this is not the case since the factor for the lowest order term could be equal to 0 in . Formal residue Assume that is a field of characteristic 0. Then the map defined above is a -derivation that satisfies The latter shows that the coefficient of in is of particular interest; it is called formal residue of and denoted . The map is -linear, and by the above observation one has an exact sequence Some rules of calculus. As a quite direct consequence of the above definition, and of the rules of formal derivation, one has, for any if Property (i) is part of the exact sequence above. Property (ii) follows from (i) as applied to . Property (iii): any can be written in the form , with and : then implies is invertible in whence Property (iv): Since we can write with . Consequently, and (iv) follows from (i) and (iii). Property (v) is clear from the definition. The Lagrange inversion formula As mentioned above, any formal series with f0 = 0 and f1 ≠ 0 has a composition inverse The following relation between the coefficients of gn and f−k holds (""): In particular, for n = 1 and all k ≥ 1, Since the proof of the Lagrange inversion formula is a very short computation, it is worth reporting one residue-based proof here (a number of different proofs exist, using, e.g., Cauchy's coefficient formula for holomorphic functions, tree-counting arguments, or induction). Noting , we can apply the rules of calculus above, crucially Rule (iv) substituting , to get: Generalizations. One may observe that the above computation can be repeated plainly in more general settings than K((X)): a generalization of the Lagrange inversion formula is already available working in the -modules where α is a complex exponent. As a consequence, if f and g are as above, with , we can relate the complex powers of f / X and g / X: precisely, if α and β are non-zero complex numbers with negative integer sum, then For instance, this way one finds the power series for complex powers of the Lambert function. Power series in several variables Formal power series in any number of indeterminates (even infinitely many) can be defined. 
If I is an index set and XI is the set of indeterminates Xi for i∈I, then a monomial Xα is any finite product of elements of XI (repetitions allowed); a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted . The set of all such formal power series is denoted and it is given a ring structure by defining and Topology The topology on is such that a sequence of its elements converges only if for each monomial Xα the corresponding coefficient stabilizes. If I is finite, then this is the J-adic topology, where J is the ideal of generated by all the indeterminates in XI. This does not hold if I is infinite. For example, if then the sequence with does not converge with respect to any J-adic topology on R, but clearly for each monomial the corresponding coefficient stabilizes. As remarked above, the topology on a repeated formal power series ring like is usually chosen in such a way that it becomes isomorphic as a topological ring to Operations All of the operations defined for series in one variable may be extended to the several variables case. A series is invertible if and only if its constant term is invertible in R. The composition f(g(X)) of two series f and g is defined if f is a series in a single indeterminate, and the constant term of g is zero. For a series f in several indeterminates a form of "composition" can similarly be defined, with as many separate series in the place of g as there are indeterminates. In the case of the formal derivative, there are now separate partial derivative operators, which differentiate with respect to each of the indeterminates. They all commute with each other. Universal property In the several variables case, the universal property characterizing becomes the following. If S is a commutative associative algebra over R, if I is an ideal of S such that the I-adic topology on S is complete, and if x1, ..., xr are elements of I, then there is a unique map with the following properties: Φ is an R-algebra homomorphism Φ is continuous Φ(Xi) = xi for i = 1, ..., r. Non-commuting variables The several variable case can be further generalised by taking non-commuting variables Xi for i ∈ I, where I is an index set and then a monomial Xα is any word in the XI; a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted . The set of all such formal power series is denoted R«XI», and it is given a ring structure by defining addition pointwise and multiplication by where · denotes concatenation of words. These formal power series over R form the Magnus ring over R. On a semiring Given an alphabet and a semiring . The formal power series over supported on the language is denoted by . It consists of all mappings , where is the free monoid generated by the non-empty set . The elements of can be written as formal sums where denotes the value of at the word . The elements are called the coefficients of . For the support of is the set A series where every coefficient is either or is called the characteristic series of its support. The subset of consisting of all series with a finite support is denoted by and called polynomials. For and , the sum is defined by The (Cauchy) product is defined by The Hadamard product is defined by And the products by a scalar and by and , respectively. With these operations and are semirings, where is the empty word in . 
These formal power series are used to model the behavior of weighted automata, in theoretical computer science, when the coefficients of the series are taken to be the weight of a path with label in the automata. Replacing the index set by an ordered abelian group Suppose is an ordered abelian group, meaning an abelian group with a total ordering respecting the group's addition, so that if and only if for all . Let I be a well-ordered subset of , meaning I contains no infinite descending chain. Consider the set consisting of for all such I, with in a commutative ring , where we assume that for any index set, if all of the are zero then the sum is zero. Then is the ring of formal power series on ; because of the condition that the indexing set be well-ordered the product is well-defined, and we of course assume that two elements which differ by zero are the same. Sometimes the notation is used to denote . Various properties of transfer to . If is a field, then so is . If is an ordered field, we can order by setting any element to have the same sign as its leading coefficient, defined as the least element of the index set I associated to a non-zero coefficient. Finally if is a divisible group and is a real closed field, then is a real closed field, and if is algebraically closed, then so is . This theory is due to Hans Hahn, who also showed that one obtains subfields when the number of (non-zero) terms is bounded by some fixed infinite cardinality. Examples and related topics Bell series are used to study the properties of multiplicative arithmetic functions Formal groups are used to define an abstract group law using formal power series Puiseux series are an extension of formal Laurent series, allowing fractional exponents Rational series See also Ring of restricted power series Notes References Nicolas Bourbaki: Algebra, IV, §4. Springer-Verlag 1988. Further reading W. Kuich. Semirings and formal power series: Their relevance to formal languages and automata theory. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 1, Chapter 9, pages 609–677. Springer, Berlin, 1997, Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. Abstract algebra Ring theory Enumerative combinatorics Mathematical series
Formal power series
[ "Mathematics" ]
6,233
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus", "Ring theory", "Enumerative combinatorics", "Combinatorics", "Fields of abstract algebra", "Abstract algebra", "Algebra" ]
60,020
https://en.wikipedia.org/wiki/Table%20of%20nuclides%20%28segmented%2C%20narrow%29
The isotope tables given below show all of the known isotopes of the chemical elements, arranged with increasing atomic number from left to right and increasing neutron number from top to bottom. Half lives are indicated by the color of each isotope's cell (see color chart in each section). Colored borders indicate half lives of the most stable nuclear isomer states. The data for these tables came from Brookhaven National Laboratory which has an interactive Table of Nuclides with data on ~3000 nuclides. Isotopes for elements 0-14 Isotopes for elements 15-29 Isotopes for elements 30-44 Isotopes for elements 45-59 Isotopes for elements 60-74 Isotopes for elements 75-89 Isotopes for elements 90-104 Isotopes for elements 105-118 External links Interactive Chart of Nuclides (Brookhaven National Laboratory) The Lund/LBNL Nuclear Data Search An isotope table with clickable information on every isotope and its decay routes is available at chemlab.pc.maricopa.edu An example of free Universal Nuclide Chart with decay information for over 3000 nuclides is available at Nucleonica.net. The LIVEChart of Nuclides - IAEA Links to other charts of nuclides, including printed posters and journal articles, is available at nds.iaea.org. Tables of nuclides
Table of nuclides (segmented, narrow)
[ "Chemistry" ]
424
[ "Tables of nuclides", "Isotopes" ]
60,022
https://en.wikipedia.org/wiki/Fractal%20compression
Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes" which are used to recreate the encoded image. Iterated function systems Fractal image representation may be described mathematically as an iterated function system (IFS). For binary images We begin with the representation of a binary image, where the image may be thought of as a subset of . An IFS is a set of contraction mappings ƒ1,...,ƒN, According to these mapping functions, the IFS describes a two-dimensional set S as the fixed point of the Hutchinson operator That is, H is an operator mapping sets to sets, and S is the unique set satisfying H(S) = S. The idea is to construct the IFS such that this set S is the input binary image. The set S can be recovered from the IFS by fixed point iteration: for any nonempty compact initial set A0, the iteration Ak+1 = H(Ak) converges to S. The set S is self-similar because H(S) = S implies that S is a union of mapped copies of itself: So we see the IFS is a fractal representation of S. Extension to grayscale IFS representation can be extended to a grayscale image by considering the image's graph as a subset of . For a grayscale image u(x,y), consider the set S = {(x,y,u(x,y))}. Then similar to the binary case, S is described by an IFS using a set of contraction mappings ƒ1,...,ƒN, but in , Encoding A challenging problem of ongoing research in fractal image representation is how to choose the ƒ1,...,ƒN such that its fixed point approximates the input image, and how to do this efficiently. A simple approach for doing so is the following partitioned iterated function system (PIFS): Partition the image domain into range blocks Ri of size s×s. For each Ri, search the image to find a block Di of size 2s×2s that is very similar to Ri. Select the mapping functions such that H(Di) = Ri for each i. In the second step, it is important to find a similar block so that the IFS accurately represents the input image, so a sufficient number of candidate blocks for Di need to be considered. On the other hand, a large search considering many blocks is computationally costly. This bottleneck of searching for similar blocks is why PIFS fractal encoding is much slower than for example DCT and wavelet based image representation. The initial square partitioning and brute-force search algorithm presented by Jacquin provides a starting point for further research and extensions in many possible directions—different ways of partitioning the image into range blocks of various sizes and shapes; fast techniques for quickly finding a close-enough matching domain block for each range block rather than brute-force searching, such as fast motion estimation algorithms; different ways of encoding the mapping from the domain block to the range block; etc. Other researchers attempt to find algorithms to automatically encode an arbitrary image as RIFS (recurrent iterated function systems) or global IFS, rather than PIFS; and algorithms for fractal video compression including motion compensation and three dimensional iterated function systems. Fractal image compression has many similarities to vector quantization image compression. Features With fractal compression, encoding is extremely computationally expensive because of the search used to find the self-similarities. 
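The PIFS steps above can be made concrete with a toy Python/NumPy sketch. This is an illustration only, not Jacquin's algorithm as published: the block size, the grid step of the search, the least-squares grey-level fit, the omission of block rotations and flips, and every name below are my own simplifying assumptions. The nested domain-block loops are the self-similarity search that makes encoding so expensive, while decoding just iterates the stored contractive mappings from an arbitrary starting image.

```python
import numpy as np

BLOCK = 4  # assumed range-block size s; domain blocks are 2s x 2s

def downsample(d):
    """Average 2x2 pixel groups so a 2s x 2s domain block matches an s x s range block."""
    return d.reshape(BLOCK, 2, BLOCK, 2).mean(axis=(1, 3))

def encode(img):
    """Brute-force PIFS encoding of a grayscale image whose sides are multiples of BLOCK."""
    h, w = img.shape
    transforms = []
    for ry in range(0, h, BLOCK):
        for rx in range(0, w, BLOCK):
            r = img[ry:ry + BLOCK, rx:rx + BLOCK].astype(float).ravel()
            best = None
            for dy in range(0, h - 2 * BLOCK + 1, BLOCK):      # this exhaustive search is
                for dx in range(0, w - 2 * BLOCK + 1, BLOCK):  # the expensive encoding step
                    d = downsample(img[dy:dy + 2 * BLOCK, dx:dx + 2 * BLOCK].astype(float)).ravel()
                    # least-squares grey-level map: r ~ contrast * d + brightness
                    A = np.vstack([d, np.ones_like(d)]).T
                    (contrast, brightness), *_ = np.linalg.lstsq(A, r, rcond=None)
                    err = np.sum((contrast * d + brightness - r) ** 2)
                    if best is None or err < best[0]:
                        best = (err, dy, dx, contrast, brightness)
            transforms.append((ry, rx) + best[1:])
    return transforms  # the "fractal code": one tuple per range block

def decode(transforms, shape, iterations=8):
    """Iterate the stored block mappings from a blank image; the fixed point
    approximates the original. (Real codecs also clamp |contrast| below 1 so
    that the iteration is guaranteed to contract.)"""
    img = np.zeros(shape)
    for _ in range(iterations):
        out = np.empty(shape)
        for ry, rx, dy, dx, contrast, brightness in transforms:
            d = downsample(img[dy:dy + 2 * BLOCK, dx:dx + 2 * BLOCK])
            out[ry:ry + BLOCK, rx:rx + BLOCK] = contrast * d + brightness
        img = out
    return img
```

On a small test array (say a 32×32 grayscale image), decode(encode(img), img.shape) returns a blocky approximation of the input; the two nested domain loops per range block illustrate why encoding dominates the cost while decoding is a handful of cheap passes.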
Decoding, however, is quite fast. While this asymmetry has so far made it impractical for real-time applications, when video is archived for distribution from disk storage or file downloads, fractal compression becomes more competitive. At common compression ratios, up to about 50:1, fractal compression provides similar results to DCT-based algorithms such as JPEG. At high compression ratios, fractal compression may offer superior quality. For satellite imagery, ratios of over 170:1 have been achieved with acceptable results. Fractal video compression ratios of 25:1–244:1 have been achieved in reasonable compression times (2.4 to 66 sec/frame). Compression efficiency increases with higher image complexity and color depth, compared to simple grayscale images. Resolution independence and fractal scaling An inherent feature of fractal compression is that images become resolution independent after being converted to fractal code. This is because the iterated function systems in the compressed file scale indefinitely. This indefinite scaling property of a fractal is known as "fractal scaling". Fractal interpolation The resolution independence of a fractal-encoded image can be used to increase the display resolution of an image. This process is also known as "fractal interpolation". In fractal interpolation, an image is encoded into fractal codes via fractal compression, and subsequently decompressed at a higher resolution. The result is an up-sampled image in which iterated function systems have been used as the interpolant. Fractal interpolation maintains geometric detail very well compared to traditional interpolation methods like bilinear interpolation and bicubic interpolation. Since the interpolation cannot reverse Shannon entropy, however, it ends up sharpening the image by adding random rather than meaningful detail. One cannot, for example, enlarge an image of a crowd where each person's face is one or two pixels and hope to identify them. History Michael Barnsley led the development of fractal compression from 1985 at the Georgia Institute of Technology (where both Barnsley and Sloan were professors in the mathematics department). The work was sponsored by DARPA and the Georgia Tech Research Corporation. The project resulted in several patents from 1987. Barnsley's graduate student Arnaud Jacquin implemented the first automatic algorithm in software in 1992. All methods are based on the fractal transform using iterated function systems. Michael Barnsley and Alan Sloan formed Iterated Systems Inc. in 1987, which was granted over 20 additional patents related to fractal compression. A major breakthrough for Iterated Systems Inc. was the automatic fractal transform process, which eliminated the need for human intervention during compression, as was the case in early experimentation with fractal compression technology. In 1992, Iterated Systems Inc. received a US$2.1 million government grant to develop a prototype digital image storage and decompression chip using fractal transform image compression technology. Fractal image compression has been used in a number of commercial applications: onOne Software, under license from Iterated Systems Inc., developed Genuine Fractals 5, a Photoshop plugin capable of saving files in compressed FIF (Fractal Image Format). To date, the most successful use of still fractal image compression is by Microsoft in its Encarta multimedia encyclopedia, also under license. Iterated Systems Inc. 
supplied a shareware encoder (Fractal Imager), a stand-alone decoder, a Netscape plug-in decoder and a development package for use under Windows. The redistribution of the "decompressor DLL" provided by the ColorBox III SDK was governed by restrictive per-disk or year-by-year licensing regimes for proprietary software vendors and by a discretionary scheme that entailed the promotion of the Iterated Systems products for certain classes of other users. ClearVideo, also known as RealVideo (Fractal), and SoftVideo were early fractal video compression products. ClearFusion was Iterated's freely distributed streaming video plugin for web browsers. In 1994, SoftVideo was licensed to Spectrum Holobyte for use in its CD-ROM games, including Falcon Gold and Star Trek: The Next Generation A Final Unity. In 1996, Iterated Systems Inc. announced an alliance with the Mitsubishi Corporation to market ClearVideo to their Japanese customers. The original ClearVideo 1.2 decoder driver is still supported by Microsoft in Windows Media Player, although the encoder is no longer supported. Two firms, Total Multimedia Inc. and Dimension, both claim to own or have the exclusive licence to Iterated's video technology, but neither has yet released a working product. The technology basis appears to be Dimension's U.S. patents 8639053 and 8351509, which have been considerably analyzed. In summary, it is a simple quadtree block-copying system with neither the bandwidth efficiency nor PSNR quality of traditional DCT-based codecs. In January 2016, TMMI announced that it was abandoning fractal-based technology altogether. Research papers between 1997 and 2007 discussed possible solutions to improve fractal algorithms and encoding hardware. Implementations A library called Fiasco was created by Ullrich Hafner. In 2001, Fiasco was covered in the Linux Journal. According to the 2000-04 Fiasco manual, Fiasco can be used for video compression. The Netpbm library includes the Fiasco library. Femtosoft developed an implementation of fractal image compression in Object Pascal and Java. See also Iterated function system Image compression Wavelet Notes External links Pulcini and Verrando's Compressor Keith Howell's 1993 M.Sc. dissertation Fractal Image Compression for Spaceborne Transputers My Main Squeeze: Fractal Compression, Nov 1993, Wired. Fractal Basics description at FileFormat.Info Superfractals website devoted to fractals by the inventor of fractal compression Image compression Lossy compression algorithms Fractals Data compression
Fractal compression
[ "Mathematics" ]
2,038
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations" ]
60,023
https://en.wikipedia.org/wiki/Airbag
An airbag is a vehicle occupant-restraint system using a bag designed to inflate in milliseconds during a collision and then deflate afterwards. It consists of an airbag cushion, a flexible fabric bag, an inflation module, and an impact sensor. The purpose of the airbag is to provide a vehicle occupant with soft cushioning and restraint during a collision. It can reduce injuries caused by impacts between the flailing occupant and the vehicle's interior. The airbag provides an energy-absorbing surface between the vehicle's occupants and a steering wheel, instrument panel, body pillar, headliner, and windshield. Modern vehicles may contain up to ten airbag modules in various configurations, including driver, passenger, side-curtain, seat-mounted, door-mounted, B and C-pillar mounted side-impact, knee bolster, inflatable seat belt, and pedestrian airbag modules. During a crash, the vehicle's crash sensors provide crucial information to the airbag electronic controller unit (ECU), including collision type, angle, and severity of impact. Using this information, the airbag ECU's crash algorithm determines if the crash event meets the criteria for deployment and triggers various firing circuits to deploy one or more airbag modules within the vehicle. Airbag module deployments are activated through a pyrotechnic process designed to be used once as a supplemental restraint system for the vehicle's seat belt systems. Newer side-impact airbag modules consist of compressed-air cylinders that are triggered in the event of a side-on vehicle impact. The first commercial designs were introduced in passenger automobiles during the 1970s, with limited success, and they caused some fatalities. Broad commercial adoption of airbags occurred in many markets during the late 1980s and early 1990s. Many modern vehicles now include six or more units. Active vs. passive safety Airbags are considered "passive" restraints and act as a supplement to "active" restraints. Because no action by a vehicle occupant is required to activate or use the airbag, it is considered a "passive" device. This is in contrast to seat belts, which are considered "active" devices because the vehicle occupant must act to enable them. This terminology is not related to active and passive safety, which are, respectively, systems designed to prevent collisions in the first place, and systems designed to minimize the effects of collisions once they occur. In this use, a car's anti-lock braking system qualifies as an active-safety device, while both its seat belts and airbags qualify as passive-safety devices. Terminological confusion can arise from the fact that passive devices and systems (those requiring no input or action by the vehicle occupant) can operate independently in an active manner; an airbag is one such device. Vehicle safety professionals are generally careful in their use of language to avoid this sort of confusion. However, advertising principles sometimes prevent such semantic caution in the consumer marketing of safety features. Further confusing the terminology, the aviation safety community uses the terms "active" and "passive" in the opposite sense from the automotive industry. History Origins The airbag "for the covering of aeroplane and other vehicle parts" traces its origins to a United States patent, submitted in 1919 by two dentists from Birmingham, Arthur Parrott and Harold Round. The patent was approved in 1920. Air-filled bladders were in use as early as 1951. 
The airbag specifically for automobile use is credited independently to the American John W. Hetrick, who filed for an airbag patent on 5 August 1952; it was granted as #2,649,311 by the United States Patent Office on 18 August 1953. German engineer Walter Linderer had filed German patent #896,312 on 6 October 1951, and it was issued on 12 November 1953, approximately three months after Hetrick's. The airbags proposed by Hetrick and Linderer were based on compressed air released by a spring, bumper contact, or by the driver. Later research during the 1960s showed that compressed air could not inflate the mechanical airbags fast enough to ensure maximum safety, leading to the current chemical and electrical airbags. In patent applications, manufacturers sometimes use the term "inflatable occupant restraint systems".
Hetrick was an industrial engineer and a member of the United States Navy. His airbag design came about when he combined his experience working with navy torpedoes with his desire to protect his family on the road. Despite working with the major automobile manufacturers of his time, Hetrick was unable to attract investment. Although airbags are now required in every automobile sold in the United States, Hetrick's 1952 patent filing serves as an example of a "valuable" invention with little economic value to its inventor: its first commercial use did not come until after the patent expired, in 1971, when the airbag was installed in a few experimental Ford cars.
In 1964, a Japanese automobile engineer, Yasuzaburou Kobori (小堀保三郎), started developing an airbag "safety net" system. His design harnessed an explosive to inflate an airbag, for which he was later awarded patents in 14 countries. He died in 1975, before seeing the widespread adoption of airbag systems.
In 1967, a breakthrough in developing airbag crash sensors came when Allen K. Breed invented a ball-in-tube mechanism for crash detection. Under his system, an electromechanical sensor with a steel ball attached to a tube by a magnet would inflate an airbag in under 30 milliseconds. For the first time, a small charge of sodium azide, rather than compressed air, was used for inflation. Breed Corporation then marketed this innovation to Chrysler. A similar "Auto-Ceptor" crash restraint, developed by the Eaton, Yale & Towne company for Ford, was soon also offered as an automatic safety system in the United States, while the Italian Eaton-Livia company offered a variant with localized air cushions.
In the early 1970s, General Motors began offering cars equipped with airbags, initially in government fleet-purchased 1973 Chevrolet Impala sedans. These cars came with a 1974-style Oldsmobile instrument panel and a unique steering wheel that contained the driver-side airbag. Two of these cars were crash tested after 20 years, and the airbags deployed perfectly; an early example of these airbag-equipped cars survived as of 2009. In 1973, GM's Oldsmobile Toronado became the first domestic U.S. vehicle to include a passenger airbag. General Motors marketed its first airbag modules under the "Air Cushion Restraint System" name, or ACRS. The automaker discontinued the option for its 1977 model year, citing a lack of consumer interest. Ford and GM then spent years lobbying against airbag requirements, claiming that the devices were unfeasible and inappropriate. Chrysler made driver-side airbags standard on 1988 and 1989 models, but airbags did not become widespread in American cars until the early 1990s.
As a substitute for seat belts
Airbags for passenger cars were introduced in the United States in the 1970s, at a time when seat-belt usage rates in the country were quite low compared with today. Ford built experimental cars with airbags in 1971. Allstate operated a fleet of 200 Mercury Montereys that demonstrated the reliability of airbags and their operation in crash testing, which the insurance company also promoted in popular magazine advertisements. General Motors followed in 1973 using full-sized Chevrolet vehicles. The early fleet of experimental GM vehicles equipped with airbags experienced seven fatalities, one of which was later suspected to have been caused by the airbag.
In 1974, GM made its ACRS system (which consisted of a padded lower dashboard and a passenger-side air bag) available as a regular production option (RPO code AR3) in full-sized Cadillac, Buick and Oldsmobile models. The GM cars from the 1970s equipped with ACRS had a driver-side airbag and a driver-side knee restraint. The passenger-side airbag protected both front passengers and, unlike most modern systems, integrated a knee and torso cushion while also having a dual-stage deployment dictated by the force of the impact. The cars equipped with ACRS had lap belts for all seating positions but lacked shoulder belts. Shoulder belts were already mandatory in the United States on closed cars without airbags for the driver and outer front passenger, but GM chose to market its airbags as a substitute for shoulder belts. Prices for this option on Cadillac models were US$225 in 1974, $300 in 1975, and $340 in 1976.
The early development of airbags coincided with international interest in automobile safety legislation. Some safety experts advocated a performance-based occupant-protection standard rather than one mandating a particular technical solution (which could rapidly become outdated and prove not to be a cost-effective approach). However, less emphasis was placed on other designs as countries successfully mandated seat-belt requirements.
As a supplemental restraint system
Frontal airbag
The auto industry and the research and regulatory communities have moved away from their initial view of the airbag as a seat-belt replacement, and the bags are now nominally designated as supplemental restraint systems (SRS) or supplemental inflatable restraints. In 1981, Mercedes-Benz introduced the airbag in West Germany as an option on its flagship saloon model, the S-Class (W126). In the Mercedes system, the sensors automatically tensioned the seat belts to reduce occupants' motion on impact and then deployed the airbag. This integrated the seat belts and the airbag into a single restraint system, rather than the airbag being considered an alternative to the seat belt. In 1987, the Porsche 944 Turbo became the first car to have driver and passenger airbags as standard equipment, while the Porsche 944 and 944S offered them as an option. The same year also saw the first airbag in a Japanese car, the Honda Legend. In 1988, Chrysler became the first United States automaker to fit a driver-side airbag as standard equipment, offered in six different models; the following year, Chrysler became the first US auto manufacturer to offer driver-side airbags in all its new passenger models. Chrysler also began featuring airbags in advertisements showing how the devices had saved lives, which helped the public understand their value; safety became a selling advantage in the late 1980s.
All versions of the Chrysler minivans came with airbags starting for the 1991 model year. In 1993, The Lincoln Motor Company boasted that all vehicles in their model line were equipped with dual airbags, one for the driver's side and another for the passenger's side. The 1993 Jeep Grand Cherokee became the first SUV to offer a driver-side airbag when it was launched in 1992. Driver and passenger airbags became standard equipment in all Dodge Intrepid, Eagle Vision, and Chrysler Concorde sedans ahead of any safety regulations. Early 1993 saw the 4-millionth airbag-equipped Chrysler vehicle roll off the assembly line. In October 1993, the Dodge Ram became the first pickup truck with a standard driver-side airbag. The first known collision between two airbag-equipped automobiles took place on 12 March 1990 in Virginia, USA. A 1989 Chrysler LeBaron crossed the center line and hit another 1989 Chrysler LeBaron in a head-on collision, causing both driver airbags to deploy. The drivers suffered only minor injuries despite extensive damage to the vehicles. The United States Intermodal Surface Transportation Efficiency Act of 1991 required passenger cars and light trucks built after 1 September 1998 to have airbags for the driver and the front passenger. In the United States, NHTSA estimated that airbags had saved over 4,600 lives by 1 September 1999; however, the crash deployment experience of the early 1990s installations indicated that some fatalities and serious injuries were in fact caused by airbags. In 1998, NHTSA initiated new rules for advanced airbags that gave automakers more flexibility in devising effective technological solutions. The revised rules also required improved protection for occupants of different sizes regardless of whether they use seat belts, while minimizing the risk to infants, children, and other occupants caused by airbags. In Europe, airbags were almost unheard of until the early 1990s. By 1991, four manufacturers – BMW, Honda, Mercedes-Benz, and Volvo – offered the airbag on some of their higher-end models, but shortly afterward, airbags became a common feature on more mainstream cars, with Ford and Vauxhall/Opel among the manufacturers to introduce the airbag to its model ranges in 1992. Citroën, Fiat, Nissan, Hyundai, Peugeot, Renault, and Volkswagen followed shortly afterwards. By 1999, finding a new mass-market car without an airbag at least as optional equipment was difficult, and some late 1990s products, such as the Volkswagen Golf Mk4, also featured side airbags. The Peugeot 306 is one example of the European automotive mass-market evolution: starting in early 1993, most of these models did not even offer a driver's airbag as an option, but by 1999, even side airbags were available on several variants. Audi was late to offer airbag systems on a broader scale, since even in the 1994 model year, its popular models did not offer airbags. Instead, the German automaker until then relied solely on its proprietary cable-based procon-ten restraint system. Variable force-deployment front airbags were developed to help minimize injury from the airbag itself. The emergence of the airbag has contributed to a sharp decline in the number of deaths and serious injuries on the roads of Europe since 1990, and by 2010, the number of cars on European roads lacking an airbag represented a very small percentage of cars, mostly the remaining cars dating from the mid-1990s or earlier. 
Many new cars in Latin America, including the Kia Rio, Kia Picanto, Hyundai Grand i10, Mazda 2, Chevrolet Spark and the Chevrolet Onix, are often sold without airbags, as neither airbags nor automatic braking systems in new cars are compulsory in many Latin American countries. Some require the installation of a minimum of only two airbags in new cars which many in this market have. Shape of airbags The Citroën C4 provided the first "shaped" driver airbag, made possible by this car's unusual fixed-hub steering wheel. In 2019, Honda announced it would introduce a new front passenger airbag technology. Developed by Autoliv and Honda R&D in Ohio, United States, this new airbag design features three inflatable chambers connected across the front by a "noninflatable sail panel." The two outer chambers are larger than the middle chamber. When the airbag deploys, the sail panel cushions the occupant's head from the impact of hitting the airbag, and the three chambers hold the occupant's head in place, like a catcher's mitt. The goal of the tri-chamber airbag is to help "arrest high-speed movement" of the head, thereby reducing the likelihood of concussion injuries in a collision. The first vehicle to come with the tri-chamber airbag installed from the factory was in 2020 (for the 2021 model year) for the Acura TLX. Honda hopes that the new technology will soon make its way to all vehicles. Rear airbag Mercedes began offering rear passengers protection in frontal collisions in September 2020 (for the 2021 model year) for the Mercedes-Benz S-Class (W223). The W223 S-Class is the first car equipped with rear seat airbags that use gas to inflate supporting structures that unfold and extend a bag that fills with ambient air, instead of conventional fully gas-inflated airbags that are widely used in automotive airbag systems. Side airbag Essentially, two types of side airbags are commonly used today - the side-torso airbag and the side-curtain airbag. More recently, center airbags are becoming more common in the European market. Most vehicles equipped with side-curtain airbags also include side-torso airbags. However, some, such as the Chevrolet Cobalt, 2007–09 model Chevrolet Silverado/GMC Sierra, and 2009–12 Dodge Ram do not feature the side-torso airbag. From around 2000, side-impact airbags became commonplace on even low- to mid-range vehicles, such as the smaller-engined versions of the Ford Fiesta and Peugeot 206, and curtain airbags were also becoming regular features on mass-market cars. The Toyota Avensis, launched in 2003, was the first mass-market car to be sold in Europe with nine airbags. Side torso airbag Side-impact airbags or side-torso airbags are a category of airbags usually located in the seat or door panel, and inflate between the seat occupant and the door. These airbags are designed to reduce the risk of injury to the pelvic and lower abdomen regions. Most vehicles are now being equipped with different types of designs, to help reduce injury and ejection from the vehicle in rollover crashes. More recent side-airbag designs include a two-chamber system; a firmer lower chamber for the pelvic region and softer upper chamber for the ribcage. Swedish company Autoliv AB was granted a patent on side-impact airbags, and they were first offered as an option in 1994 on the 1995 Volvo 850, and as standard equipment on all Volvo cars made after 1995. In 1997, Saab introduced the first combined head and torso airbags with the launch of the Saab 9-5. 
Some cars, such as the 2010 Volkswagen Polo Mk.5, have combined head- and torso-side airbags. These are fitted in the backrest of the front seats and protect the head and the torso.
Side tubular or curtain airbag
In 1997, the BMW 7 Series and 5 Series were fitted with tubular-shaped head side airbags (an inflatable tubular structure), the "Head Protection System" (HPS), as standard equipment. This airbag was designed to offer head protection in side-impact collisions and also maintained inflation for up to seven seconds for rollover protection. However, this tubular-shaped design was quickly replaced by the inflatable 'curtain' airbag. In May 1998, Toyota began offering a side-curtain airbag deploying from the roof on the Progrés. In 1998, the Volvo S80 was given roof-mounted curtain airbags to protect both front and rear passengers. Curtain airbags were then made standard equipment on all new Volvo cars from 2000, except for the first-generation C70, which received an enlarged side-torso airbag that also protects the head of front-seat occupants. The second-generation C70 convertible received the world's first door-mounted side-curtain airbags, which deployed upwards. Curtain airbags have been said to reduce brain injury or fatalities by up to 45% in a side impact with an SUV. These airbags come in various forms (e.g., tubular, curtain, door-mounted) depending on the needs of the application. Many recent SUVs and MPVs have a long inflatable curtain airbag that protects all rows of seats. In many vehicles, the curtain airbags are programmed to deploy during some or all frontal impacts to manage passenger kinetics (e.g. the head hitting the B-pillar on the rebound), especially in offset crashes such as the IIHS's small overlap crash test.
Roll-sensing curtain airbag (RSCA)
Roll-sensing curtain airbags are designed to stay inflated for a longer duration, cover a larger proportion of the window, and deploy in a rollover crash. They offer protection to occupants' heads and help to prevent ejection. SUVs and pickups are more likely to be equipped with RSCAs because of their higher probability of rolling over, and often a switch can disable the feature when the driver wants to take the vehicle off-road.
Center airbag
In 2009, Toyota developed the first production rear-seat center airbag, designed to reduce the severity of secondary injuries to rear passengers in a side collision. This system deploys from the rear center seat and first appeared on the Crown Majesta. In late 2012, General Motors, with supplier Takata, introduced a front center airbag; it deploys from the driver's seat. Hyundai Motor Group announced its development of a center-side airbag, installed inside the driver's seat, on September 18, 2019. Volkswagen vehicles equipped with center airbags in 2022 include the ID.3 and the Golf; the Polestar 2 also includes a center airbag. With Euro NCAP updating its testing guidelines in 2020, European and Australian market vehicles increasingly use front-center airbags, rear torso airbags, and rear seat-belt pre-tensioners.
Knee airbag
The second driver-side knee airbag, a separate airbag located beneath the steering wheel, was used in the Kia Sportage SUV and has been standard equipment since then. The Toyota Caldina introduced the first driver-side SRS knee airbag on the Japanese market in 2002. The Toyota Avensis became the first vehicle sold in Europe equipped with a driver's knee airbag.
Euro NCAP reported on the 2003 Avensis: "There has been much effort to protect the driver's knees and legs and a knee airbag worked well." Since then, certain models have also included front-passenger knee airbags, which deploy near or over the glove compartment in a crash. Knee airbags are designed to reduce leg injury and have become increasingly common since 2000.
Rear curtain airbag
In 2008, the new Toyota iQ microcar featured the first production rear-curtain shield airbag, to protect the rear occupants' heads in the event of a rear-end impact.
Seat cushion airbag
Another feature of the Toyota iQ was a seat-cushion airbag in the passenger seat to prevent the pelvis from diving below the lap belt during a frontal impact, or submarining. Later Toyota models such as the Yaris added the feature to the driver's seat as well.
Seat-belt airbag
The seat-belt airbag is designed to better distribute the forces experienced by a buckled person in a crash by using an increased seat-belt area. This is done to reduce possible injuries to the rib cage or chest of the belt wearer.
2010: Ford Explorer and 2013 Ford Flex: optional rear seat-belt airbags; standard on the 2013 Lincoln MKT
2010: Lexus LFA had seat-belt airbags for driver and passenger
2013: Mercedes-Benz S-Class (W222) has rear seat-belt airbags
2014: Ford Mondeo Mk IV has optional rear seat-belt airbags for the two outer seats
Cessna Aircraft also introduced seat-belt airbags; as of 2003, they are standard on the 172, 182, and 206.
Pedestrian airbag
Airbags mounted to the exterior of vehicles, so-called "pedestrian airbags", are designed to reduce injuries in the event of a vehicle-to-pedestrian collision. When a collision is detected, the airbag deploys and covers hard areas, such as the A-pillars and bonnet edges, before they can be struck by the pedestrian. When introduced in 2012, the Volvo V40 included the world's first pedestrian airbag as standard. As a result, the V40 ranked highest (88%) in Euro NCAP's pedestrian tests.
Manufacturers
Suppliers of SRS airbags include Autoliv, Daicel, TRW, and JSS (which owns Breed, Key Safety Systems, and Takata). The majority of airbag impact sensors are manufactured by the Lanka Harness Company.
Operation
The airbags in the vehicle are controlled by a central airbag control unit (ACU), a specific type of ECU. The ACU monitors several related sensors within the vehicle, including accelerometers, impact sensors, side (door) pressure sensors, wheel speed sensors, gyroscopes, brake pressure sensors, and seat occupancy sensors. ACUs often log this and other sensor data in a circular buffer and record it to onboard non-volatile memory, to provide a snapshot of the crash event for investigators. As such, an ACU frequently functions as the vehicle's event data recorder (EDR); not all EDRs are ACUs, and not all ACUs include EDR features. An ACU typically includes capacitors within its circuitry, so that the module remains powered and able to deploy the airbags if the vehicle's battery connection to the ACU is severed during a crash. The bag itself and its inflation mechanism are concealed within the steering wheel boss (for the driver) or the dashboard (for the front passenger), behind plastic flaps or doors that are designed to tear open under the force of the bag inflating. Once the requisite threshold has been reached or exceeded, the airbag control unit triggers the ignition of a gas-generator propellant to rapidly inflate a fabric bag.
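To make the circular-buffer logging described above more concrete, the following Python sketch shows one way an ACU-style recorder could keep the most recent sensor samples and freeze them as a snapshot when a deployment command is issued. It is a minimal illustration only; the class and field names are invented here and do not correspond to any real ACU firmware or EDR format.

from collections import deque
from dataclasses import dataclass

@dataclass
class SensorSample:
    t_ms: float         # timestamp, milliseconds
    decel_g: float      # longitudinal deceleration, g
    speed_kmh: float    # wheel-speed-derived vehicle speed
    belt_buckled: bool  # driver seat-belt switch state

class EventRecorder:
    """Illustrative circular buffer mimicking EDR-style pre-crash logging."""

    def __init__(self, capacity: int = 250):
        # deque with maxlen silently discards the oldest sample once full,
        # which is the circular-buffer behaviour described in the text
        self.buffer = deque(maxlen=capacity)
        self.snapshot = None

    def log(self, sample: SensorSample) -> None:
        self.buffer.append(sample)

    def freeze(self) -> list:
        # Called when the crash algorithm commands deployment: copy the
        # buffered pre-crash data so it can be written to non-volatile memory.
        self.snapshot = list(self.buffer)
        return self.snapshot

# Example: log a stream of samples, then freeze on deployment
recorder = EventRecorder()
for i in range(300):
    recorder.log(SensorSample(t_ms=i, decel_g=0.1, speed_kmh=50.0, belt_buckled=True))
print(len(recorder.freeze()))  # 250 — only the most recent samples are kept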
As the vehicle occupant collides with and squeezes the bag, the gas escapes in a controlled manner through small vent holes. The airbag's volume and the size of the vents in the bag are tailored to each vehicle type, to spread out the deceleration of (and thus force experienced by) the occupant over time and over the occupant's body, compared to a seat belt alone. The signals from the various sensors are fed into the airbag control unit, which determines from them the angle of impact, the severity, or the force of the crash, along with other variables. Depending on the result of these calculations, the ACU may also deploy various additional restraint devices, such as seat belt pre-tensioners, and/or airbags (including frontal bags for driver and front passenger, along with seat-mounted side bags, and "curtain" airbags which cover the side glass). Each restraint device is typically activated with one or more pyrotechnic devices, commonly called an initiator or electric match. The electric match, which consists of an electrical conductor wrapped in a combustible material, activates with a current pulse between 1 and 3 amperes in less than 2 milliseconds. When the conductor becomes hot enough, it ignites the combustible material, which initiates the gas generator. In a seat belt pre-tensioner, this hot gas is used to drive a piston that pulls the slack out of the seat belt. In an airbag, the initiator is used to ignite solid propellant inside the airbag inflator. The burning propellant generates inert gas which rapidly inflates the airbag in approximately 20 to 30 milliseconds. An airbag must inflate quickly to be fully inflated by the time the forward-traveling occupant reaches its outer surface. Typically, the decision to deploy an airbag in a frontal crash is made within 15 to 30 milliseconds after the onset of the crash, and both the driver and passenger airbags are fully inflated within approximately 60–80 milliseconds after the first moment of vehicle contact. If an airbag deploys too late or too slowly, the risk of occupant injury from contact with the inflating airbag may increase. Since more distance typically exists between the passenger and the instrument panel, the passenger airbag is larger and requires more gas to fill it. Older airbag systems contained a mixture of sodium azide (NaN3), KNO3, and SiO2. A typical driver-side airbag contains approximately 50–80 g of NaN3, with the larger passenger-side airbag containing about 250 g. Within about 40 milliseconds of impact, all these components react in three separate reactions that produce nitrogen gas. The reactions, in order, are as follows. 2 NaN3 → 2 Na + 3 N2 (g) 10 Na + 2 KNO3 → K2O + 5 Na2O + N2 (g) K2O + Na2O + 2 SiO2 → K2SiO3 + Na2SiO3 The first two reactions create 4 molar equivalents of nitrogen gas, and the third converts the remaining reactants to relatively inert potassium silicate and sodium silicate. The reason that KNO3 is used rather than something like NaNO3 is because it is less hygroscopic. The materials used in this reaction must not be hygroscopic because absorbed moisture can de-sensitize the system and cause the reaction to fail. The particle size of the initial reactants is important to reliable operation. The NaN3 and KNO3 must be between 10 and 20 µm, while the SiO2 must be between 5 and 10 µm. There are ongoing efforts to find alternative compounds so that airbags have less toxic reactants. 
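Before turning to the alternative propellants discussed next, the azide stoichiometry above can be checked with a quick back-of-the-envelope calculation. The sketch below estimates how much nitrogen a 50 g driver-side charge of NaN3 yields from the first reaction alone, assuming ideal-gas behaviour at ambient conditions; the figures are illustrative, not a statement about any particular inflator design.

# First decomposition step only: 2 NaN3 -> 2 Na + 3 N2
M_NAN3 = 65.01   # g/mol, molar mass of sodium azide
R = 0.08206      # L·atm/(mol·K), ideal gas constant
T = 298.15       # K, assumed ambient temperature (a simplification)
P = 1.0          # atm, assumed ambient pressure

def nitrogen_volume_litres(mass_nan3_g: float) -> float:
    mol_nan3 = mass_nan3_g / M_NAN3
    mol_n2 = mol_nan3 * 3 / 2          # 3 mol N2 per 2 mol NaN3
    return mol_n2 * R * T / P          # ideal-gas volume at the assumed conditions

# Driver-side charge of about 50 g, as quoted in the text
print(f"{nitrogen_volume_litres(50.0):.0f} L of N2")  # roughly 28 L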
The reaction of the strontium complex nitrate of carbohydrazide (SrCDH), Sr(NH2NHCONHNH2)·(NO3)2, with various oxidizing agents results in the evolution of N2 and CO2 gases. Using KBrO3 as the oxidizing agent resulted in the most vigorous reaction as well as the lowest initial reaction temperature. The N2 and CO2 evolved made up 99% of all gases produced. Nearly all of the starting materials do not decompose until reaching temperatures of 500 °C or higher, so this could be a viable option as an airbag gas generator. In a patent describing another plausible alternative to NaN3-driven airbags, the gas-generating materials involved the use of guanidine nitrate, 5-aminotetrazole, bitetrazole dihydrate, nitroimidazole, and basic copper nitrate. It was found that these non-azide reagents allowed for a less toxic, lower-combustion-temperature reaction and a more easily disposable airbag inflation system.
Front airbags normally do not protect the occupants during side, rear, or rollover collisions. Since airbags deploy only once and deflate quickly after the initial impact, they will not be beneficial during a subsequent collision. Safety belts help reduce the risk of injury in many types of crashes: they help to properly position occupants to maximize the airbag's benefits, and they help restrain occupants during the initial and any following collisions. In vehicles equipped with a rollover sensing system, accelerometers and gyroscopes are used to sense the onset of a rollover event. If a rollover event is determined to be imminent, side-curtain airbags are deployed to help protect the occupant from contact with the side of the vehicle interior, and also to help prevent occupant ejection as the vehicle rolls over.
Triggering conditions
Airbags are designed to deploy in frontal and near-frontal collisions more severe than a threshold defined by the regulations governing vehicle construction in whatever particular market the vehicle is intended for: United States regulations require deployment in crashes at least equivalent in deceleration to a barrier collision, or, similarly, striking a parked car of similar size across the full front of each vehicle at about twice the speed. International regulations are performance-based rather than technology-based, so the airbag deployment threshold is a function of overall vehicle design. Unlike crash tests into barriers, real-world crashes typically occur at angles other than directly into the front of the vehicle, and the crash forces usually are not evenly distributed across the front of the vehicle. Consequently, the relative speed between a striking and struck vehicle required to deploy the airbag in a real-world crash can be much higher than in an equivalent barrier crash. Because airbag sensors measure deceleration, vehicle speed is not a good indicator of whether an airbag should be deployed; airbags can deploy when the vehicle's undercarriage strikes a low object protruding above the roadway, because of the resulting deceleration. The airbag sensor is a MEMS accelerometer, a small integrated circuit with integrated micromechanical elements. The microscopic mechanical element moves in response to rapid deceleration, and this motion causes a change in capacitance, which is detected by the electronics on the chip, which then send a signal to fire the airbag. The most common MEMS accelerometer in use is the ADXL-50 by Analog Devices, but there are other MEMS manufacturers as well. Initial attempts using mercury switches did not work well.
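At its core, the firing decision driven by such an accelerometer is a comparison of measured deceleration against a calibrated threshold. The toy function below illustrates the idea with invented numbers; production crash algorithms are proprietary, use many more inputs, and are far more sophisticated.

def should_deploy(decel_samples_g, threshold_g=20.0, min_samples=3):
    """Illustrative only: fire when deceleration stays above a calibrated
    threshold for several consecutive samples, so that a sustained crash
    pulse is distinguished from a single brief spike."""
    consecutive = 0
    for g in decel_samples_g:
        consecutive = consecutive + 1 if g >= threshold_g else 0
        if consecutive >= min_samples:
            return True
    return False

# Invented sample data: a sustained frontal-impact pulse vs. a momentary spike
print(should_deploy([2, 25, 27, 30, 26]))  # True
print(should_deploy([2, 35, 3, 2, 1]))     # False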
Before MEMS, the primary system used to deploy airbags was called a "rolamite". A rolamite is a mechanical device, consisting of a roller suspended within a tensioned band. As a result of the particular geometry and material properties used, the roller is free to translate with little friction or hysteresis. This device was developed at Sandia National Laboratories. Rolamite and similar macro-mechanical devices were used in airbags until the mid-1990s after which they were universally replaced with MEMS. Nearly all airbags are designed to automatically deploy in the event of a vehicle fire when temperatures reach . This safety feature, often termed auto-ignition, helps to ensure that such temperatures do not cause an explosion of the entire airbag module. Today, airbag triggering algorithms are much more complex, being able to adapt the deployment speed to the crash conditions, and prevent unnecessary deployments. The algorithms are considered valuable intellectual property. Experimental algorithms may take into account such factors as the weight of the occupant, the seat location, and seat belt use, as well as even attempt to determine if a baby seat is present. Inflation When the frontal airbags are to deploy, a signal is sent to the inflator unit within the airbag control unit. An igniter starts a rapid chemical reaction generating primarily nitrogen gas (N2) to fill the airbag making it deploy through the module cover. Some airbag technologies use compressed nitrogen or argon gas with a pyrotechnic operated valve ("hybrid gas generator"), while other technologies use various energetic propellants. Although propellants containing the highly toxic sodium azide (NaN3) were common in early inflator designs, little to no toxic sodium azide has been found on used airbags. The azide-containing pyrotechnic gas generators contain a substantial amount of the propellant. The driver-side airbag would contain a canister containing about 50 grams of sodium azide. The passenger side container holds about 200 grams of sodium azide. The alternative propellants may incorporate, for example, a combination of nitroguanidine, phase-stabilized ammonium nitrate (NH4NO3) or another nonmetallic oxidizer, and a nitrogen-rich fuel different from azide (e.g. tetrazoles, triazoles, and their salts). The burn rate modifiers in the mixture may be an alkaline metal nitrate (NO3-) or nitrite (NO2-), dicyanamide or its salts, sodium borohydride (NaBH4), etc. The coolants and slag formers may be e.g. clay, silica, alumina, glass, etc. Other alternatives are e.g. nitrocellulose based propellants (which have high gas yield but bad storage stability, and their oxygen balance requires secondary oxidation of the reaction products to avoid buildup of carbon monoxide), or high-oxygen nitrogen-free organic compounds with inorganic oxidizers (e.g., di or tricarboxylic acids with chlorates (ClO3-) or perchlorates (ClO4-) and eventually metallic oxides; the nitrogen-free formulation avoids formation of toxic nitrogen oxides). From the onset of the crash, the entire deployment and inflation process is about 0.04 seconds. Because vehicles change speed so quickly in a crash, airbags must inflate rapidly to reduce the risk of the occupant hitting the vehicle's interior. Variable-force deployment Advanced airbag technologies are being developed to tailor airbag deployment to the severity of the crash, the size, and posture of the vehicle occupant, belt usage, and how close that person is to the actual airbag. 
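The kind of tailoring just described can be pictured as a decision table that maps crash severity and occupant information to an inflator output level, including suppression. The sketch below is a deliberately simplified, hypothetical illustration; the categories and thresholds are invented and do not describe any production algorithm.

def select_output_level(severity, occupant_present, belt_buckled, close_to_airbag):
    """Return 'suppress', 'low', or 'full' for a multi-stage inflator.
    'severity' is an abstract 0-1 score assumed to come from the crash algorithm."""
    if not occupant_present:
        return "suppress"                      # empty seat: do not fire
    if close_to_airbag and severity < 0.6:
        return "low"                           # out-of-position occupant, moderate crash
    if severity < 0.3:
        return "suppress" if belt_buckled else "low"
    if severity < 0.7:
        return "low"
    return "full"

print(select_output_level(0.8, True, True, False))  # 'full'
print(select_output_level(0.2, True, True, False))  # 'suppress'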
Many of these systems use multi-stage inflators that deploy less forcefully in stages in moderate crashes than in very severe crashes. Occupant sensing devices let the airbag control unit know if someone is occupying a seat adjacent to an airbag, the mass/weight of the person, whether a seat belt or child restraint is being used, and whether the person is forward in the seat and close to the airbag. Based on this information and crash severity information, the airbag is deployed at either a high force level, a less forceful level, or not at all. Adaptive airbag systems may utilize multi-stage airbags to adjust the pressure within the airbag. The greater the pressure within the airbag, the more force the airbag will exert on the occupants as they come in contact with it. These adjustments allow the system to deploy the airbag with a moderate force for most collisions; reserving the maximum force airbag only for the severest of collisions. Additional sensors to determine the location, weight or relative size of the occupants may also be used. Information regarding the occupants and the severity of the crash are used by the airbag control unit, to determine whether airbags should be suppressed or deployed, and if so, at various output levels. Post-deployment A chemical reaction produces a burst of nitrogen to inflate the bag. Once an airbag deploys, deflation begins immediately as the gas escapes through vent(s) in the fabric (or, as it is sometimes called, the cushion) and cools. Deployment is frequently accompanied by the release of dust-like particles, and gases in the vehicle's interior (called effluent). Most of this dust consists of cornstarch, french chalk, or talcum powder, which are used to lubricate the airbag during deployment. Newer designs produce effluent primarily consisting of harmless talcum powder/cornstarch and nitrogen gas. In older designs using an azide-based propellant (usually NaN3), varying amounts of sodium hydroxide nearly always are initially present. In small amounts this chemical can cause minor irritation to the eyes and/or open wounds; however, with exposure to air, it quickly turns into sodium bicarbonate (baking soda). However, this transformation is not 100% complete, and invariably leaves residual amounts of hydroxide ions from NaOH. Depending on the type of airbag system, potassium chloride may also be present. For most people, the only effect the dust may produce is some minor irritation of the throat and eyes. Generally, minor irritations only occur when the occupant remains in the vehicle for many minutes with the windows closed and no ventilation. However, some people with asthma may develop a potentially lethal asthmatic attack from inhaling the dust. Because of the airbag exit flap design of the steering wheel boss and dashboard panel, these items are not designed to be recoverable if an airbag deploys, meaning that they have to be replaced if the vehicle has not been written off in a collision. Moreover, the dust-like particles and gases can cause irreparable cosmetic damage to the dashboard and upholstery, meaning that minor collisions that result in the deployment of airbags can be costly, even if there are no injuries and there is only minor damage to the vehicle structure. Regulatory specifications United States On 11 July 1984, the United States government amended Federal Motor Vehicle Safety Standard 208 (FMVSS 208) to require cars produced after 1 April 1989 to be equipped with a passive restraint for the driver. 
An airbag or an automatic seat belt would meet the requirements of the standard. Airbag introduction was stimulated by the National Highway Traffic Safety Administration. However, airbags were not mandatory on light trucks until 1997. In 1998, FMVSS 208 was amended to require dual front airbags, and reduced-power, second-generation airbags were also mandated. This was due to the injuries caused by first-generation airbags, though FMVSS 208 continues to require that bags be engineered and calibrated to be able to "save" the life of an unbelted 50th-percentile size and weight "male" crash test dummy. The technical performance and validation requirements for the inflator assembly used in airbag modules are specified in SAE USCAR 24–2. Europe Some countries outside North America adhere to internationalized European ECE vehicle and equipment regulations rather than the United States Federal Motor Vehicle Safety Standards. ECE airbags are generally smaller and inflate less forcefully than United States airbags because the ECE specifications are based on belted crash test dummies. The Euro NCAP vehicle safety rating encourages manufacturers to take a comprehensive approach to occupant safety; a good rating can only be achieved by combining airbags with other safety features. Almost every new car sold in Europe is equipped with front and side airbags, but in the European Union in 2020 and in the United Kingdom, and most other developed countries there is no direct legal requirement for new cars to feature airbags. Central and South America Ecuador requires dual front airbags in new car models since 2013. Since January 2014, except for micro vehicles, all new cars made or imported in Argentina must have front airbags. Since 1 January 2014, all new cars sold in Brazil must have dual front airbags. Since July 2014, all new cars sold in Uruguay must have dual front airbags. Since December 2016, all new cars sold in Chile must have dual front airbags. Since 1 January 2017, all cars made or imported in Colombia must have dual front airbags. Since 1 January 2020, all new cars sold in Mexico must have dual front airbags. India On 5 March 2021, the Indian Ministry of Road Transport and Highways mandated that all new vehicle models introduced in India after 1 April 2021 have dual front airbags; the regulation also requires that all existing models be equipped with dual front airbags by 31 August 2021. India also mandated that all passenger vehicles sold after October 2023 must have a minimum of six airbags. Maintenance Inadvertent airbag deployment while the vehicle is being serviced can result in severe injury, and an improperly installed or defective airbag unit may not operate or perform as intended. Those servicing a vehicle, as well as first responders, must exercise extreme awareness, as many airbag control systems may remain powered for roughly 30 minutes after disconnecting the vehicle's battery. Some countries impose restrictions on the sale, transport, handling, and service of airbags and system components. In Germany, airbags are regulated as harmful explosives; only mechanics with special training are allowed to service airbag systems. Some automakers (such as Mercedes-Benz) call for the replacement of undeployed airbags after a certain time to ensure their reliability in a collision. One example is the 1992 S500, which has an expiry date sticker attached to the door pillar. Some Škoda vehicles indicate an expiry date of 14 years from the date of manufacture. 
In this case, replacement would be uneconomic as the car would have negligible value at 14 years old, far less than the cost of fitting new airbags. Volvo has stated that "airbags do not require replacement during the lifetime of the vehicle," though this cannot be taken as a guarantee on the device. Limitations Although the millions of installed airbags in use have an excellent safety record, some limitations on their ability to protect car occupants exist. The original implementation of front airbags did little to protect against side collisions, which can be more dangerous than frontal collisions because the protective crumple zone in front of the passenger compartment is completely bypassed. Side airbags and protective airbag curtains are increasingly being required in modern vehicles to protect against this very common category of collisions. Airbags are designed to deploy once only, so are ineffective if any further collisions occur after an initial impact. Multiple impacts may occur during rollovers or other incidents involving multiple collisions, such as many multivehicle collisions. An extremely dangerous situation occurs during "underride collisions", in which a passenger vehicle collides with the rear of a tractor-trailer lacking a rear underride guard, or hits the side of such a trailer not equipped with a side underride guard. The platform bed of a typical trailer is roughly at the head height of a seated adult occupant of a typical passenger car. This means not much of a barrier exists between a head and the edge of the trailer platform, except a glass windshield. In an underride collision, the car's crush zones designed to absorb collision energy are completely bypassed, and the airbags may not deploy in time because the car does not decelerate appreciably until the windshield and roof pillars have already impacted the trailer bed. Even delayed inflation of airbags may be useless because of major intrusion into the passenger space, leaving occupants at high risk of major head trauma or decapitation in even low-speed collisions. Western European standards for underride guards have been stricter than North American standards, which typically have allowed grandfathering of older equipment that may still be on the road for decades. Typical airbag systems are completely disabled by turning off the ignition key. Unexpected turnoffs usually also disable the engine, power steering, and power brakes, and can be the direct cause of a collision. If a violent collision occurs, the disabled airbags will not deploy to protect vehicle occupants. In 2014, General Motors admitted to concealing information about fatal collisions caused by defective ignition switches that would abruptly shut down a car (including its airbags). Between 13 and 74 deaths have been directly attributed to this defect, depending on how the fatalities are classified. Injuries and fatalities Under some rare conditions, airbags can injure and in some very rare instances kill vehicle occupants. To provide crash protection for occupants not wearing seat belts, United States airbag designs trigger much more forcefully than airbags designed to the international ECE standards used in most other countries. Recent "smart" airbag controllers can recognize if a seat belt is used, and alter the airbag cushion deployment parameters accordingly. In 1990, the first automotive fatality attributed to an airbag was reported. 
TRW produced the first gas-inflated airbag in 1994, with sensors and low inflation-force bags becoming common soon afterward. Dual-depth (also known as dual-stage) airbags appeared on passenger cars in 1998. By 2005, deaths related to airbags had declined, with no adult deaths and two child deaths attributed to airbags that year. However, injuries remain fairly common in collisions with airbag deployment. Serious injuries are less common, but severe or fatal injuries can occur to vehicle occupants very near an airbag or in direct contact when it deploys. Such injuries may be sustained by unconscious drivers slumped over the steering wheel, unrestrained or improperly restrained occupants who slide forward in the seat during precrash braking, and properly belted drivers sitting very close to the steering wheel. A good reason for the driver not to cross hands over the steering wheel, a rule taught to most learner drivers, but quickly forgotten by most, is that an airbag deployment while negotiating a turn may result in the driver's hand(s) being driven forcefully into his or her face, exacerbating any injuries from the airbag alone. Improvements in sensing and gas-generator technology have allowed the development of third-generation airbag systems that can adjust their deployment parameters to the size, weight, position, and restraint status of the occupant. These improvements have demonstrated a reduced injury risk factor for small adults and children, who had an increased risk of injury with first-generation airbag systems. One model of airbags made by the Takata Corporation used ammonium nitrate–based gas-generating compositions in airbag inflators instead of the more stable, but more expensive compound tetrazole. The ammonium nitrate-based inflators have a flaw where old inflators with long-term exposure to hot and humid climate conditions could rupture during deployment, projecting metal shards through the airbag and into the driver. As of December 2022, the defect has caused 33 deaths worldwide, with up to 24 in the U.S. and the remaining in Australia and Malaysia. The National Highway Traffic Safety Administration (NHTSA) recalled over 33 million vehicles in May 2015, and fined Takata $70 million in November 2015. Toyota, Mazda, and Honda have said that they will not use ammonium-nitrate inflators. In June 2017, Takata filed for bankruptcy. Airbag fatality statistics From 1990 to 2000, the United States NHTSA identified 175 fatalities caused by airbags. Most of these (104) have been children. About 3.3 million airbag deployments have occurred during that interval, and the agency estimates more than 6,377 lives were saved and countless injuries were prevented. A rear-facing infant restraint put in the front seat of a vehicle places an infant's head close to the airbag, which can cause severe head injuries or death if the airbag deploys. Some modern cars include a switch to disable the front-passenger airbag, in case a child-supporting seat is used there (although not in Australia, where rear-facing child seats are prohibited in the front where an airbag is fitted). In vehicles with side airbags, it is dangerous for occupants to lean against the windows, doors, and pillars, or to place objects between themselves and the side of the vehicle. Articles hung from a vehicle's clothes hanger hooks can be hazardous if the vehicle's side-curtain airbags are deployed. A seat-mounted airbag may also cause internal injury if the occupant leans against the door. 
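To put the NHTSA figures quoted above in rough perspective, the short calculation below relates the 175 identified fatalities to the roughly 3.3 million deployments and the more than 6,377 lives estimated saved over 1990–2000. This is only simple arithmetic on the numbers already given in the text, not an independent risk estimate.

fatalities = 175          # airbag-caused fatalities identified by NHTSA, 1990-2000
deployments = 3_300_000   # approximate airbag deployments in the same interval
lives_saved = 6_377       # NHTSA's lower-bound estimate of lives saved

print(f"about {fatalities / deployments * 1e6:.0f} fatalities per million deployments")
print(f"about {lives_saved / fatalities:.0f} lives saved per airbag-caused fatality")
# about 53 fatalities per million deployments
# about 36 lives saved per airbag-caused fatality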
Aerospace and military applications
The aerospace industry and the United States government have applied airbag technologies for many years. NASA and the United States Department of Defense have incorporated airbag systems in various aircraft and spacecraft applications since as early as the 1960s.
Spacecraft airbag landing systems
The first use of airbags for landing was on the Luna 9 and Luna 13 missions. As with later missions, these used airbags to bounce along the surface, absorbing landing energy. The Mars Pathfinder lander employed an innovative airbag landing system, supplemented with aerobraking, a parachute, and solid-rocket landing thrusters. This prototype successfully tested the concept, and the two Mars Exploration Rover Mission landers employed similar landing systems. The Beagle 2 Mars lander also used airbags for landing; the lander touched down safely, but several of the spacecraft's solar panels failed to deploy, thereby disabling the spacecraft. The Boeing Starliner uses six airbags to cushion the ground landing of the crewed capsule.
Aircraft airbag landing systems
Airbags have also been used on military fixed-wing aircraft, such as the escape crew capsule of the F-111 Aardvark.
Occupant protection
The United States Army has incorporated airbags in its UH-60A/L Black Hawk and OH-58D Kiowa Warrior helicopter fleets. The Cockpit Air Bag System (CABS) consists of forward and lateral airbags, and an inflatable tubular structure (on the OH-58D only), with an Electronic Crash Sensor Unit (ECSU). The CABS system was developed by the United States Army Aviation Applied Technology Directorate through a contract with Simula Safety Systems (now BAE Systems). It is the first conventional airbag system for occupant injury prevention worldwide to be designed, developed, and placed in service for an aircraft, and the first specifically for helicopter applications.
Other uses
In the mid-1970s, the UK Transport Research Laboratory tested several types of motorcycle airbags. In 2006, Honda introduced the first production motorcycle airbag safety system on its Gold Wing motorcycle. Honda claims that sensors in the front forks can detect a severe frontal collision and decide when to deploy the airbag, absorbing some of the forward energy of the rider and reducing the velocity at which the rider may be thrown from the motorcycle. More commonly, airbag vests, either integrated into the motorcyclist's jacket or worn over it, have come into use among regular riders on the street. Since 2018, MotoGP has made it compulsory for riders to wear suits with integrated airbags. Similarly, companies such as Helite and Hit-Air have commercialized equestrian airbags, which attach to the saddle and are worn by the rider. Other sports, particularly skiing and snowboarding, have started introducing airbag safety mechanisms.
See also
Airbag dermatitis
Airplane airbags
List of auto parts
Precrash system
Safety standards
https://en.wikipedia.org/wiki/Psilocybe%20cubensis
Psilocybe cubensis, commonly known as the magic mushroom, shroom, golden halo, golden teacher, cube, or gold cap, is a species of psilocybin mushroom of moderate potency whose principal active compounds are psilocybin and psilocin. It belongs to the fungus family Hymenogastraceae and was previously known as Stropharia cubensis. It is the best-known psilocybin mushroom due to its wide distribution and ease of cultivation. Taxonomy The species was first described in 1906 as Stropharia cubensis by American mycologist Franklin Sumner Earle in Cuba. In 1907, it was identified as Naematoloma caerulescens in Tonkin (now Vietnam) by French pharmacist and mycologist Narcisse Théophile Patouillard, while in 1941, it was called Stropharia cyanescens by William Alphonso Murrill near Gainesville in Florida. German-born mycologist Rolf Singer moved the species into the genus Psilocybe in 1949, giving it the binomial name Psilocybe cubensis. The synonyms were later also assigned to the species Psilocybe cubensis. The name Psilocybe is derived from the Ancient Greek roots psilos (ψιλος) and kubê (κυβη), and translates as "bare head". Cubensis means "coming from Cuba", and refers to the type locality published by Earle. Singer divided P. cubensis into three varieties: the nominate, which usually had a brownish cap, Murrill's cyanescens from Florida, which generally had a pale cap, and var caerulascens from Indochina with a more yellowish cap. Psilocybe cubensis is commonly known as gold top, golden top or gold cap in Australia, sacred mushroom or blue mushroom in Brazil, and San Ysidro or Palenque mushroom in the United States and Mexico, while the term "magic mushroom" has been applied to hallucinogenic mushrooms in general. It is commonly known as "Golden teacher" in South Africa. A common name in Thai is "Hed keequai", which translates as "mushroom which appears after water buffalo defecates". Description The cap is , conic to convex with a central papilla when young, becoming broadly convex to plane with age, retaining a slight umbo sometimes surrounded by a ring-shaped depression. The cap surface is smooth and sticky, sometimes with white universal veil remnants attached. The cap is brown becoming paler to almost white at the margin and fades to more golden-brown or yellowish with age. When bruised, all parts of the mushroom stain blue. The narrow grey gills are adnate to adnexed, sometimes seceding attachment, and darken to purplish-black and somewhat mottled with age. The gill edges remain whitish. The hollow white stipe is high by thick, becoming yellowish in age. The well-developed veil leaves a persistent white membranous ring whose surface usually becomes the same color as the gills because of falling spores. The fruiting bodies are 90% water. The mushroom has no odor and has been described as tasting farinaceous, with an alkaline or metallic aftertaste. The spores are 11.5–17.3 x 8–11.5 μm, sub-ellipsoid, basidia 4-spored but sometimes 2- or 3-, pleurocystidia and cheilocystidia present. The related species Psilocybe subcubensis—found in tropical regions—is indistinguishable but has smaller spores. 
Distribution and habitat Psilocybe cubensis is a pan-tropical species, occurring in the Gulf Coast states and southeastern United States, Mexico, in the Central American countries of Belize, Costa Rica, Panamá, El Salvador and Guatemala, the Caribbean countries Cuba, the Dominican Republic, Guadalupe, Martinique, and Trinidad, in the South American countries of Argentina, Bolivia, Brazil, Colombia, French Guiana, Paraguay, Uruguay and Peru, Southeast Asia, including Thailand, Vietnam, Cambodia and Malaysia, India, Australia, Fiji, and possibly Nepal and Hawaii. Psilocybe cubensis is found on cow (and occasionally horse) dung, sugar cane mulch or rich pasture soil, with mushrooms appearing from February to December in the northern hemisphere, and November to April in the southern hemisphere. In Asia, the species grows on water buffalo dung. Along with other fungi that grow on cow dung, P. cubensis is thought to have colonized Australia with the introduction of cattle there, 1800 of which were on the Australian mainland by 1803—having been transported there from the Cape of Good Hope, Kolkata and the American west coast. In Australia, the species grows between northern Queensland to southern New South Wales. In March 2018, several Psilocybe cubensis specimens were collected in Zimbabwe in the Wedza District of Mashonaland East province, approx. 120 km  southeast of Harare. This was the first reported occurrence of a psilocybin mushroom in Zimbabwe. The mushrooms were collected on Imire Rhino & Wildlife Conservation - a nature reserve that is home to both wildlife and cattle, as well as cattle egrets. Relationship with cattle Because Psilocybe cubensis is intimately associated with cattle ranching, the fungus has found unique dispersal niches not available to most other members of the family Hymenogastraceae. Of particular interest is the cattle egret (Bubulcus ibis), a colonizer of Old World origin (via South America), whose range of distribution overlaps much of that of Psilocybe cubensis. Cattle egrets typically walk alongside cattle, preying on insects; they track through spore-laden vegetation and cow dung and transfer the spores to suitable habitats, often thousands of miles away during migration activities. This type of spore dispersal is known as zoochory, and it enables a parent species to propagate over a much greater range than it could achieve alone. The relationship between cattle, cattle egrets, and Psilocybe cubensis is an example of symbiosis—a situation in which dissimilar organisms live together in close association. Cultivation Psilocybe cubensis grows naturally in tropical and subtropical conditions, often near cattle due to the ideal conditions they provide for the growth of the fungus. The cow usually consumes grains or grass covered with the spores of Psilocybe cubensis and the fungus will begin to germinate within the dung. Mushrooms such as Psilocybe cubensis are relatively easy to cultivate indoors. First, spores are inoculated within sterilized jars or bags, colloquially known as grainspawn, containing a form of carbohydrate nutrient such as rye grains. After approximately one month, the spores fully colonize the grain spawn forming dense mycelium, which is then planted within a substrate such as a coconut husk fiber and vermiculite mixture. Given proper humidity, temperature, and fresh air exchange, the substrate will produce fruiting Psilocybe cubensis bodies within a month of planting. 
To preserve potency after harvesting, growers often dehydrate the fruit and store them in air-tight containers in cool environments. A study conducted in 2009 showed that mushrooms grown in the dark had higher levels of psilocybin and psilocin compared to the mushrooms grown in bright, indirect light, which had minimum levels. Studies were conducted where an environmentally controlled wind tunnel and a computer program were used to determine the influence of humidity on the individual basidiocarps of Psilocybe cubensis which aided in mapping their growth and development. The transpiration and growth of the mushroom were heavily influenced by the humidity of the air, and the transpiration was accelerated at higher humidities while light did not affect the growth. Faster growth was observed at higher humidities. It was also discovered that misting enhanced both the growth and transpiration rates in the growing process of Psilocybe cubensis. Small-scale cultivation of Psilocybe cubensis is often accomplished with "cakes" that colonize within jars, but fruit inside specially designed tubs called "shotgun fruiting chambers". The most common cake method for beginners is PF-Tek ("Psilocybe Fanaticus technique"), named after Psylocybe Fanaticus, the clandestine cultivator credited for its creation. Cakes are popular for the new cultivator because of their fool-proof inoculation methods and low cost of startup materials. As cakes are composed of brown rice flour, vermiculite, and gypsum, they can be steam-sterilized in a large pot. Unlike cereal grains used in bulk growing, brown rice flour contains no bacterial endospores, a contamination vector requiring a pressure cooker to sterilize. Cultivation methods resulting in larger yields are categorized as "bulk growing." Bulk growing allows cultivators to operate on a larger scale, but require a greater investment of time, money, and knowledge. While small-scale grows utilize spore syringes to inject spore solution into cakes, bulk methods instead use grain spawn as primary nutrition for the subsequent growth. Additionally, cultivators must develop solid sterile technique in working with agar. Instead of inoculating grain with spores, growers instead germinate spores on agar plates, then transfer the resultant healthy mycelium to the grain jars. Once the grain is colonized with clean mycelial growth, users inoculate their bulk substrates with the grain in a process known as "spawning." Bulk substrates are frequently a mix of coir, vermiculite and gypsum due to not requiring pasteurization or sterilization. However, some utilize blends of manure-based substrates or straw; substrates which always require pasteurization with open-air spawning. After spawning, the healthy mycelium will colonize the bulk substrate, and given proper conditions, eventually fruit mushrooms. Terence and Dennis McKenna made Psilocybe cubensis particularly famous when they published Psilocybin: Magic Mushroom Grower's Guide in the 1970s upon their return from the Amazon rainforest, having deduced new methods (based on pre-existing techniques originally described by J.P. San Antonio) for growing psilocybin mushrooms and assuring their audience that Psilocybe cubensis were amongst the easiest psilocybin-containing mushrooms to cultivate. The potency of cultivated specimens can vary widely per each flush (harvest). In a classic paper published by Jeremy Bigwood and M.W. 
Beug, it was shown that with each flush, psilocybin levels varied somewhat unpredictably but were much the same on the first flush as they were on the last flush; however, psilocin was typically absent in the first two flushes but peaked by the fourth flush, making it the most potent. Two strains were also analyzed to determine potency in caps and stems: In one strain the caps contained generally twice as much psilocybin as the stems, but the small amount of psilocin present was entirely in the stems. In the other strain, a trace of psilocin was present in the cap but not in the stem; the cap and stem contained equal amounts of psilocybin. The study concluded that the levels of psilocybin and psilocin vary by over a factor of four in cultures of Psilocybe cubensis grown under controlled conditions. Psychedelic and entheogenic use Singer noted in 1949 that Psilocybe cubensis had psychoactive properties. In Australia, the use of psychoactive mushrooms grew rapidly between 1969 and 1975. In a 1992 paper, locals and tourists in Thailand were reported to consume P. cubensis and related species in mushroom omelets—particularly in Ko Samui and Ko Pha-ngan. At times, omelets were adulterated with LSD, resulting in prolonged intoxication. A thriving subculture had developed in the region. Other localities, such as Hat Yai, Ko Samet, and Chiang Mai, also had some reported usage. In 1996, jars of honey containing Psilocybe cubensis were confiscated at the Dutch-German border. Upon examination, it was revealed that jars of honey containing psychedelic mushrooms were being sold at Dutch coffee shops. P. cubensis is probably the most widely known of the psilocybin-containing mushrooms used for triggering psychedelic experiences after ingestion. Its major psychoactive compounds are: Psilocybin (4-phosphoryloxy-N,N-dimethyltryptamine) Psilocin (4-hydroxy-N,N-dimethyltryptamine) Baeocystin (4-phosphoryloxy-N-methyltryptamine) Norbaeocystin (4-phosphoryloxytryptamine) Aeruginascin (N,N,N-trimethyl-4-phosphoryloxytryptamine) The concentrations of psilocin and psilocybin, as determined by high-performance liquid chromatography, are in the range of 0.14–0.42% (wet weight) and 0.37–1.30% (dry weight) in the whole mushroom 0.17–0.78% (wet weight) and 0.44–1.35% (dry weight) in the cap, and 0.09%–0.30% (wet weight) and 0.05–1.27% (dry weight) in the stem, respectively. For quickly and practically measuring the psychoactive contents of most healthy Psilocybe cubensis varieties, it can generally be assumed that there is approximately 15 mg (+/- 5 mg) of psilocybin per gram of dried mushroom. Furthermore, due to factors such as age and storage method, the psilocybin and psilocin content of a given sample of mushrooms will vary. Individual body composition, brain chemistry and psychological predisposition play a significant role in determining appropriate doses. For a modest psychedelic effect, a minimum of one gram of dried Psilocybe cubensis mushrooms is ingested orally, 0.25–1 gram is usually sufficient to produce a mild effect, 1–2.5 grams usually provides a moderate effect and 2.5 grams and higher usually produces strong effects. For most people, 3.5 dried grams (1/8 oz) would be considered a high dose and may produce an intense experience; this is, however, typically considered a standard dose among recreational users. Body composition (usually weight) should be taken into account when calculating dosage. For many individuals, doses above three grams may be overwhelming. 
For a few rare people, doses as small as 0.25 gram can produce full-blown effects normally associated with very high doses. For most people, however, that dose level would have virtually no effects. There are many different ways to ingest Psilocybe cubensis. Users may prefer to take them raw, freshly harvested, or dried and preserved. It is also possible to prepare culinary dishes such as pasta or tea with the mushrooms. However, the psychoactive compounds begin to break down rapidly at temperatures exceeding 100 °C (212 °F). Another method of ingestion known as "Lemon Tekking" involves combining pulverized Psilocybe cubensis with a concentrated citrus juice with a pH of ~2. Many users believe that a considerable amount of the psilocybin will have been dephosphorylated into psilocin, the psychoactive metabolite, by citric acid. However, this claim is not substantiated by the literature on the metabolism of psilocybin, as dephosphorylation is known to be mediated by the enzyme alkaline phosphatase in humans. It is therefore more likely that citric acid mostly helps in breakdown of mushroom cells, aiding in digestion and psilocybin release. The "Lemon Tekk" method of consumption results in a more rapid onset and can offer easier digestion or reduced "come-up pressure" associated with raw consumption. Psilocybe cubensis can also be taken in conjunction with other botanicals such as turmeric, ginger, and black pepper. A 2019 study observed turmeric to act as a mild MAOI, which, when combined with psilocin, potentiates the biochemical interactions between serotonin receptors and psilocin, creating an entourage effect. Upon ingestion, effects usually begin after approximately 20–60 minutes (depending on the method of ingestion and stomach contents) and may last from four to ten hours, depending on dosage and individual biochemistry. Visual distortions often occur, including walls that seem to breathe, a vivid enhancement of colors, and the animation of organic shapes. The effects of high doses can be overwhelming depending on the particular phenotype of cubensis, grow method, and the individual. It is recommended not to eat wild mushrooms without properly identifying them as they may be poisonous. In particular, similar species include mushrooms of the genus Galerina and Pholiotina rugosa—all potentially deadly—and Chlorophyllum molybdites. All of these grow in pastures, a similar habitat to that preferred by P. cubensis. In 2019, a 15-year-old boy suffered from transient kidney failure after eating P. cubensis from a cultivation kit in Canada. No one else in the group suffered any ill effects. Legality Psilocybin and psilocin are listed as Schedule I drugs under the United Nations 1971 Convention on Psychotropic Substances. However, mushrooms containing psilocybin and psilocin are not illegal in some parts of the world. For example, in Brazil they are legal, but extractions from the mushroom containing psilocybin and psilocin remain illegal. In the United States, growing or possessing Psilocybe cubensis mushrooms is illegal in all states, but it is legal to possess and buy the spores for microscopy purposes. However, as of May 8, 2019 Denver, Colorado has decriminalized it for those 21 and up. On June 4, 2019, Oakland, California followed suit, decriminalizing psilocybin-containing mushrooms as well as the Peyote cactus. On January 29, 2020, Santa Cruz, California decriminalized naturally-occurring psychedelics, including psilocybin mushrooms. 
On November 3, 2020, the state of Oregon decriminalized possession of psilocybin mushrooms for recreational use and granted licensed practitioners permission to administer psilocybin mushrooms to individuals age 21 years and older. In 1978, the Florida Supreme Court ruled in Fiske vs Florida that possession of psilocybin mushrooms is not illegal, in that the mushrooms cannot be considered a "container" for psilocybin based on how the law is written, i.e., it does not specifically state that psilocybin mushrooms themselves are illegal, but that the hallucinogenic constituents in them are. According to this decision, the applicable statute as framed imparts no information as to which plants may contain psilocybin in its natural state and does not advise a person of ordinary intelligence that this substance is contained in a particular variety of mushroom. The statute, therefore, can not constitutionally be applied to the appellant. The production, sale and possession of magic mushrooms is illegal in Canada. See also List of psilocybin mushrooms List of psychoactive plants, fungi, and animals Botanical identity of soma-haoma Psilocybin decriminalization in the United States References Further reading Guzman, G. The Genus Psilocybe: A Systematic Revision of the Known Species Including the History, Distribution and Chemistry of the Hallucinogenic Species. Beihefte zur Nova Hedwigia Heft 74. J. Cramer, Vaduz, Germany (1983) [now out of print]. Guzman, G. "Supplement to the genus Psilocybe." Bibliotheca Mycologica 159: 91-141 (1995). Haze, Virginia & Mandrake, K. The Psilocybin Mushroom Bible: The Definitive Guide to Growing and Using Magic Mushrooms. Green Candy Press: Toronto, Canada, 2016. . External links The Ones That Stain Blue Studies in ethnomycology including the contributions of Maria Sabina, Dr. Albert Hofmann and Dr. Gaston Guzman. Psilocybe cubensis drawings and information Erowid Psilocybin Mushroom Vault Mushroom John's Tale of the Shrooms: Psilocybe cubensis Entheogens Fungi described in 1906 Psychoactive fungi cubensis Psychedelic tryptamine carriers Soma (drink) Fungus species
Psilocybe cubensis
[ "Biology" ]
4,318
[ "Fungi", "Fungus species" ]
60,075
https://en.wikipedia.org/wiki/Unknot
In the mathematical theory of knots, the unknot, not knot, or trivial knot, is the least knotted of all knots. Intuitively, the unknot is a closed loop of rope without a knot tied into it. To a knot theorist, an unknot is any embedded topological circle in the 3-sphere that is ambient isotopic (that is, deformable) to a geometrically round circle, the standard unknot. The unknot is the only knot that is the boundary of an embedded disk, which gives the characterization that only unknots have Seifert genus 0. Similarly, the unknot is the identity element with respect to the knot sum operation. Unknotting problem Deciding if a particular knot is the unknot was a major driving force behind knot invariants, since it was thought this approach would possibly give an efficient algorithm to recognize the unknot from some presentation such as a knot diagram. Unknot recognition is known to be in both NP and co-NP. It is known that knot Floer homology and Khovanov homology detect the unknot, but these are not known to be efficiently computable for this purpose. It is not known whether the Jones polynomial or finite type invariants can detect the unknot. Examples It can be difficult to find a way to untangle string even though the fact that it started out untangled proves the task is possible. Thistlethwaite and Ochiai provided many examples of diagrams of unknots that have no obvious way to simplify them, requiring one to temporarily increase the diagram's crossing number. While rope is generally not in the form of a closed loop, sometimes there is a canonical way to imagine the ends being joined together. From this point of view, many useful practical knots are actually the unknot, including those that can be tied in a bight. Every tame knot can be represented as a linkage, which is a collection of rigid line segments connected by universal joints at their endpoints. The stick number is the minimal number of segments needed to represent a knot as a linkage, and a stuck unknot is a particular unknotted linkage that cannot be reconfigured into a flat convex polygon. As with crossing number, a linkage might need to be made more complex by subdividing its segments before it can be simplified. Invariants The Alexander–Conway polynomial and Jones polynomial of the unknot are trivial: Δ(t) = 1, ∇(z) = 1, and V(q) = 1. No other knot with 10 or fewer crossings has trivial Alexander polynomial, but the Kinoshita–Terasaka knot and Conway knot (both of which have 11 crossings) have the same Alexander and Conway polynomials as the unknot. It is an open problem whether any non-trivial knot has the same Jones polynomial as the unknot. The unknot is the only knot whose knot group is an infinite cyclic group, and its knot complement is homeomorphic to a solid torus. See also References External links Circles
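To spell out the trivial invariants just mentioned, the short derivation below is a sketch of standard knot-theory material (the Seifert-matrix formula for the Alexander polynomial); it is not a computation taken from the article itself.

```latex
% The unknot bounds an embedded disk (Seifert genus 0), so a Seifert surface
% has trivial first homology and the Seifert matrix V is the empty 0x0 matrix:
\[
\Delta(t) \doteq \det\!\bigl(V - tV^{\mathsf{T}}\bigr) = \det(\text{empty matrix}) = 1.
\]
% The Conway and Jones polynomials equal 1 for the unknot by the normalizations
% \nabla(\text{unknot}) = 1 and V(\text{unknot}) = 1 built into their skein relations.
```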
Unknot
[ "Mathematics" ]
627
[ "Circles", "Pi" ]
60,087
https://en.wikipedia.org/wiki/Vaporware
In the computer industry, vaporware (or vapourware) is a product, typically computer hardware or software, that is announced to the general public but is late, never actually manufactured, or officially cancelled. Use of the word has broadened to include products such as automobiles. Vaporware is often announced months or years before its purported release, with few details about its development being released. Developers have been accused of intentionally promoting vaporware to keep customers from switching to competing products that offer more features. Network World magazine called vaporware an "epidemic" in 1989 and blamed the press for not investigating if developers' claims were true. Seven major companies issued a report in 1990 saying that they felt vaporware had hurt the industry's credibility. The United States accused several companies of announcing vaporware early enough to violate antitrust laws, but few have been found guilty. "Vaporware" was coined by a Microsoft engineer in 1982 to describe the company's Xenix operating system and appeared in print at least as early as the May 1983 issue of Sinclair User magazine (spelled as 'Vapourware' in UK English). It became popular among writers in the industry as a way to describe products they felt took too long to be released. InfoWorld magazine editor Stewart Alsop helped popularize it by lampooning Bill Gates with a Golden Vaporware award for the late release of his company's first version of Windows in 1985. Etymology "Vaporware", sometimes synonymous with "vaportalk" in the 1980s, has no single definition. It is generally used to describe a hardware or software product that has been announced, but that the developer is unlikely to release any time soon, if ever. The first reported use of the word was in 1982 by an engineer at the computer software company Microsoft. Ann Winblad, president of Open Systems Accounting Software, wanted to know if Microsoft planned to stop developing its Xenix operating system as some of Open System's products depended on it. She asked two Microsoft software engineers, John Ulett and Mark Ursino, who confirmed that development of Xenix had stopped. "One of them told me, 'Basically, it's vaporware'," she later said. Winblad compared the word to the idea of "selling smoke", implying Microsoft was selling a product it would soon not support. Winblad described the word to influential computer expert Esther Dyson, who published it for the first time in her monthly newsletter RELease 1.0. In an article titled "Vaporware" in the November 1983 issue of RELease 1.0, Dyson defined the word as "good ideas incompletely implemented". She described three software products shown at COMDEX in Las Vegas that year with bombastic advertisements. She stated that demonstrations of the "purported revolutions, breakthroughs and new generations" at the exhibition did not meet those claims. The practice existed before Winblad's account. In a January 1982 review of the new IBM Personal Computer, BYTE favorably noted that IBM "refused to acknowledge the existence of any product that is not ready to be put on dealers' shelves tomorrow. Although this is frustrating at times, it is a refreshing change from some companies' practice of announcing a product even before its design is finished". When discussing Coleco's delay in releasing the Adam, Creative Computing in March 1984 stated that the company "did not invent the common practice of debuting products before they actually exist. 
In microcomputers, to do so otherwise would be to break with a veritable tradition". After Dyson's article, the word "vaporware" became popular among writers in the personal computer software industry as a way to describe products they believed took too long to be released after their first announcement. InfoWorld magazine editor Stewart Alsop helped popularize its use by presenting Bill Gates, then-CEO of Microsoft, with a Golden Vaporware award for Microsoft releasing Windows in 1985, 18 months late. Alsop presented it to Gates at a celebration for the release while the song "The Impossible Dream" played in the background. "Vaporware" took on another meaning when it was used to describe a product that did not exist. A new company named Ovation Technologies announced its office suite Ovation in 1983. The company invested in an advertising campaign that promoted Ovation as a "great innovation", and showed a demonstration of the program at computer trade shows. The demonstration was well received by writers in the press, was featured in a cover story for an industry magazine, and reportedly created anticipation among potential customers. Executives later revealed that Ovation never existed. The company created the fake demonstration in an unsuccessful attempt to raise money to finish their product; Ovation is "widely considered the mother of all vaporware," according to Laurie Flynn of The New York Times. Use of the term spread beyond the computer industry. Newsweek magazine's Allan Sloan described the manipulation of stocks by Yahoo! and Amazon.com as "financial vaporware" in 1997. Popular Science magazine uses a scale ranging from "vaporware" to "bet on it" to describe release dates of new consumer electronics. Car manufacturer General Motors' plans to develop and sell an electric car were called vaporware by an advocacy group in 2008 and Car and Driver magazine retroactively described the Vector W8 supercar as vaporware in 2017. Causes and use Late release A product missing its announced release date, and the press then labeling it vaporware, can result from its development simply taking longer than planned. Most software products are not released on time, according to researchers in 2001 who studied the causes and effects of vaporware; "I hate to say yes, but yes", a Microsoft product manager stated in 1984, adding that "the problem isn't just at Microsoft". The phenomenon is so common that Lotus' release of 1-2-3 on time in January 1983, three months after announcing it, amazed many. Software development is a complex process, and developers are often uncertain how long it will take to complete any given project. Fixing errors in software, for example, can make up a significant portion of its development time, and developers are motivated not to release software with errors because it could damage their reputation with customers. Last-minute design changes are also common. Large organizations seem to have more late projects than smaller ones, and may benefit from hiring individual programmers on contract to write software rather than using in-house development teams. Adding people to a late software project does not help; according to Brooks' Law, doing so increases the delay. Not all delays in software are the developers' fault. In 1986, the American National Standards Institute adopted SQL as the standard database manipulation language. Software company Ashton-Tate was ready to release dBase IV, but pushed the release date back to add support for SQL.
The company believed that the product would not be competitive without it. As the word became more commonly used by writers in the mid-1980s, InfoWorld magazine editor James Fawcette wrote that its negative connotations were unfair to developers because of these types of circumstances. Duke Nukem Vaporware also includes announced products that are never released because of financial problems, or because the industry changes during its development. When 3D Realms first announced Duke Nukem Forever in 1997, the video game was early in its development. The company's previous game released in 1996, Duke Nukem 3D, was a critical and financial success, and customer anticipation for its sequel was high. As personal computer hardware speeds improved at a rapid pace in the late 1990s, it created an "arms race" between companies in the video game industry, according to Wired News. 3D Realms repeatedly moved the release date back over the next 12 years to add new, more advanced features. By the time 3D Realms went out of business in 2009 with the game still unreleased, Duke Nukem Forever had become synonymous with the word "vaporware" among industry writers. The game was revived and released in 2011. However, due to a 13-year period of fan anticipation and design changes in the industry, the game received a mostly negative reception from critics and fans. A company notorious for vaporware can improve its reputation. In the 1980s, video game maker Westwood Studios was known for shipping products late. However, by 1993, it had so improved that Computer Gaming World reported "many publishers would assure [us] that a project was going to be completed on time because Westwood was doing it". Early announcement Announcing products early—months or years before their release date, also called "preannouncing", has been an effective way by some developers to make their products successful. It can be seen as a legitimate part of their marketing strategy, but is generally not popular with industry press. The first company to release a product in a given market often gains an advantage. It can set the standard for similar future products, attract a large number of customers, and establish its brand before competitor's products are released. Public relations firm Coakley-Heagerty used an early announcement in 1984 to build interest among potential customers. Its client was Nolan Bushnell, formerly of Atari Inc. who wanted to promote the new Sente Technologies, but his contract with Atari prohibited doing so until a later date. The firm created an advertising campaign—including brochures and a shopping-mall appearance—around a large ambiguous box covered in brown paper to increase curiosity until Sente could be announced. Early announcements send signals not only to customers and the media, but also to providers of support products, regulatory agencies, financial analysts, investors, and other parties. For example, an early announcement can relay information to vendors, letting them know to prepare marketing and shelf space. It can signal third-party developers to begin work on their own products, and it can be used to persuade a company's investors that they are actively developing new, profitable ideas. When IBM announced its Professional Workstation computer in 1986, they noted the lack of third-party programs written for it at the time, signaling those developers to start preparing. 
Microsoft usually announces information about its operating systems early because third-party developers are dependent on that information to develop their own products. A developer can strategically announce a product that is in the early stages of development, or before development begins, to gain competitive advantage over other developers. In addition to the "vaporware" label, this is also called "ambush marketing", and "fear, uncertainty and doubt" (FUD) by the press. If the announcing developer is a large company, this may be done to influence smaller companies to stop development of similar products. The smaller company might decide their product will not be able to compete, and that it is not worth the development costs. It can also be done in response to a competitor's already released product. The goal is to make potential customers believe a second, better product will be released soon. The customer might reconsider buying from the competitor, and wait. In 1994, as customer anticipation increased for Microsoft's new version of Windows (codenamed "Chicago"), Apple announced a set of upgrades to its own System 7 operating system that were not due to be released until nearly two years later. The Wall Street Journal wrote that Apple did this to "blunt Chicago's momentum". A premature announcement can cause others to respond with their own. When VisiCorp announced Visi On in November 1982, it promised to ship the product by spring 1983. The news forced Quarterdeck Office Systems to announce in April 1983 that its DESQ would ship in November 1983. Microsoft responded by announcing Windows 1.0 in fall 1983, and Ovation Technologies followed by announcing Ovation in November. InfoWorld noted in May 1984 that of the four products only Visi On had shipped, albeit more than a year late and with only two supported applications. Industry publications widely accused companies of using early announcements intentionally to gain competitive advantage over others. In his 1989 Network World article, Joe Mohen wrote the practice had become a "vaporware epidemic", and blamed the press for not investigating claims by developers. "If the pharmaceutical industry were this careless, I could announce a cure for cancer today – to a believing press." In 1985 Stewart Alsop began publishing his influential monthly Vaporlist, a list of companies he felt announced their products too early, hoping to dissuade them from the practice; among the entries in January 1988 were a Verbatim Corp. optical drive that was 30 months late, WordPerfect for Macintosh (12 months), IBM OS/2 1.1 (nine months), and Lotus 1-2-3 for OS/2 and Macintosh (nine and three months late, respectively). Wired Magazine began publishing a similar list in 1997. Seven major software developers—including Ashton-Tate, Hewlett-Packard, and Sybase—formed a council in 1990, and issued a report condemning the "vacuous product announcement dubbed vaporware and other misrepresentations of product availability" because they felt it had hurt the industry's credibility. Antitrust allegations In the United States, announcing a product that does not exist to gain a competitive advantage is illegal via Section 2 of the Sherman Antitrust Act of 1890, but few hardware or software developers have been found guilty of it. The section requires proof that the announcement is both provably false, and has actual or likely market impact. False or misleading announcements designed to influence stock prices are illegal under United States securities fraud laws. 
The complex and changing nature of the computer industry, marketing techniques, and lack of precedent for applying these laws to the industry can mean developers are not aware their actions are illegal. The U.S. Securities and Exchange Commission issued a statement in 1984 with the goal of reminding companies that securities fraud also applies to "statements that can reasonably be expected to reach investors and the trading markets". Several companies have been accused in court of using knowingly false announcements to gain market advantage. In 1969, the United States Justice Department accused IBM of doing this in the case United States v. IBM. After IBM's competitor, Control Data Corporation (CDC), released a computer, IBM announced the System/360 Model 91. The announcement resulted in a significant reduction in sales of CDC's product. The Justice Department accused IBM of doing this intentionally because the System/360 Model 91 was not released until two years later. IBM avoided preannouncing products during the antitrust case, but after the case ended it resumed the practice. The company likely announced its PCjr in November 1983—four months before general availability in March 1984—to hurt sales of rival home computers during the important Christmas sales season. In 1985 The New York Times wrote The practice was not called "vaporware" at the time, but publications have since used the word to refer specifically to it. Similar cases have been filed against Kodak, AT&T, and Xerox. US District Judge Stanley Sporkin was a vocal opponent of the practice during his review of the settlement resulting from United States v. Microsoft Corp. in 1994. "Vaporware is a practice that is deceitful on its face and everybody in the business community knows it," said Sporkin. One of the accusations made during the trial was that Microsoft has illegally used early announcements. The review began when three anonymous companies protested the settlement, claiming the government did not thoroughly investigate Microsoft's use of the practice. Specifically, they claimed Microsoft announced its Quick Basic 3 program to slow sales of its competitor Borland's recently released Turbo Basic program. The review was dismissed for lack of explicit proof. See also List of vaporware List of commercial failures in video games Technology demonstration Osborne effect Development hell Abandonware Notes References External links Community Memory postings from 1996 on the term's origins crediting Ann Winblad and Stewart Alsop. RELease 1.0 November 1983 — a scanned copy of Esther Dyson's original article Wired Magazine Vaporware Awards Vaporware 1997: We Hardly Knew Ye Vaporware 1998: Windows NT Wins Vaporware 1999: The 'Winners' Vaporware 2000: Missing Inaction Vaporware 2001: Empty Promises Vaporware 2002: Tech up in Smoke? Vaporware 2003: Nuke 'Em if Ya Got 'Em Vaporware 2004: Phantom Haunts Us All Vaporware 2005: Better Late Than Never Vaporware 2006: Return of the King Vaporware 2007: Long Live the King Vaporware 2008: Crushing Disappointments, False Promises and Plain Old BS Vaporware 2009: Inhale the Fail Vaporware 2010: The Great White Duke Software release
Vaporware
[ "Technology" ]
3,370
[ "Computer industry", "Vaporware" ]
60,088
https://en.wikipedia.org/wiki/Roentgenium
Roentgenium is a synthetic chemical element; it has symbol Rg and atomic number 111. It is extremely radioactive and can only be created in a laboratory. The most stable known isotope, roentgenium-282, has a half-life of 130 seconds, although the unconfirmed roentgenium-286 may have a longer half-life of about 10.7 minutes. Roentgenium was first created in December 1994 by the GSI Helmholtz Centre for Heavy Ion Research near Darmstadt, Germany. It is named after the physicist Wilhelm Röntgen (also spelled Roentgen), who discovered X-rays. Only a few roentgenium atoms have ever been synthesized, and they have no practical application. In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and is placed in the group 11 elements, although no chemical experiments have been carried out to confirm that it behaves as the heavier homologue to gold in group 11 as the ninth member of the 6d series of transition metals. Roentgenium is calculated to have similar properties to its lighter homologues, copper, silver, and gold, although it may show some differences from them. Introduction History Official discovery Roentgenium was first synthesized by an international team led by Sigurd Hofmann at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany, on December 8, 1994. The team bombarded a target of bismuth-209 with accelerated nuclei of nickel-64 and detected three nuclei of the isotope roentgenium-272: 209Bi + 64Ni → 272Rg + n. This reaction had previously been conducted at the Joint Institute for Nuclear Research in Dubna (then in the Soviet Union) in 1986, but no atoms of 272Rg had then been observed. In 2001, the IUPAC/IUPAP Joint Working Party (JWP) concluded that there was insufficient evidence for the discovery at that time. The GSI team repeated their experiment in 2002 and detected three more atoms. In their 2003 report, the JWP decided that the GSI team should be acknowledged for the discovery of this element. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, roentgenium should be known as eka-gold. In 1979, IUPAC published recommendations according to which the element was to be called unununium (with the corresponding symbol of Uuu), a systematic element name as a placeholder, until the element was discovered (and the discovery then confirmed) and a permanent name was decided on. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it element 111, with the symbol of E111, (111) or even simply 111. The name roentgenium (Rg) was suggested by the GSI team in 2004, to honor the German physicist Wilhelm Conrad Röntgen, the discoverer of X-rays. This name was accepted by IUPAC on November 1, 2004. Isotopes Roentgenium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusion of the nuclei of lighter elements or as intermediate decay products of heavier elements. Nine different isotopes of roentgenium have been reported with atomic masses 272, 274, 278–283, and 286 (283 and 286 unconfirmed), two of which, roentgenium-272 and roentgenium-274, have known but unconfirmed metastable states. All of these decay through alpha decay or spontaneous fission, though 280Rg may also have an electron capture branch.
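As a quick check on the synthesis reaction quoted above, ordinary nuclear bookkeeping applies: mass numbers and atomic numbers must balance on both sides, which is why a single neutron accompanies the roentgenium-272 nucleus. The balance written out below is standard arithmetic, not an additional claim from the source.

```latex
% Balance check for the cold-fusion reaction 209Bi + 64Ni -> 272Rg + n
\[
\underbrace{209 + 64}_{=\,273} \;=\; \underbrace{272 + 1}_{=\,273} \quad\text{(mass numbers)}
\qquad
\underbrace{83 + 28}_{=\,111} \;=\; \underbrace{111 + 0}_{=\,111} \quad\text{(atomic numbers of Bi, Ni, Rg, n)}
\]
```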
Stability and half-lives All roentgenium isotopes are extremely unstable and radioactive; in general, the heavier isotopes are more stable than the lighter. The most stable known roentgenium isotope, 282Rg, is also the heaviest known roentgenium isotope; it has a half-life of 100 seconds. The unconfirmed 286Rg is even heavier and appears to have an even longer half-life of about 10.7 minutes, which would make it one of the longest-lived superheavy nuclides known; likewise, the unconfirmed 283Rg appears to have a long half-life of about 5.1 minutes. The isotopes 280Rg and 281Rg have also been reported to have half-lives over a second. The remaining isotopes have half-lives in the millisecond range. The missing isotopes between 274Rg and 278Rg are too light to be produced by hot fusion and too heavy to be produced by cold fusion. A possible synthesis method is to populate them from above, as daughters of nihonium or moscovium isotopes that can be produced by hot fusion. The isotopes 283Rg and 284Rg could be synthesised using charged-particle evaporation, using the 238U+48Ca reaction where a proton is evaporated alongside some neutrons. Predicted properties Other than nuclear properties, no properties of roentgenium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that roentgenium (and its parents) decays very quickly. Properties of roentgenium metal remain unknown and only predictions are available. Chemical Roentgenium is the ninth member of the 6d series of transition metals. Calculations on its ionization potentials and atomic and ionic radii are similar to that of its lighter homologue gold, thus implying that roentgenium's basic properties will resemble those of the other group 11 elements, copper, silver, and gold; however, it is also predicted to show several differences from its lighter homologues. Roentgenium is predicted to be a noble metal. The standard electrode potential of 1.9 V for the Rg3+/Rg couple is greater than that of 1.5 V for the Au3+/Au couple. Roentgenium's predicted first ionisation energy of 1020 kJ/mol almost matches that of the noble gas radon at 1037 kJ/mol. Its predicted second ionization energy, 2070 kJ/mol, is almost the same as that of silver. Based on the most stable oxidation states of the lighter group 11 elements, roentgenium is predicted to show stable +5 and +3 oxidation states, with a less stable +1 state. The +3 state is predicted to be the most stable. Roentgenium(III) is expected to be of comparable reactivity to gold(III), but should be more stable and form a larger variety of compounds. Gold also forms a somewhat stable −1 state due to relativistic effects, and it has been suggested roentgenium may do so as well: nevertheless, the electron affinity of roentgenium is expected to be around , significantly lower than gold's value of , so roentgenides may not be stable or even possible. The 6d orbitals are destabilized by relativistic effects and spin–orbit interactions near the end of the fourth transition metal series, thus making the high oxidation state roentgenium(V) more stable than its lighter homologue gold(V) (known only in gold pentafluoride, Au2F10) as the 6d electrons participate in bonding to a greater extent. The spin-orbit interactions stabilize molecular roentgenium compounds with more bonding 6d electrons; for example, is expected to be more stable than , which is expected to be more stable than . 
The stability of is homologous to that of ; the silver analogue is unknown and is expected to be only marginally stable to decomposition to and F2. Moreover, Rg2F10 is expected to be stable to decomposition, exactly analogous to the Au2F10, whereas Ag2F10 should be unstable to decomposition to Ag2F6 and F2. Gold heptafluoride, AuF7, is known as a gold(V) difluorine complex AuF5·F2, which is lower in energy than a true gold(VII) heptafluoride would be; RgF7 is instead calculated to be more stable as a true roentgenium(VII) heptafluoride, although it would be somewhat unstable, its decomposition to Rg2F10 and F2 releasing a small amount of energy at room temperature. Roentgenium(I) is expected to be difficult to obtain. Gold readily forms the cyanide complex , which is used in its extraction from ore through the process of gold cyanidation; roentgenium is expected to follow suit and form . The probable chemistry of roentgenium has received more interest than that of the two previous elements, meitnerium and darmstadtium, as the valence s-subshells of the group 11 elements are expected to be relativistically contracted most strongly at roentgenium. Calculations on the molecular compound RgH show that relativistic effects double the strength of the roentgenium–hydrogen bond, even though spin–orbit interactions also weaken it by . The compounds AuX and RgX, where X = F, Cl, Br, O, Au, or Rg, were also studied. Rg+ is predicted to be the softest metal ion, even softer than Au+, although there is disagreement on whether it would behave as an acid or a base. In aqueous solution, Rg+ would form the aqua ion [Rg(H2O)2]+, with an Rg–O bond distance of 207.1 pm. It is also expected to form Rg(I) complexes with ammonia, phosphine, and hydrogen sulfide. Physical and atomic Roentgenium is expected to be a solid under normal conditions and to crystallize in the body-centered cubic structure, unlike its lighter congeners which crystallize in the face-centered cubic structure, due to its being expected to have different electron charge densities from them. It should be a very heavy metal with a density of around 22–24 g/cm3; in comparison, the densest known element that has had its density measured, osmium, has a density of 22.61 g/cm3. The atomic radius of roentgenium is expected to be around 138 pm. Experimental chemistry Unambiguous determination of the chemical characteristics of roentgenium has yet to have been established due to the low yields of reactions that produce roentgenium isotopes. For chemical studies to be carried out on a transactinide, at least four atoms must be produced, the half-life of the isotope used must be at least 1 second, and the rate of production must be at least one atom per week. Even though the half-life of 282Rg, the most stable confirmed roentgenium isotope, is 100 seconds, long enough to perform chemical studies, another obstacle is the need to increase the rate of production of roentgenium isotopes and allow experiments to carry on for weeks or months so that statistically significant results can be obtained. Separation and detection must be carried out continuously to separate out the roentgenium isotopes and allow automated systems to experiment on the gas-phase and solution chemistry of roentgenium, as the yields for heavier elements are predicted to be smaller than those for lighter elements. 
However, the experimental chemistry of roentgenium has not received as much attention as that of the heavier elements from copernicium to livermorium, despite early interest in theoretical predictions due to relativistic effects on the ns subshell in group 11 reaching a maximum at roentgenium. The isotopes 280Rg and 281Rg are promising for chemical experimentation and may be produced as the granddaughters of the moscovium isotopes 288Mc and 289Mc respectively; their parents are the nihonium isotopes 284Nh and 285Nh, which have already received preliminary chemical investigations. See also Island of stability Explanatory notes References General bibliography External links Roentgenium at The Periodic Table of Videos (University of Nottingham) Chemical elements Chemical elements with body-centered cubic structure Transition metals Synthetic elements
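The half-life and production-rate constraints discussed in the experimental-chemistry section above can be made concrete with ordinary exponential decay. The snippet below is an illustrative sketch rather than data from the article: it uses the roughly 100-second half-life quoted for 282Rg to show what fraction of freshly produced atoms would survive a given separation-and-detection delay.

```python
def fraction_remaining(t_seconds: float, half_life_seconds: float) -> float:
    """Fraction of radioactive nuclei surviving after time t: N/N0 = 2**(-t/T_half)."""
    return 2.0 ** (-t_seconds / half_life_seconds)

# Illustrative only: 282Rg with the ~100 s half-life quoted in the text above.
HALF_LIFE_282RG = 100.0  # seconds (approximate figure from the article)
for delay in (10, 100, 300, 600):
    surviving = fraction_remaining(delay, HALF_LIFE_282RG)
    print(f"delay {delay:4d} s -> {surviving:.3f} of the atoms remain")
```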
Roentgenium
[ "Physics", "Chemistry" ]
2,554
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Atoms", "Radioactivity" ]
60,094
https://en.wikipedia.org/wiki/Promiscuity
Promiscuity is the practice of engaging in sexual activity frequently with different partners or being indiscriminate in the choice of sexual partners. The term can carry a moral judgment. A common example of behavior viewed as promiscuous by many cultures is the one-night stand, and its frequency is used by researchers as a marker for promiscuity. What sexual behavior is considered promiscuous varies between cultures, as does the prevalence of promiscuity. Different standards are often applied to different genders and civil statutes. Feminists have traditionally argued a significant double standard exists between how men and women are judged for promiscuity. Historically, stereotypes of the promiscuous woman have tended to be pejorative, such as "the slut" or "the harlot", while male stereotypes have been more varied, some expressing approval, such as "the stud" or "the player", while others imply societal deviance, such as "the womanizer" or "the philanderer". A scientific study published in 2005 found that promiscuous men and women are both prone to derogatory judgment. Promiscuity is common in many animal species. Some species have promiscuous mating systems, ranging from polyandry and polygyny to mating systems with no stable relationships where mating between two individuals is a one-time event. Many species form stable pair bonds, but still mate with other individuals outside the pair. In biology, incidents of promiscuity in species that form pair bonds are usually called extra-pair copulations. Motivations Accurately assessing people's sexual behavior is difficult, since strong social and personal motivations occur, depending on social sanctions and taboos, for either minimizing or exaggerating reported sexual activity. American experiments in 1978 and 1982 found the great majority of men were willing to have sex with women they did not know, of average attractiveness, who propositioned them. No woman, by contrast, agreed to such propositions from men of average attractiveness. While men were in general comfortable with the requests, regardless of their willingness, women responded with shock and disgust. The number of sexual partners people have had in their lifetimes varies widely within a population. We see a higher number of people who are more comfortable with their sexuality in the modern world. A 2007 nationwide survey in the United States found the median number of female sexual partners reported by men was seven and the median number of male partners reported by women was four. The men possibly exaggerated their reported number of partners, women reported a number lower than the actual number, or a minority of women had a sufficiently larger number than most other women to create a mean significantly higher than the median, or all of the above. About 29% of men and 9% of women reported to have had more than 15 sexual partners in their lifetimes. Studies of the spread of sexually transmitted infections consistently demonstrate a small percentage of the studied population has more partners than the average man or woman, and a smaller number of people have fewer than the statistical average. An important question in the epidemiology of sexually transmitted infections is whether or not these groups copulate mostly at random with sexual partners from throughout a population or within their social groups. 
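One statistical point in the survey discussion above, that a small minority reporting very high partner counts can pull a mean well above the median, can be illustrated with a toy calculation. The numbers below are made up purely for illustration and are not survey data.

```python
import statistics

# Hypothetical reported counts: most respondents cluster low, while one
# heavy-tail respondent pulls the mean far above the median.
reported_partners = [1, 2, 2, 3, 3, 4, 4, 5, 6, 60]

print("median:", statistics.median(reported_partners))  # 3.5
print("mean:  ", statistics.mean(reported_partners))    # 9.0
```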
A 2006 systematic review analyzing data from 59 countries worldwide found no association between regional sexual behavior tendencies, such as number of sexual partners, and sexual-health status. Much more predictive of sexual-health status are socioeconomic factors like poverty and mobility. Other studies have suggested that people with multiple casual sex partners are more likely to be diagnosed with sexually transmitted infections. Severe and impulsive promiscuity, along with a compulsive urge to engage in illicit sex with attached individuals is a common symptom of borderline personality disorder, histrionic personality disorder, narcissistic personality disorder, and antisocial personality disorder but most promiscuous individuals do not have these disorders. Cross-cultural studies In 2008, a U.S. university study of international promiscuity found that Finns have had the largest number of sex partners in the industrialized world, and British people have the largest number among big western industrial nations. The study measured one-night stands, attitudes to casual sex, and number of sexual partners. A 2014 nationwide survey in the United Kingdom named Liverpool the country's most promiscuous city. Britain's position on the international index "may be linked to increasing social acceptance of promiscuity among women as well as men". Britain's ranking was "ascribed to factors such as the decline of religious scruples about extramarital sex, the growth of equal pay and equal rights for women, and a highly sexualized popular culture". The top-10-ranking OECD nations with a population over 10 million on the study's promiscuity index, in descending order, were the United Kingdom, Germany, the Netherlands, Czechia, Australia, the United States, France, Turkey, Mexico, and Canada. A 2017 survey by Superdrug found that the United Kingdom was the country with the most sex partners with an average of 7, while Austria had around 6.5. The 2012 Trojan Sex Life Survey found that African American men reported an average of 38 sex partners in their lifetime. A study funded by condom-maker Durex, conducted in 2006 and published in 2009, measured promiscuity by a total number of sexual partners. The survey found Austrian men had the highest number of sex partners globally, with 29.3 sexual partners on average. New Zealand women had the highest number of sex partners for females in the world with an average of 20.4 sexual partners. In all of the countries surveyed, except New Zealand, men reported more sexual partners than women. One review found the people from developed Western countries had more sex partners than people from developing countries in general, while the rate of STIs was higher in developing countries. According to the 2005 Global Sex Survey by Durex, people have had on average nine sexual partners, the most in Turkey (14.5) and Australia (13.3), and the fewest in India (3) and China (3.1). In many cases, the population of each country that participates is approximately 1000 people and can equate to less than 0.0003% of the population, e.g. the 2017 survey of 42 nations surveyed only 33,000 people. In India, data was collected from less than 0.000001% of the total population at that time. According to the 2012 General Social Survey in the United States by the National Opinion Research Center at the University of Chicago, Protestants on average had more sex partners than Catholics. 
Similarly, a 2019 study by the Institute for Family Studies in the US found that of never married young people, Protestants have more sexual partners than Catholics. Male promiscuity Straight men (heterosexuals) A 1994 study in the United States, which looked at the number of sexual partners in a lifetime, found 20% of heterosexual men had one partner, 55% had two to 20 partners, and 25% had more than 20 sexual partners. More recent studies have reported similar numbers. In the United Kingdom, a nationally representative study in 2013 found that 33.9% of heterosexual men had 10 or more lifetime sexual partners. Among men between 45 and 54 years old, 43.1% reported 10 or more sexual partners. A 2003 representative study in Australia found that heterosexual men had a median of 8 female sexual partners in their lifetime. For lifetime sexual partners: 5.8% had 0 partners, 10.3% had 1 partner, 6.1% had 2 partners, 33% had between 3 and 9 partners, 38.3% had between 10 and 49 partners and 6.6% had more than 50 female sexual partners. A 2014 representative study in Australia found that heterosexual men had a median of 7.8 female sexual partners in their lifetime. For lifetime sexual partners: 3.7% had 0 partners, 12.6% had 1 partner, 6.8% had 2 partners, 32.3% had between 3 and 9 partners, 36.9% had between 10 and 49 partners and 7.8% had more than 50 female sexual partners. Research by J. Michael Bailey found that heterosexual men had the same level of interest in casual sex as gay men. However he found straight men were limited in their ability to acquire high numbers of female partners. According to Bailey, "These facts suggest that women are responsible for the pace of sex. Gay and straight men both want casual sex, but only straight men have the brake of women’s sexually cautious nature to slow them." Gay men (homosexuals) A 1989 study found having over 100 partners to be present though rare among homosexual males. An extensive 1994 study found that difference in the mean number of sexual partners between gay and straight men "did not appear very large". A 2007 study reported that two large population surveys found "the majority of gay men had similar numbers of unprotected sexual partners annually as straight men and women." The 2013 British NATSAL study found that gay men typically had 19 sexual partners in a lifetime (median). In the previous year, 51.8% reported having either 0 or 1 sexual partner. A further 21.3% reported having between 2 and 4 sexual partners, 7.3% reported having between 5 and 9, and 19.6% reported having 10 or more sexual partners. A 2014 study in Australia found gay men had a median of 22 sexual partners in a lifetime (sexual partner was defined as any sexual contact, kissing, touching or intercourse). 30% of gay respondents reported 0–9 partners in their lifetime. 50.1% of gay men reported having either 0 or 1 partner in the previous year, while 25.6% reported 10 or more partners in the previous year. Research on gay sexual behavior may overrepresent promiscuous respondents. This is because gay men are a small portion of the male population, and thus many researchers have relied on convenience surveys to research behavior of gay men. Examples of this type of sampling includes surveying men on dating apps such as Grindr, or finding volunteers at gay bars, clubs and saunas. Convenience surveys often exclude gay men who are in a relationship, and gay men who do not use dating apps or attend gay venues. 
For example, the British and European convenience surveys included approximately five times as many gay men who reported "5 or more sexual partners" than the nationally representative NATSAL study did. Probability sample surveys are more useful in this regard, because they seek to accurately reflect the characteristics of the gay male population. Examples include the NATSAL in the United Kingdom and the General Social Survey in the United States. According to John Corvino, opponents of gay rights often use convenience sample statistics to support their belief that gay men are promiscuous, but that larger representative samples show that the difference is not so large, and that extreme promiscuity occurs in a minority of gay men. Psychologist J. Michael Bailey has stated that social conservatives use promiscuity among gay men as evidence of a "decadent" nature of gay men, but says "I think they're wrong. Promiscuous gay men are expressing an essentially masculine trait. They are doing what most heterosexual men would do if they could. They are in this way just like heterosexual men, except that they don't have women to constrain them." Regarding sexually transmitted infections (STIs), some researchers have said that the number of sexual partners had by gay men cannot fully explain rates of HIV infection in this population. Most gay men report having similar numbers of unprotected sexual partners as straight men on an annual basis. Unprotected receptive anal sex, which holds a much higher risk of HIV transmission, appears to be the major factor. Female promiscuity In 1994, a study in the United States found almost all married heterosexual women reported having sexual contact only with their husbands, and unmarried women almost always reported having no more than one sexual partner in the past three months. Lesbians who had long-term partners reported having fewer outside partners than heterosexual women. More recent research, however, contradicts the assertion that heterosexual women are largely monogamous. A 2002 study estimated that 45% to 55% of married heterosexual women engage in sexual relationships outside of their marriage, while the estimate for heterosexual men engaging in the same conduct was 50–60% in the same study. One possible explanation for hyper sexuality is child sexual abuse (CSA) trauma. Many studies have examined the correlation between CSA and risky sexual behavior. Rodriguez-Srednicki and Ofelia examined the correlation of CSA experienced by women and their self-destructive behavior as adults using a questionnaire. The diversity and ages of the women varied. Slightly fewer than half the women reported CSA while the remainder reported no childhood trauma. The results of the study determined that self-destructive behaviors, including hypersexuality, correlates with CSA in women. CSA can create sexual schemas that result in risky sexual behavior. This can play out in their sexual interactions as girls get older. The sexual behaviors of women that experienced CSA differed from those of women without exposure to CSA. Studies show CSA survivors tend to have more sexual partners and engage in higher risk sexual behaviors. Since at least 1450, the word 'slut' has been used, often pejoratively, to describe a sexually promiscuous woman. In and before the Elizabethan and Jacobean eras, terms like "strumpet" and "whore" were used to describe women deemed promiscuous, as seen, for example, in John Webster's 1612 play The White Devil. 
Thornhill and Gangestad found that women are much more likely to sexually fantasize about and be attracted to extra-pair men during the fertile phase of the menstrual cycle than the luteal phase, whereas attraction to the primary partner does not change depending on the menstrual cycle. A 2004 study by Pillsworth, Hasselton and Buss contradicted this, finding greater in-pair sexual attraction during this phase and no increase in attraction to extra-pair men. In Norwegian students, Kennair et al. (2023) found no signs of a sexual double standard in short-term or long-term mating contexts, nor in choosing a friend, except that women's self-stimulation was more acceptable than men's. Evolution Evolutionary psychologists propose that a conditional human tendency for promiscuity is inherited from hunter-gatherer ancestors. Promiscuity increases the likelihood of having children, thus "evolutionary" fitness. According to them, female promiscuity is advantageous in that it allows females to choose fathers for their children who have better genes than their mates, to ensure better care for their offspring, have more children, and as a form of fertility insurance. Male promiscuity was likely advantageous because it allowed males to father more children. Primitive promiscuity Primitive promiscuity or original promiscuity was the 19th-century hypothesis that humans originally lived in a state of promiscuity or "hetaerism" before the advent of society as we understand it. Hetaerism is a theoretical early state of human society, as postulated by 19th-century anthropologists, which was characterized by the absence of the institution of marriage in any form and in which women were the common property of their tribe and in which children never knew who their fathers were. The reconstruction of the original state of primitive society or humanity was based on the idea of progress, according to which all cultures have degrees of improvement and becoming more complicated. It seemed logical to assume that never before the types of families developed did they simply exist, and in primitive society, sexual relations were without any boundaries and taboos. This view is represented, inter alia, by anthropologist Lewis H. Morgan in Ancient Society and Friedrich Engels' work The Origin of the Family, Private Property and the State. In the first half of the 20th century, this notion was rejected by a number of authors, e.g. Edvard Westermarck, a Finnish philosopher, social anthropologist and sociologist with in-depth knowledge of the history of marriage, who provided strong evidence that, at least in the first stages of cultural development, monogamy has been a perfectly normal and natural form of man-woman coexistence. Modern cultural anthropology has not confirmed the existence of a complete promiscuity in any known society or culture. The evidence of history is reduced to some texts of Herodotus, Strabo, and Solinus, which have been hard to interpret. Religious, social, and cultural views Christianity, Judaism, and Islam condemn promiscuity and instead advocate lifelong monogamous marriage (although Islam allows polygamy for men). The perspectives on promiscuity vary significantly depending on the region. Every country has different values and morals pertaining to sexual life. Promiscuity has been practiced in hippie communities and other alternative subcultures since the 1960s cultural revolution. Sex and Culture is a book by J. D. 
Unwin concerning the correlation between a society's level of 'cultural achievement' and its level of sexual restraint. Published in 1934, the book concluded with the theory that as societies develop, they become more sexually liberal, accelerating the social entropy of the society, and thereby diminishing its "creative" and "expansive" energy. Other animals Some researchers have suggested that the practice of referring to animals as promiscuous in reference to their mating system is often inaccurate and potentially biased. More precise terms such as polyandry, polygyny, and polygynandry are increasingly preferred. Many animal species, such as spotted hyenas, pigs, bonobos and chimpanzees, are promiscuous as a rule, and do not form pair bonds. Although social monogamy occurs in about 90% of avian species and about 3% of mammalian species, an estimated 90% of socially monogamous species exhibit individual promiscuity in the form of copulation outside the pair bond. In the animal world, some species, including birds such as swans and fish such as Neolamprologus pulcher, once believed monogamous, are now known to engage in extra-pair copulations. One example of extra-pair fertilization (EPF) in birds is the black-throated blue warblers. Though it is a socially monogamous species, both males and females engage in EPF. The Darwin-Bateman paradigm, which states that males are typically eager to copulate while females are more choosy about whom to mate with, has been confirmed by a meta-analysis. There is, however, continued debate about the utility and pitfalls of the Bateman perspective. See also Cottaging Emotional promiscuity Female promiscuity Monogamy Polyamory Polyandry Polygamy Polygynandry Prostitution Sexual addiction Sexual revolution Sociosexual orientation Sperm competition Swinging References Bibliography Chakov, Kelly Nineteen Century Social Evolutionism Fortes, Meyer (2005) Kinship and the Social Order: The Legacy of Lewis Henry Morgan pp. 7–8 Lehrman, Sally The Virtues of Promiscuity (2002) Lerner, Gerda (1986) Women and History vol. 1: The Creation of Patriarchy Lerner, Gerda "The Origin of Prostitution in Ancient Mesopotamia". Signs, Vol. 11, No. 2 (Winter, 1986), pp. 236–54 Schmitt, David P. "Sociosexuality from Argentina to Zimbabwe: A 48-nation study of sex, culture, and strategies of human mating". Behavioral and Brain Sciences (2005) 28, 247–311 Miller Jr., Gerrit S. (1931) "The Primate Basis of Human Sexual Behavior". The Quarterly Review of Biology, Vol. 6, No. 4 (Dec., 1931), pp. 379–410 Westermarck, Edward [1891] (2003) History of Human Marriage Part 1 Kessinger Publishing Weston, Kath (1998) Long Slow Burn: Sexuality and Social Science Woock, Randy (2002) Promiscuous Women Should Be Praised Rinaldi, Robin, The Wild Oats Project: One Woman's Midlife Quest for Passion at Any Cost, Sarah Crichton Books (2015), hardcover, 304 pages Human sexuality Anthropology Casual sex Free love
Promiscuity
[ "Biology" ]
4,228
[ "Human sexuality", "Behavior", "Sexuality", "Human behavior", "Promiscuity" ]
60,097
https://en.wikipedia.org/wiki/Topological%20ring
In mathematics, a topological ring is a ring that is also a topological space such that both the addition and the multiplication are continuous as maps: where carries the product topology. That means is an additive topological group and a multiplicative topological semigroup. Topological rings are fundamentally related to topological fields and arise naturally while studying them, since for example completion of a topological field may be a topological ring which is not a field. General comments The group of units of a topological ring is a topological group when endowed with the topology coming from the embedding of into the product as However, if the unit group is endowed with the subspace topology as a subspace of it may not be a topological group, because inversion on need not be continuous with respect to the subspace topology. An example of this situation is the adele ring of a global field; its unit group, called the idele group, is not a topological group in the subspace topology. If inversion on is continuous in the subspace topology of then these two topologies on are the same. If one does not require a ring to have a unit, then one has to add the requirement of continuity of the additive inverse, or equivalently, to define the topological ring as a ring that is a topological group (for ) in which multiplication is continuous, too. Examples Topological rings occur in mathematical analysis, for example as rings of continuous real-valued functions on some topological space (where the topology is given by pointwise convergence), or as rings of continuous linear operators on some normed vector space; all Banach algebras are topological rings. The rational, real, complex and -adic numbers are also topological rings (even topological fields, see below) with their standard topologies. In the plane, split-complex numbers and dual numbers form alternative topological rings. See hypercomplex numbers for other low-dimensional examples. In commutative algebra, the following construction is common: given an ideal in a commutative ring the -adic topology on is defined as follows: a subset of is open if and only if for every there exists a natural number such that This turns into a topological ring. The -adic topology is Hausdorff if and only if the intersection of all powers of is the zero ideal The -adic topology on the integers is an example of an -adic topology (with ). Completion Every topological ring is a topological group (with respect to addition) and hence a uniform space in a natural manner. One can thus ask whether a given topological ring is complete. If it is not, then it can be completed: one can find an essentially unique complete topological ring that contains as a dense subring such that the given topology on equals the subspace topology arising from If the starting ring is metric, the ring can be constructed as a set of equivalence classes of Cauchy sequences in this equivalence relation makes the ring Hausdorff and using constant sequences (which are Cauchy) one realizes a (uniformly) continuous morphism (CM in the sequel) such that, for all CM where is Hausdorff and complete, there exists a unique CM such that If is not metric (as, for instance, the ring of all real-variable rational valued functions, that is, all functions endowed with the topology of pointwise convergence) the standard construction uses minimal Cauchy filters and satisfies the same universal property as above (see Bourbaki, General Topology, III.6.5). 
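For reference, the key maps and conditions discussed above can be written out explicitly. The letter choices below (R for the ring, I for the ideal, p for a prime, k for a field) are assumptions of this restatement rather than notation fixed by the text:

\[ {+}\colon R \times R \to R, \qquad {\cdot}\colon R \times R \to R \quad \text{(both continuous, with } R \times R \text{ carrying the product topology)} \]
\[ R^{\times} \hookrightarrow R \times R, \qquad u \mapsto (u, u^{-1}) \quad \text{(the embedding under which the unit group becomes a topological group)} \]
\[ U \subseteq R \text{ is open in the } I\text{-adic topology} \iff \text{for every } x \in U \text{ there is } n \in \mathbb{N} \text{ with } x + I^{n} \subseteq U \]
\[ \mathbb{Z}_{p} \cong \varprojlim_{n} \mathbb{Z}/p^{n}\mathbb{Z}, \qquad k[[X]] \cong \varprojlim_{n} k[X]/(X)^{n} \quad \text{(completions in the } (p)\text{- and } (X)\text{-adic topologies)} \]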
The rings of formal power series and the p-adic integers are most naturally defined as completions of certain topological rings carrying I-adic topologies. Topological fields Some of the most important examples are topological fields. A topological field is a topological ring that is also a field, and such that inversion of nonzero elements is a continuous function. The most common examples are the complex numbers and all its subfields, and the valued fields, which include the p-adic fields. See also Citations References Vladimir I. Arnautov, Sergei T. Glavatsky and Aleksandr V. Michalev: Introduction to the Theory of Topological Rings and Modules. Marcel Dekker Inc, February 1996. N. Bourbaki, Éléments de Mathématique. Topologie Générale. Hermann, Paris 1971, ch. III §6 Ring theory Topology Topological algebra Topological groups
Topological ring
[ "Physics", "Mathematics" ]
886
[ "Topological groups", "Ring theory", "Space (mathematics)", "Topological spaces", "Fields of abstract algebra", "Topology", "Space", "Geometry", "Topological algebra", "Spacetime" ]
60,123
https://en.wikipedia.org/wiki/I-adic%20topology
In commutative algebra, the mathematical study of commutative rings, adic topologies are a family of topologies on the underlying set of a module, generalizing the p-adic topologies on the integers. Definition Let R be a commutative ring and M an R-module. Then each ideal I of R determines a topology on M called the I-adic topology, characterized by the pseudometric d(x, y) = 2^(−n), where n is the largest integer such that x − y lies in I^n M (and d(x, y) = 0 if x − y lies in I^n M for every n). The family of cosets x + I^n M, for x in M and n a positive integer, is a basis for this topology. An I-adic topology is a linear topology (a topology generated by some submodules). Properties With respect to the topology, the module operations of addition and scalar multiplication are continuous, so that M becomes a topological module. However, M need not be Hausdorff; it is Hausdorff if and only if the intersection of the submodules I^n M over all n is zero, so that d becomes a genuine metric. Related to the usual terminology in topology, where a Hausdorff space is also called separated, in that case, the I-adic topology is called separated. By Krull's intersection theorem, if R is a Noetherian ring which is an integral domain or a local ring, the intersection of the powers I^n over all n is zero for any proper ideal I of R. Thus under these conditions, for any proper ideal I of such a ring R, the I-adic topology on R (and, when R is local, on any finitely generated R-module M) is separated. For a submodule N of M, the canonical homomorphism to M/N induces a quotient topology which coincides with the I-adic topology. The analogous result is not necessarily true for the submodule N itself: the subspace topology need not be the I-adic topology. However, the two topologies coincide when R is Noetherian and M finitely generated. This follows from the Artin–Rees lemma. Completion When M is Hausdorff, M can be completed as a metric space; the resulting space is denoted by M̂ and has the module structure obtained by extending the module operations by continuity. It is also the same as (or canonically isomorphic to) the inverse limit of the quotient modules M/I^n M under the natural projections. For example, let R = K[x_1, ..., x_n] be a polynomial ring over a field K and I = (x_1, ..., x_n) the (unique) homogeneous maximal ideal. Then the completion is K[[x_1, ..., x_n]], the formal power series ring over K in n variables. Closed submodules The I-adic closure of a submodule N of M is the intersection of the submodules N + I^n M over all n. This closure coincides with N whenever R is I-adically complete and M is finitely generated. R is called Zariski with respect to I if every ideal in R is I-adically closed. There is a characterization: R is Zariski with respect to I if and only if I is contained in the Jacobson radical of R. In particular a Noetherian local ring is Zariski with respect to the maximal ideal. References Sources Commutative algebra Topology
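As a concrete illustration of the pseudometric above in its simplest case — M = R = the integers and I = (p) — here is a small Python sketch; the function name and the base 2 in 2^(−n) are conventional choices, not mandated by the article:

    def p_adic_distance(x: int, y: int, p: int) -> float:
        # d(x, y) = 2 ** (-n), where n is the exponent of the largest power of p dividing x - y
        if x == y:
            return 0.0  # x - y lies in (p)**n for every n, so the distance is zero
        diff = abs(x - y)
        n = 0
        while diff % p == 0:
            diff //= p
            n += 1
        return 2.0 ** (-n)

    # 1 and 17 differ by 16 = 2**4, so they are 2-adically close:
    # p_adic_distance(1, 17, 2) == 0.0625, while p_adic_distance(1, 2, 2) == 1.0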
I-adic topology
[ "Physics", "Mathematics" ]
543
[ "Fields of abstract algebra", "Topology", "Space", "Geometry", "Spacetime", "Commutative algebra" ]
60,138
https://en.wikipedia.org/wiki/Wilson%27s%20disease
Wilson's disease (also called hepatolenticular degeneration) is a genetic disorder characterized by the excess build-up of copper in the body. Symptoms are typically related to the brain and liver. Liver-related symptoms include vomiting, weakness, fluid build-up in the abdomen, swelling of the legs, yellowish skin, and itchiness. Brain-related symptoms include tremors, muscle stiffness, trouble in speaking, personality changes, anxiety, and psychosis. Wilson's disease is caused by a mutation in the Wilson disease protein (ATP7B) gene. This protein transports excess copper into bile, where it is excreted in waste products. The condition is autosomal recessive; for people to be affected, they must inherit a mutated copy of the gene from both parents. Diagnosis may be difficult and often involves a combination of blood tests, urine tests, and a liver biopsy. Genetic testing may be used to screen family members of those affected. Wilson's disease is typically treated with dietary changes and medication. Dietary changes involve eating a low-copper diet and not using copper cookware. Medications used include chelating agents, such as trientine and D-penicillamine, and zinc supplements. Complications of Wilson's disease can include liver failure and kidney problems. A liver transplant may be helpful to those for whom other treatments are not effective or if liver failure occurs. Wilson's disease occurs in about one in 30,000 people. Symptoms usually begin between the ages of 5 and 35 years. It was first described in 1854 by German pathologist Friedrich Theodor von Frerichs and is named after British neurologist Samuel Wilson. Signs and symptoms The main sites of copper accumulation are the liver and brain. Consequently, liver disease and neuropsychiatric symptoms are the main features that lead to diagnosis. People with liver problems tend to come for medical attention earlier (generally as children or teenagers) than those with neurological and psychiatric symptoms, who tend to be in their 20s or older. Some are identified only because relatives have been diagnosed with Wilson's disease; many of these, when tested, turn out to have been experiencing symptoms of the condition but have not received a diagnosis. Liver disease Liver disease may present itself as tiredness, jaundice, increased bleeding tendency or confusion (due to hepatic encephalopathy), and portal hypertension. The last, a condition in which the pressure in the portal vein is markedly increased, leads to esophageal varices (distended veins in the esophagus that may bleed in a life-threatening fashion) as well as enlargement of the spleen (splenomegaly) and accumulation of fluid in the abdominal cavity (ascites). On examination, signs of chronic liver disease such as spider angiomata (small distended blood vessels, usually on the chest) may be observed. Chronic active hepatitis has already caused cirrhosis of the liver in most patients by the time they develop symptoms. While most people with cirrhosis have an increased risk of hepatocellular carcinoma (liver cancer), this risk is relatively low in Wilson's disease. About 5% of all people are diagnosed only when they develop fulminant acute liver failure, often in the context of hemolytic anemia (anemia due to the destruction of red blood cells). This leads to abnormalities in protein production (identified by deranged coagulation) and metabolism by the liver. The deranged protein metabolism leads to the accumulation of waste products, such as ammonia, in the bloodstream. 
When these irritate the brain, patients develop hepatic encephalopathy – a serious condition that causes confusion, coma, seizures and, finally, life-threatening swelling of the brain). Neuropsychiatric symptoms About half of the people with Wilson's disease have neurological or psychiatric symptoms. Most initially have mild cognitive deterioration and clumsiness, as well as changes in behavior. Specific neurological symptoms usually then follow, often in the form of parkinsonism (cogwheel rigidity, bradykinesia, or slowed movements and a lack of balance are the most common parkinsonian features) with or without a typical hand tremor, masked facial expressions, slurred speech, ataxia (lack of coordination), or dystonia (twisting and repetitive movements of part of the body). Seizures and migraine appear to be more common in Wilson's disease. A characteristic tremor described as "wing-beating tremor" is encountered in many people with Wilson's; this is absent at rest but can be provoked by abducting the arms and flexing the elbows toward the midline. Cognition can also be affected in Wilson's disease, in two non-mutually exclusive categories: frontal lobe disorder (may present as impulsivity, impaired judgement, promiscuity, apathy, and executive dysfunction with poor planning and decision-making) and subcortical dementia (may present as slow thinking, memory loss, and executive dysfunction, without signs of aphasia, apraxia, or agnosia). These cognitive involvements are thought to be related and closely linked to psychiatric manifestations of the disease. Psychiatric problems due to Wilson's disease may include behavioral changes, depression, anxiety disorders, and psychosis. Psychiatric symptoms are commonly seen in conjunction with neurological symptoms and are rarely manifested on their own. These symptoms are often poorly defined and can sometimes be attributed to other causes. Because of this, diagnosis of Wilson's disease is rarely made when only psychiatric symptoms are present. Other organ systems Medical conditions have been linked with copper accumulation in Wilson's disease: Eyes: Kayser–Fleischer rings (KF rings) may be visible in the cornea of the eyes, either directly or on slit lamp examination, as deposits of copper form a ring around the cornea. This is due to copper deposition in Descemet's membrane. These rings can be either dark brown, golden, or reddish-green, are 1 to 3mm wide, and appear at the corneal limbus. They do not occur in all people with Wilson's disease, and may be seen in people with chronic cholestasis. Wilson's disease is also associated with sunflower cataracts exhibited by brown or green pigmentation of the anterior and posterior lens capsule. Neither causes significant visual loss. KF rings occur in approximately 66% of diagnosed cases (more often in those with neurological symptoms rather than with liver problems). Kidneys: renal tubular acidosis (Type 2), a disorder of bicarbonate handling by the proximal tubules leads to nephrocalcinosis (calcium accumulation in the kidneys), a weakening of bones (due to calcium and phosphate loss), and occasionally aminoaciduria (loss of essential amino acids needed for protein synthesis). Heart: cardiomyopathy (weakness of the heart muscle) is a rare but recognized problem in Wilson's disease; it may lead to heart failure (fluid accumulation due to decreased pump function) and cardiac arrhythmias (episodes of irregular and/or abnormally fast or slow heart beat). 
Hormones: hypoparathyroidism (failure of the parathyroid glands leading to low calcium levels), panhypopituitarism (leading to decreased production of hormones from the pituitary gland), infertility, and recurrent miscarriage. Musculoskeletal: Arthritis and thinning of the bones (osteopenia or osteoporosis). Genetics The Wilson's disease gene (ATP7B) is on chromosome 13 (13q14.3) and is expressed primarily in the liver, kidney, and placenta. The gene codes for a P-type (cation transport enzyme) ATPase that transports copper into bile and incorporates it into ceruloplasmin. Most people who have Wilson's disease – 60% – are homozygous for ATP7B mutations (two abnormal copies), and 30% of them have only one abnormal copy. In up to 7% of cases, people with Wilson's disease have no detectable mutations. Although more than 500 mutations of ATP7B have been described, a very small number of those cause most cases of Wilson's disease; which mutation an individual will have tends to be specific to the population they are part of. For instance, in Western populations, the H1069Q mutation (replacement of a histidine by a glutamine at position 1069 in the protein) is present in 37%–63% of cases, while in China this mutation is very uncommon; R778L (arginine to leucine at 778) is found more often there. Relatively little is known about the relative impact of the various mutations, although the H1069Q mutation seems to predict later onset and predominantly neurological problems, according to some studies. A comprehensive clinically annotated resource, WilsonGen, provides a clinical classification for the variants as per the recent ACMG & AMP guidelines. A normal variation in the PRNP gene can modify the course of the disease by delaying the age of onset and affecting the type of symptoms that develop. This gene produces prion protein, which is active in the brain and other tissues and also appears to be involved in transporting copper. A role for the ApoE gene was initially suspected, but could not be confirmed. The condition is inherited in an autosomal recessive pattern. To inherit it, both of the parents of an individual must carry an affected gene. Most people with Wilson's disease have no family history of the condition. People with only one abnormal gene are called carriers (heterozygotes) and may have mild, but medically insignificant, abnormalities of copper metabolism. There are several hereditary diseases that cause copper overload in the liver; Wilson's disease is the most common of them. All can cause cirrhosis at a young age. The other copper overload diseases are Indian childhood cirrhosis (ICC), endemic Tyrolean infantile cirrhosis, and idiopathic copper toxicosis. These three, unlike Wilson's disease, are not related to ATP7B mutations; for example, ICC has been linked to mutations in the KRT8 and the KRT18 genes. Pathophysiology Copper is needed by the body for a number of functions, predominantly as a cofactor for a number of enzymes such as ceruloplasmin, cytochrome c oxidase, dopamine β-hydroxylase, superoxide dismutase, and tyrosinase. Copper enters the body through the digestive tract. A transporter protein on the cells of the small bowel, copper membrane transporter 1 (Ctr1; SLC31A1), carries copper inside the cells, where some is bound to metallothionein and part is carried by ATOX1 to an organelle known as the trans-Golgi network. Here, in response to rising concentrations of copper, an enzyme called ATP7A (Menkes' protein) releases copper into the portal vein to the liver. 
Liver cells also carry the CMT1 protein, and metallothionein and ATOX1 bind it inside the cell, but here, ATP7B links copper to ceruloplasmin and releases it into the bloodstream, as well as removing excess copper by secreting it into bile. Both functions of ATP7B are impaired in Wilson's disease. Copper accumulates in the liver tissue; ceruloplasmin is still secreted, but in a form that lacks copper (termed apo-ceruloplasmin) and is rapidly degraded in the bloodstream. When the amount of copper in the liver overwhelms the proteins that normally bind it, it causes oxidative damage to the liver through a process known as Fenton chemistry; this damage eventually leads to chronic active hepatitis, fibrosis (deposition of connective tissue), and cirrhosis. The liver also releases copper into the bloodstream that is not bound to ceruloplasmin. This free copper precipitates throughout the body, but particularly in the kidneys, eyes, and brain. In the brain, most copper is deposited in the basal ganglia, particularly in the putamen and globus pallidus (together called the lenticular nucleus); these areas normally participate in the coordination of movement and play a significant role in neurocognitive processes such as the processing of stimuli and mood regulation. Damage to these areas, again by Fenton chemistry, produces the neuropsychiatric symptoms seen in Wilson's disease. Why Wilson's disease causes hemolysis is unclear, but various lines of evidence suggest that a high level of free (nonceruloplasmin-bound) copper may be directly affecting the oxidation of hemoglobin, or inhibiting the energy-supplying enzymes in red blood cells, or causing direct damage to cell membranes. Diagnosis Wilson's disease may be suspected on the basis of any of the symptoms mentioned above, or when a close relative has been found to have Wilson's. Most have slightly abnormal liver function tests such as raised aspartate transaminase, alanine transaminase, and bilirubin levels. If the liver damage is significant, albumin may be decreased due to an inability of damaged liver cells to produce this protein; likewise, the prothrombin time (a test of coagulation) may be prolonged as the liver is unable to produce proteins known as clotting factors. Alkaline phosphatase levels are relatively low in those with Wilson's-related acute liver failure. If neurological symptoms are seen, magnetic resonance imaging of the brain is usually performed; this shows hyperintensities in the part of the brain called the basal ganglia in the T2 setting. MRI may also demonstrate the characteristic "face of the giant panda" pattern. No totally reliable test for Wilson's disease is known, but levels of ceruloplasmin and copper in the blood, as well of the amount of copper excreted in urine during a 24-hour period, are together used to form an impression of the amount of copper in the body. The most accurate test is a liver biopsy. Ceruloplasmin Levels of ceruloplasmin are abnormally low (<0.2 g/L) in 80–95% of cases. It can be present at normal levels, though, in people with ongoing inflammation, as it is an acute phase protein. Low ceruloplasmin is also found in Menkes disease and aceruloplasminemia, which are related to, but much rarer than Wilson's disease. The combination of neurological symptoms, eye signs, and a low ceruloplasmin level is considered sufficient for the diagnosis of Wilson's disease. In many cases, however, further tests are needed. 
Serum and urine copper Serum copper is low, which may seem paradoxical given that Wilson's disease is a disease of copper excess. However, 95% of plasma copper is carried by ceruloplasmin, which is often low in Wilson's disease. Urine copper is elevated in Wilson's disease and is collected for 24 hours in a bottle with a copper-free liner. Levels above 100 μg/24h (1.6 μmol/24h) confirm Wilson's disease, and levels above 40 μg/24h (0.6 μmol/24h) are strongly indicative. High urine copper levels are not unique to Wilson's disease; they are sometimes observed in autoimmune hepatitis and in cholestasis (any disease obstructing the flow of bile from the liver to the small bowel). In children, the following penicillamine test may be used: a 500 mg oral dose of penicillamine is administered, and all urine collected for 24 hours. If the entire day's urine contains more than 1600 μg (25 μmol) of copper, it is a reliable indicator of Wilson's disease. This test has not been validated in adults. Slit-lamp examination The eyes of the patient are examined using a slit lamp to look for Kayser–Fleischer rings, which are strongly associated with Wilson's disease and are caused by copper deposition on the inner cornea in Descemet's membrane. Liver biopsy Once other investigations have indicated Wilson's disease, the ideal test is the removal of a small amount of liver tissue through a liver biopsy. This is assessed microscopically for the degree of steatosis and cirrhosis, and histochemistry and quantification of copper are used to measure the severity of the copper accumulation. A level of 250 μg of copper per gram of dried liver tissue confirms Wilson's disease. Occasionally, lower levels of copper are found; in that case, the combination of the biopsy findings with all other tests could still lead to a formal diagnosis of Wilson's. In the earlier stages of the disease, the biopsy typically shows steatosis (deposition of fatty material), increased glycogen in the nucleus, and areas of necrosis (cell death). In more advanced disease, the changes observed are quite similar to those seen in autoimmune hepatitis, such as infiltration by inflammatory cells, piecemeal necrosis, and fibrosis (scar tissue). In advanced disease, finally, cirrhosis is the main finding. In acute liver failure, degeneration of the liver cells and collapse of the liver tissue architecture is seen, typically on a background of cirrhotic changes. Histochemical methods for detecting copper are inconsistent and unreliable, and taken alone are regarded as insufficient to establish a diagnosis. Genetic testing Mutation analysis of the ATP7B gene, as well as other genes linked to copper accumulation in the liver, may be performed. Once a mutation is confirmed, family members can be screened for the disease as part of clinical genetics family counseling. Regional distributions of genes associated with Wilson's disease are important to follow, as this can help clinicians design appropriate screening strategies. Since mutations of the ATP7B gene vary between populations, research and genetic testing done in countries such as the USA or United Kingdom can pose problems, as they tend to have more mixed populations. Treatment Diet In general, a diet low in copper-containing foods is recommended. High-copper foods avoided in Wilson's disease include mushrooms, nuts, chocolate, dried fruit, liver, sesame seeds, sesame oil, and shellfish. Medication Medical treatments are available for Wilson's disease. 
Some increase the removal of copper from the body, while others prevent the absorption of copper from the diet. Generally, penicillamine is the first treatment used. This binds to copper (by chelation) and leads to excretion of copper in the urine. Hence, monitoring of the amount of copper in the urine can be done to ensure a sufficiently high dose is taken. Penicillamine is not without problems; about 20% experience a side effect or complication of penicillamine treatment, such as drug-induced lupus (causing joint pains and a skin rash) or myasthenia (a nerve condition leading to muscle weakness). In those who presented with neurological symptoms, almost half experience a paradoxical worsening in their symptoms. While this phenomenon is observed in other treatments for Wilson's, it is usually taken as an indication for discontinuing penicillamine and commencing second-line treatment. Those intolerant to penicillamine may instead be commenced on trientine hydrochloride, which also has chelating properties. Some recommend trientine as first-line treatment, but experience with penicillamine is more extensive. A further agent with known activity in Wilson's disease, under clinical investigation by Wilson Therapeutics, is tetrathiomolybdate. It is regarded as experimental, though some studies have shown a beneficial effect. Once all results have returned to normal, zinc (usually in the form of a zinc acetate prescription called Galzin) may be used instead of chelators to maintain stable copper levels in the body. Zinc stimulates metallothionein, a protein in gut cells that binds copper and prevents its absorption and transport to the liver. Zinc therapy is continued unless symptoms recur or if the urinary excretion of copper increases. In rare cases where none of the oral treatments is effective, especially with severe neurological disease, dimercaprol (British anti-Lewisite) is occasionally necessary. This treatment is injected intramuscularly (into a muscle) every few weeks and has unpleasant side effects such as pain. People who are asymptomatic (for instance, those diagnosed through family screening or only as a result of abnormal test results) are generally treated, as the copper accumulation may cause long-term damage in the future. Whether these people are best treated with penicillamine or zinc acetate is unclear. Physical and occupational therapies Physiotherapy and occupational therapy are beneficial for patients with the neurological form of the disease. The copper-chelating treatment may take up to six months to start working, and these therapies can assist in coping with ataxia, dystonia, and tremors, as well as preventing the development of contractures that can result from dystonia. Transplantation Liver transplantation is an effective cure for Wilson's disease, but is used only in particular scenarios because of the risks and complications associated with the procedure. It is used mainly in people with fulminant liver failure who fail to respond to medical treatment or in those with advanced chronic liver disease. Liver transplantation is avoided in severe neuropsychiatric illnesses, in which its benefit has not been demonstrated. Prognosis Left untreated, Wilson's disease tends to become progressively worse and is eventually fatal. Serious complications include liver cirrhosis, acute kidney failure, and psychosis. Liver cancer and cholangiocarcinoma may occur, but at a lower incidence than other chronic liver diseases, and the risk is greatly reduced with treatment. 
With early detection and treatment, most of those affected can live relatively normal lives and have a life expectancy close to that of the general population. Liver and neurological damage that occurs prior to treatment may improve, but it is often permanent. Fertility is usually normal and pregnancy complications are not increased in those with Wilson's disease that is treated. History The disease bears the name of British physician Samuel Alexander Kinnier Wilson (1878–1937), a neurologist who described the condition, including the pathological changes in the brain and liver, in 1912. Wilson's work had been predated by, and drew on, reports from German neurologist Karl Westphal (in 1883), who termed it "pseudo-sclerosis"; by the British neurologist William Gowers (in 1888); by the Finnish neuropathologist Ernst Alexander Homén (in 1889–1892), who noted the hereditary nature of the disease; and by Adolph Strümpell (in 1898), who noted hepatic cirrhosis. Neuropathologist John Nathaniel Cumings made the link with copper accumulation in both the liver and the brain in 1948. The occurrence of hemolysis was noted in 1967. In 1951, Cumings (in England), and New Zealand neurologist Derek Denny-Brown (working in the United States), simultaneously reported the first effective treatment, using the metal chelator British anti-Lewisite. This treatment had to be injected, but was one of the first therapies available in the field of neurology, a field that classically was able to observe and diagnose, but had few treatments to offer. The first oral chelation agent effective in Wilson's disease, penicillamine, was discovered in 1956 by British neurologist John Walshe. In 1982, Walshe also introduced trientine, and was the first to develop tetra-thiomolybdate for clinical use. Zinc acetate therapy initially made its appearance in the Netherlands, where physicians Schouwink and Hoogenraad used it in 1961 and in the 1970s, respectively, and was further developed later by Brewer and colleagues at the University of Michigan. The genetic basis of Wilson's disease, and its link to ATP7B mutations, was elucidated by several research groups in the 1980s and 1990s. In other animals Hereditary copper accumulation has been described in Bedlington Terriers, where it generally only affects the liver. In Bedlington Terriers it is due to mutations in the COMMD1 (or MURR1) gene. The discovery of these mutations in the dogs led researchers to examine the corresponding human genes, but COMMD1 mutations could not be detected in humans with non-Wilsonian copper accumulation states (such as Indian childhood cirrhosis). See also Copper in health References External links Neurological disorders Hepatology Diseases of liver Autosomal recessive disorders Rare diseases Articles containing video clips Wikipedia medicine articles ready to translate Copper in health Wikipedia neurology articles ready to translate
Wilson's disease
[ "Chemistry" ]
5,238
[ "Biology and pharmacology of chemical elements", "Copper in health" ]
60,144
https://en.wikipedia.org/wiki/Wireless%20community%20network
Wireless community networks or wireless community projects or simply community networks, are non-centralized, self-managed and collaborative networks organized in a grassroots fashion by communities, non-governmental organizations and cooperatives in order to provide a viable alternative to municipal wireless networks for consumers. Many of these organizations set up wireless mesh networks which rely primarily on sharing of unmetered residential and business DSL and cable Internet. This sort of usage might be non-compliant with the terms of service of local internet service provider (ISPs) that deliver their service via the consumer phone and cable duopoly. Wireless community networks sometimes advocate complete freedom from censorship, and this position may be at odds with the acceptable use policies of some commercial services used. Some ISPs do allow sharing or reselling of bandwidth. The First Latin American Summit of Community Networks, held in Argentina in 2018, presented the following definition for the term "community network": "Community networks are networks collectively owned and managed by the community for non-profit and community purposes. They are constituted by collectives, indigenous communities or non-profit civil society organizations that exercise their right to communicate, under the principles of democratic participation of their members, fairness, gender equality, diversity and plurality". According to the Declaration on Community Connectivity, elaborated through a multistakeholder process organized by the Internet Governance Forum's Dynamic Coalition on Community Connectivity, community networks are recognised by a list of characteristics: Collective ownership; Social management; Open design; Open participation; Promotion of peering and transit; Promotion of the consideration of security and privacy concerns while designing and operating the network; and promotion of the development and circulation of local content in local languages. History Wireless community networks started as projects that evolved from amateur radio using packet radio, and from the free software community which substantially overlapped with the amateur radio community. Wireless neighborhood networks were established by technology enthusiasts in the early 2000s. The Redbricks Intranet Collective (RIC) started 1999 in Manchester, UK, to allow about 30 flats in the Bentley House Estate to share the subscription cost of one leased line from British Telecom (BT). Wi-Fi was quickly adopted by technology enthusiasts and hobbyists, because it was an open standard and consumer Wi-Fi hardware was comparatively cheap. Wireless community networks started out by turning wireless access points designed for short-range use in homes into multi-kilometre long-range Wi-Fi by building high-gain directional antennas. Rather than buying commercially available units, some of the early groups advocated home-built antennas. Examples include the cantenna and RONJA, an optical link that can be made from a smoke flue and LEDs. The circuitry and instructions for such DIY networking antennas were released under the GNU Free Documentation License (GFDL). Municipal wireless networks, funded by local governments, started being deployed from 2003 onward. 
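To see why the high-gain directional antennas mentioned above turn short-range consumer Wi-Fi into multi-kilometre links, a rough free-space link budget is enough. The transmit power, antenna gains, and receiver sensitivity below are illustrative assumptions, not figures from the text:

    import math

    def fspl_db(distance_km: float, freq_mhz: float) -> float:
        # Free-space path loss in dB, for distance in km and frequency in MHz
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    tx_power_dbm = 15        # typical consumer access point (assumed)
    antenna_gain_dbi = 24    # one directional dish at each end (assumed)
    sensitivity_dbm = -90    # receiver sensitivity at a low data rate (assumed)

    loss = fspl_db(5.0, 2437)                          # a 5 km link on 2.4 GHz channel 6
    rx = tx_power_dbm + 2 * antenna_gain_dbi - loss
    print(f"path loss {loss:.1f} dB, received {rx:.1f} dBm")  # about -51 dBm: the link closes
    # With stock 2 dBi antennas the same link would arrive near -95 dBm and fail.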
Regarding the international policy scenario, discussions on Community Networks have gained prominence over the last few years, especially since the creation of the Internet Governance Forum's Dynamic Coalition on Community Connectivity in 2016, providing "a much needed platform through which various individuals and entities interested in the advancement of CNs have the possibility to associate, organise and develop, in a bottom-up participatory fashion collective 'principles, rules, decision-making procedures and shared programs that give shape to the evolution and use of the Internet.'". Early community projects By 2003, a number of wireless community projects had established themselves in urban areas across North America, Europe and Australia. In June 2000, Melbourne Wireless Inc. was established in Melbourne Australia as a not-for-profit project to establish a metropolitan area wireless network using off-the-shelf 802.11 wireless equipment. By 2003, it had 1,200 hotspots. In 2000 Seattle Wireless was founded with the stated aim of providing free WiFi access and share the cost of Internet connectivity in Seattle, USA. By April 2011, it had 80 free wireless access points all over Seattle and was steadily growing. In August 2000, Consume was founded in London England as "collaborative strategy for the self provisioning of a broadband telecommunications infrastructure". Founded by Ben Laurie and others, Consume aimed to build a wireless infrastructure as alternative to the monopoly-held wired metropolitan area network. Besides providing Wi-Fi access in East London, Consume installed a large antenna on the roof of the former Greenwich Town Hall and documented the states of wireless connections in London. Consume created political pressure on municipal authorities, by staging public events, exhibitions, encouraging consumers to set up wireless equipment and setting up temporary Wi-Fi hotspots at events in East London. While Consume generated sustained media attention, it did not establish a lasting wireless community network. The Wireless Leiden hobbyist project was established in September 2001 and constituted as non-profit foundation in 2003 with more than 300 active users. The Wireless Leiden foundation aimed to facilitate the cooperation of local government, businesses and residents to provide wireless networking in Leiden Netherlands. The first wireless community network in Spain was RedLibre, founded in September 2001 in Madrid. By 2002 RedLibre coordinated the efforts of 15 local wireless groups and maintained free RedLibre Wi-Fi hotspots in five cities. RedLibre has been credited for facilitating the widespread availability of WLAN in the urban areas of Spain. In Italy, Ninux.org was founded by students and hackers in 2001 to create a grassroots wireless network in Rome, similar to Seattle Wireless. A turning point for Ninux was the lowering of prices in 2008 for consumer wireless equipment, such as antennas and routers. Ninux volunteers installed an increasing number of antennas on the roofs of Rome. The network served as example for other urban community wireless networks in Italy. By 2016, similar wireless networks had been installed in Florence, Bolongna, Pisa and Cosenza. While they share common technical and organizational frameworks, the working groups supporting these urban wireless community networks are driven by the different needs of the city in which they operate. Houston Wireless was founded in summer 2001 as the Houston Wireless Users Group. 
The telecommunications providers were slow to roll out third-generation wireless (3G), so Houston Wireless was established to promote high-speed wireless access across Houston and its suburbs. Houston Wireless experimented with network protocols such as IPsec, mobile IP and IPv6, as well as wireless technologies, including 802.11a, 802.11g and ultra-wideband (UWB). By 2003, it had 30 WLAN hotspots, 100 people on their mailing lists and their monthly meetings were attended by about 25 people. NYCwirelsss was established in New York City in May 2001 to provide public hotspots and promote the use of consumer owned unlicensed low-cost wireless networking equipment. In order to get more public Wi-Fi hotspots installed, NYCwirelsss contracted with the for-profit company Cloud Networks, which was staffed by some of the founding members of the NYCwireless community project. In the aftermath of the September 11 attacks in 2001 NYCwirelsss helped to provide emergency communication by quickly assembling and deploying free Wi-Fi hotspots in areas of New York City that had no other telecommunications. In summer 2002, the Bryant Park wireless network became the flagship project of NYCwireless, with about 50 users every day. By 2003 NYCwireless had more than 100 active hotspots throughout New York City. Early project in rural areas In 2000, guifi.net was founded because commercial internet service providers did not build a broadband Internet infrastructure in rural Catalonia. Guifi.net was conceived as a wireless mesh network, where households can become a node in the network by operating a radio transmitter. Not every node needs to be a wireless router, but the network relies on some volunteers being connected to the Internet and sharing that access with others. In 2017 guifi.net had 23,000 nodes and was described as the biggest mesh network in the world. In 2001, BCWireless founded to help communities in British Columbia, Canada, set up local Wi-Fi networks. BCWireless hobbyists experimented with IEEE 802.11b wireless networks and antennas to extend the range and power of signal, allow bandwidth sharing among local group members and establish wireless mesh networks. The Lac Seul First Nation communities set up their Wi-Fi network and constituted the non-profit K-Net to manage a wireless network based on IEEE 802.11g to provide the entire reserve with Wi-Fi using the unlicensed spectrum in combination with licensed spectrum at 3.5 GHz. Co-operation between community networks For the most, early wireless community projects had a local scope, but many still had a global awareness. In 2003, wireless community networks initiated the Pico Peering Agreement (PPA) and the Wireless Commons Manifesto. The two initiatives defined attempts to build an infrastructure, so that local wireless mesh networks could become extensive wireless ad hoc networks across local and national boundaries. In 2004, Freifunk released the OpenWrt-based firmware FFF for Wi-Fi devices that participate in a community network, which included a PPA, so that the owner of the node agrees to provide free transit across the network. Technical approach There are at least three technical approaches to building a wireless community network: Cluster: Advocacy groups which simply encourage sharing of unmetered internet bandwidth via Wi-Fi, may also index nodes, suggest uniform SSID (for low-quality roaming), supply equipment, DNS services, etc. 
Wireless mesh network: Technology groups which coordinate building a mesh network to provide Wi-Fi access to the internet Device-as-infrastructure: In 2013 the Open Technology Institute released the Commotion Wireless mesh network firmware, which allows Wi-Fi enabled mobile phones and computers to join a wireless community network by establishing a peer-to-peer network that still works when not connected to the wide area network. Firmware Wireless equipment, like many other consumer electronics, comes with hard-to-alter firmware that is preinstalled by the manufacturer. When the Linksys WRT54G series was launched in 2003 with an open source Linux kernel as firmware, it immediately became the subject of hacks and became the most popular hardware among community wireless volunteers. In 2005, Linksys released the WRT54GL version of its firmware, to make it even easier for customers to modify it. Community network hackers experimented with increasing the transmission power of the Linksys WRT54G or increasing the clock speed of the CPU to speed up data transmission. Hobbyists got another boost when in 2004 the OpenWrt firmware was released as open source alternative to proprietary firmware. The Linux-based embedded operating system could be used on embedded devices to route network traffic. Through successive versions, OpenWrt eventually could work on several hundred types of wireless devices and Wi-Fi routers. OpenWrt was named in honor of the WRT54G. The OpenWrt developers provided extensive documentation and the ability to include one's own code in the OpenWrt source code and compile the firmware. In 2004, Freifunk released the FFF firmware for wireless community projects, which modified OpenWrt so that the node could be configured via a web interface and added features to better support a wireless ad hoc network with traffic shaping, statistics, Internet gateway support and an implementation of the Optimized Link State Routing Protocol (OLSR). A Wi-Fi access point that booting the FFF firmware joined the network by automatically announcing its Internet gateway capabilities to other nodes using OLSR HNA4. When a node disappeared, the other nodes registered the change in the network topology through the discontinuation of HNA4 announcements. At the time, Freifunk in Berlin had 500 Wi-Fi access points and about 2,200 Berlin residents used the network free of charge. The Freifunk FFF firmware is among the oldest approaches to establishing a wireless mesh network at significant scale. Other early attempts at developing an operating system for wireless devices that supported large scale wireless community projects were Open-Mesh and Netsukuku. In 2006, Meraki Networks Inc was founded. The Meraki hardware and firmware had been developed as part of a PhD research project at the Massachusetts Institute of Technology to provide wireless access to graduate students. For years, the low-cost Meraki products fueled the growth of wireless mesh networks in 25 countries. Early Meraki-based wireless community networks included the Free-the-Net Meraki mesh in Vancouver, Canada. Constituted in 2006 as legal co-operative, members of the Vancouver Open Network Initiatives Cooperative paid five Canadian dollars per month to access the community wireless network provided by individuals who attached Meraki nodes to their home wireless connection, sharing bandwidth with any cooperative members nearby and participating in a meshed wireless network. 
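For a sense of what joining such a mesh looks like on a node, here is a minimal OpenWrt-style /etc/config/wireless sketch for an ad-hoc (IBSS) interface. It is a generic illustration rather than the actual configuration shipped by the FFF or Meraki firmware; the SSID, BSSID, and channel are placeholders, and exact option names vary between OpenWrt releases:

    config wifi-device 'radio0'
            option type 'mac80211'
            option channel '1'
            option hwmode '11g'

    config wifi-iface 'mesh0'
            option device 'radio0'
            option network 'mesh'
            option mode 'adhoc'                    # IBSS mode, as used by early community firmware
            option ssid 'example-community-mesh'   # placeholder network name
            option bssid '02:CA:FF:EE:BA:BE'       # fixed cell ID so all nodes join the same IBSS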
Community network software By 2003, the Sidney Wireless community project had launched the NodeDB software, to facilitate the work of community networks by mapping the nodes participating in a wireless mesh network. Nodes needed to be registered in the database, but the software generated a list of adjacent nodes. When registering a node that participated in a community network, the maintainer of the node could leave a note on the hardware, antenna reach and firmware in operation and so find other network community members who were willing to participate in a mesh. Organization Organizationally, a wireless community network requires either a set of affordable commercial technical solutions or a critical mass of hobbyists willing to tinker to maintain operations. Mesh networks require that a high level of community participation and commitment be maintained for the network to be viable. The mesh approach currently requires uniform equipment. One market-driven aspect of the mesh approach is that users who receive a weak mesh signal can often convert it to a strong signal by obtaining and operating a repeater node, thus extending the network. Such volunteer organizations focusing on technology that is rapidly advancing sometimes have schisms and mergers. The Wi-Fi service provided by such groups is usually free and without the stigma of piggybacking. An alternative to the voluntary model is to use a co-operative structure. Business models Wireless community projects made volunteer bandwidth-sharing technically feasible and have been credited with contributing to the emergence of alternative business models in the consumer Wi-Fi market. The commercial Wi-Fi provider Fon was established in 2006 in Spain. Fon customers were equipped with a Linksys Wi-Fi access point that runs a modified OpenWrt firmware so that Fon customers shared Wi-Fi access among each other. Public Wi-Fi provisioning through FON customers was broadened when FON entered a 50% revenue-sharing agreement with customers who made their entire unused bandwidth available for resale. In 2009, this business model gained broader acceptance when British Telecom allowed its own home customers to sell unused bandwidth to BT and FON roamers. Wireless community projects for the most provide best-effort Wi-Fi coverage. However, since the mid-2000s local authorities started to contract with wireless community networks to provide municipal wireless networks or stable Wi-Fi access in a defined urban area, such as a park. Wireless community networks started to participate in a variety of public-private partnerships. The non-profit community network ZAP Sherbrooke has partnered with public and private entities to provide Wi-Fi access and received financial support from the University of Sherbrooke and Bishop's University to extend the coverage of its wireless mesh throughout the city of Sherbrooke, Canada. Regulation Certain countries regulate the selling of internet access, requiring a license to sell internet access over a wireless network. In South Africa it is regulated by the Independent Communications Authority of South Africa (ICASA). They require that WISP's apply for a VANS or ECNS/ECS license before being allowed to resell internet access over a wireless link. The Internet Society's publication "Community Networks in Latin America: Challenges, Regulations and Solutions" brings a summary of regulations regarding Community Networks among Latin American countries, the United States and Canada. 
See also AWMN Bryggenet Community Broadband Network Computer network DD-WRT List of wireless community networks by region Multiple-input multiple-output communications (MIMO) Neighborhood Internet service provider South African wireless community networks OpenWireless.org, a project by the Electronic Frontier Foundation (EFF) Roofnet Wireless LAN Security nodewatcher, an open-source node database project References Wireless Internet service providers
Wireless community network
[ "Technology" ]
3,342
[ "Wireless networking", "Wireless network organizations" ]
60,162
https://en.wikipedia.org/wiki/Tidal%20locking
Tidal locking between a pair of co-orbiting astronomical bodies occurs when one of the objects reaches a state where there is no longer any net change in its rotation rate over the course of a complete orbit. In the case where a tidally locked body possesses synchronous rotation, the object takes just as long to rotate around its own axis as it does to revolve around its partner. For example, the same side of the Moon always faces Earth, although there is some variability because the Moon's orbit is not perfectly circular. Usually, only the satellite is tidally locked to the larger body. However, if both the difference in mass between the two bodies and the distance between them are relatively small, each may be tidally locked to the other; this is the case for Pluto and Charon, and for Eris and Dysnomia. Alternative names for the tidal locking process are gravitational locking, captured rotation, and spin–orbit locking. The effect arises between two bodies when their gravitational interaction slows a body's rotation until it becomes tidally locked. Over many millions of years, the interaction forces changes to their orbits and rotation rates as a result of energy exchange and heat dissipation. When one of the bodies reaches a state where there is no longer any net change in its rotation rate over the course of a complete orbit, it is said to be tidally locked. The object tends to stay in this state because leaving it would require adding energy back into the system. The object's orbit may migrate over time so as to undo the tidal lock, for example, if a giant planet perturbs the object. There is ambiguity in the use of the terms 'tidally locked' and 'tidal locking', in that some scientific sources use it to refer exclusively to 1:1 synchronous rotation (e.g. the Moon), while others include non-synchronous orbital resonances in which there is no further transfer of angular momentum over the course of one orbit (e.g. Mercury). In Mercury's case, the planet completes three rotations for every two revolutions around the Sun, a 3:2 spin–orbit resonance. In the special case where an orbit is nearly circular and the body's rotation axis is not significantly tilted, such as the Moon, tidal locking results in the same hemisphere of the revolving object constantly facing its partner. Regardless of which definition of tidal locking is used, the hemisphere that is visible changes slightly due to variations in the locked body's orbital velocity and the inclination of its rotation axis over time. Mechanism Consider a pair of co-orbiting objects, A and B. The change in rotation rate necessary to tidally lock body B to the larger body A is caused by the torque applied by A's gravity on bulges it has induced on B by tidal forces. The gravitational force from object A upon B will vary with distance, being greatest at the nearest surface to A and least at the most distant. This creates a gravitational gradient across object B that will distort its equilibrium shape slightly. The body of object B will become elongated along the axis oriented toward A, and conversely, slightly reduced in dimension in directions orthogonal to this axis. The elongated distortions are known as tidal bulges. (For the solid Earth, these bulges can reach displacements of up to around .) When B is not yet tidally locked, the bulges travel over its surface due to orbital motions, with one of the two "high" tidal bulges traveling close to the point where body A is overhead. 
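A rough expression for the differential pull that raises these bulges — a sketch, assuming the separation d is much larger than B's radius R_B, and considering a small parcel of mass m on B's near side relative to B's centre — is:

\[ \Delta F \;\approx\; \frac{2\,G\,m_A\,m\,R_B}{d^{3}}, \]

so the tide-raising force falls off with the cube of the distance, which is why close-in satellites are affected so much more strongly than distant ones.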
For large astronomical bodies that are nearly spherical due to self-gravitation, the tidal distortion produces a slightly prolate spheroid, i.e. an axially symmetric ellipsoid that is elongated along its major axis. Smaller bodies also experience distortion, but this distortion is less regular. The material of B exerts resistance to this periodic reshaping caused by the tidal force. In effect, some time is required to reshape B to the gravitational equilibrium shape, by which time the forming bulges have already been carried some distance away from the A–B axis by B's rotation. Seen from a vantage point in space, the points of maximum bulge extension are displaced from the axis oriented toward A. If B's rotation period is shorter than its orbital period, the bulges are carried forward of the axis oriented toward A in the direction of rotation, whereas if B's rotation period is longer, the bulges instead lag behind. Because the bulges are now displaced from the A–B axis, A's gravitational pull on the mass in them exerts a torque on B. The torque on the A-facing bulge acts to bring B's rotation in line with its orbital period, whereas the "back" bulge, which faces away from A, acts in the opposite sense. However, the bulge on the A-facing side is closer to A than the back bulge by a distance of approximately B's diameter, and so experiences a slightly stronger gravitational force and torque. The net resulting torque from both bulges, then, is always in the direction that acts to synchronize B's rotation with its orbital period, leading eventually to tidal locking. Orbital changes The angular momentum of the whole A–B system is conserved in this process, so that when B slows down and loses rotational angular momentum, its orbital angular momentum is boosted by a similar amount (there are also some smaller effects on A's rotation). This results in a raising of B's orbit about A in tandem with its rotational slowdown. For the other case where B starts off rotating too slowly, tidal locking both speeds up its rotation, and lowers its orbit. Locking of the larger body The tidal locking effect is also experienced by the larger body A, but at a slower rate because B's gravitational effect is weaker due to B's smaller mass. For example, Earth's rotation is gradually being slowed by the Moon, by an amount that becomes noticeable over geological time as revealed in the fossil record. Current estimations are that this (together with the tidal influence of the Sun) has helped lengthen the Earth day from about 6 hours to the current 24 hours (over about 4.5 billion years). Currently, atomic clocks show that Earth's day lengthens, on average, by about 2.3 milliseconds per century. Given enough time, this would create a mutual tidal locking between Earth and the Moon. The length of Earth's day would increase and the length of a lunar month would also increase. Earth's sidereal day would eventually have the same length as the Moon's orbital period, about 47 times the length of the Earth day at present. However, Earth is not expected to become tidally locked to the Moon before the Sun becomes a red giant and engulfs Earth and the Moon. For bodies of similar size the effect may be of comparable size for both, and both may become tidally locked to each other on a much shorter timescale. An example is the dwarf planet Pluto and its satellite Charon. They have already reached a state where Charon is visible from only one hemisphere of Pluto and vice versa. 
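The bookkeeping behind this exchange can be stated compactly; a sketch assuming a circular orbit and a satellite B much lighter than its primary A:

\[ L_{\mathrm{tot}} \;=\; I_B\,\omega_B \;+\; m_B\sqrt{G\,m_A\,a} \;\approx\; \text{constant}, \]

so when tidal torques reduce the spin rate ω_B, the semi-major axis a must grow to compensate, and a body spinning slower than synchronous is spun up while its orbit shrinks — matching the two cases described above.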
Eccentric orbits For orbits that do not have an eccentricity close to zero, the rotation rate tends to become locked with the orbital speed when the body is at periapsis, which is the point of strongest tidal interaction between the two objects. If the orbiting object has a companion, this third body can cause the rotation rate of the parent object to vary in an oscillatory manner. This interaction can also drive an increase in orbital eccentricity of the orbiting object around the primary – an effect known as eccentricity pumping. In some cases where the orbit is eccentric and the tidal effect is relatively weak, the smaller body may end up in a so-called spin–orbit resonance, rather than being tidally locked. Here, the ratio of the rotation period of a body to its own orbital period is some simple fraction different from 1:1. A well-known case is the rotation of Mercury, which is locked to its own orbit around the Sun in a 3:2 resonance. This results in the rotation speed roughly matching the orbital speed around perihelion. Many exoplanets (especially the close-in ones) are expected to be in spin–orbit resonances higher than 1:1. A Mercury-like terrestrial planet can, for example, become captured in a 3:2, 2:1, or 5:2 spin–orbit resonance, with the probability of each being dependent on the orbital eccentricity. Occurrence Moons All twenty known moons in the Solar System that are large enough to be round are tidally locked with their primaries, because they orbit very closely and tidal force increases rapidly (as a cubic function) with decreasing distance. On the other hand, most of the irregular outer satellites of the giant planets (e.g. Phoebe), which orbit much farther away than the large well-known moons, are not tidally locked. Pluto and Charon are an extreme example of a tidal lock. Charon is a relatively large moon in comparison to its primary and also has a very close orbit. This results in Pluto and Charon being mutually tidally locked. Pluto's other moons are not tidally locked; Styx, Nix, Kerberos, and Hydra all rotate chaotically due to the influence of Charon. Similarly, Eris and Dysnomia are mutually tidally locked. Orcus and Vanth might also be mutually tidally locked, but the data is not conclusive. The tidal locking situation for asteroid moons is largely unknown, but closely orbiting binaries are expected to be tidally locked, as well as contact binaries. Earth's Moon Earth's Moon's rotation and orbital periods are tidally locked with each other, so no matter when the Moon is observed from Earth, the same hemisphere of the Moon is always seen. Most of the far side of the Moon was not seen until 1959, when photographs of most of the far side were transmitted from the Soviet spacecraft Luna 3. When Earth is observed from the Moon, Earth does not appear to move across the sky. It remains in the same place while showing nearly all its surface as it rotates on its axis. Despite the Moon's rotational and orbital periods being exactly locked, about 59 percent of the Moon's total surface may be seen with repeated observations from Earth, due to the phenomena of libration and parallax. Librations are primarily caused by the Moon's varying orbital speed due to the eccentricity of its orbit: this allows up to about 6° more along its perimeter to be seen from Earth.
Parallax is a geometric effect: at the surface of Earth observers are offset from the line through the centers of Earth and Moon; this accounts for about a 1° difference in the Moon's surface which can be seen around the sides of the Moon when comparing observations made during moonrise and moonset. Planets It was thought for some time that Mercury was in synchronous rotation with the Sun. This was because whenever Mercury was best placed for observation, the same side faced inward. Radar observations in 1965 demonstrated instead that Mercury has a 3:2 spin–orbit resonance, rotating three times for every two revolutions around the Sun, which results in the same positioning at those observation points. Modeling has demonstrated that Mercury was captured into the 3:2 spin–orbit state very early in its history, probably within 10–20 million years after its formation. The 583.92-day interval between successive close approaches of Venus to Earth is equal to 5.001444 Venusian solar days, making approximately the same face visible from Earth at each close approach. Whether this relationship arose by chance or is the result of some kind of tidal locking with Earth is unknown. The exoplanet Proxima Centauri b, discovered in 2016, which orbits around Proxima Centauri, is almost certainly tidally locked, expressing either synchronized rotation or a 3:2 spin–orbit resonance like that of Mercury. One form of hypothetical tidally locked exoplanets are eyeball planets, which in turn are divided into "hot" and "cold" eyeball planets. Stars Close binary stars throughout the universe are expected to be tidally locked with each other, and extrasolar planets that have been found to orbit their primaries extremely closely are also thought to be tidally locked to them. An unusual example, confirmed by MOST, may be Tau Boötis, a star that is probably tidally locked by its planet Tau Boötis b. If so, the tidal locking is almost certainly mutual. Timescale An estimate of the time for a body to become tidally locked can be obtained using the following formula: t_lock ≈ ω a^6 I Q / (3 G m_p^2 k_2 R^5), where ω is the initial spin rate expressed in radians per second, a is the semi-major axis of the motion of the satellite around the planet (given by the average of the periapsis and apoapsis distances), I ≈ 0.4 m_s R^2 is the moment of inertia of the satellite, where m_s is the mass of the satellite and R is the mean radius of the satellite, Q is the dissipation function of the satellite, G is the gravitational constant, m_p is the mass of the planet (i.e., the object being orbited), and k_2 is the tidal Love number of the satellite. Q and k_2 are generally very poorly known except for the Moon, which has k_2/Q = 0.0011. For a really rough estimate it is common to take Q = 100 (perhaps conservatively, giving overestimated locking times), and k_2 ≈ 1.5 / (1 + 19μ/(2ρgR)), where ρ is the density of the satellite, g is the surface gravity of the satellite, and μ is the rigidity of the satellite. This can be roughly taken as 3 × 10^10 N/m^2 for rocky objects and 4 × 10^9 N/m^2 for icy ones. Even knowing the size and density of the satellite leaves many parameters that must be estimated (especially ω, Q, and μ), so that any calculated locking times obtained are expected to be inaccurate, even to factors of ten. Further, during the tidal locking phase the semi-major axis may have been significantly different from that observed nowadays due to subsequent tidal acceleration, and the locking time is extremely sensitive to this value. Because the uncertainty is so high, the above formulas can be simplified to give a somewhat less cumbersome one.
By assuming that the satellite is spherical, k_2 ≪ 1, Q = 100, and it is sensible to guess one revolution every 12 hours in the initial non-locked state (most asteroids have rotational periods between about 2 hours and about 2 days), the locking time becomes roughly t_lock ≈ 6 × 10^10 a^6 R μ / (m_s m_p^2) years, with masses in kilograms, distances in meters, and μ in newtons per meter squared; μ can be roughly taken as 3 × 10^10 N/m^2 for rocky objects and 4 × 10^9 N/m^2 for icy ones. There is an extremely strong dependence on semi-major axis a. For the locking of a primary body to its satellite as in the case of Pluto, the satellite and primary body parameters can be swapped. One conclusion is that, other things being equal (such as Q and μ), a large moon will lock faster than a smaller moon at the same orbital distance from the planet because m_s grows as the cube of the satellite radius R. A possible example of this is in the Saturn system, where Hyperion is not tidally locked, whereas the larger Iapetus, which orbits at a greater distance, is. However, this is not clear cut because Hyperion also experiences strong driving from the nearby Titan, which forces its rotation to be chaotic. The above formulae for the timescale of locking may be off by orders of magnitude, because they ignore the frequency dependence of k_2/Q. More importantly, they may be inapplicable to viscous binaries (double stars, or double asteroids that are rubble), because the spin–orbit dynamics of such bodies is defined mainly by their viscosity, not rigidity. List of known tidally locked bodies Solar System All the bodies below are tidally locked, and all but Mercury are moreover in synchronous rotation. (Mercury is tidally locked, but not in synchronous rotation.) Extra-solar The most successful detection methods of exoplanets (transits and radial velocities) suffer from a clear observational bias favoring the detection of planets near the star; thus, 85% of the exoplanets detected are inside the tidal locking zone, which makes it difficult to estimate the true incidence of this phenomenon. Tau Boötis is known to be locked to the close-orbiting giant planet Tau Boötis b. Bodies likely to be locked Solar System Based on comparison between the likely time needed to lock a body to its primary, and the time it has been in its present orbit (comparable with the age of the Solar System for most planetary moons), a number of moons are thought to be locked. However, their rotations are not known or not known enough. These are: Probably locked to Saturn: Daphnis, Aegaeon, Methone, Anthe, Pallene, Helene, and Polydeuces. Probably locked to Uranus: Cordelia, Ophelia, Bianca, Cressida, Desdemona, Juliet, Portia, Rosalind, Cupid, Belinda, Perdita, Puck, and Mab. Probably locked to Neptune: Naiad, Thalassa, Despina, Galatea, and Larissa. Probably mutually tidally locked: Orcus and Vanth. Extrasolar Gliese 581c, Gliese 581g, Gliese 581b, and Gliese 581e may be tidally locked to their parent star Gliese 581. Gliese 581d is almost certainly captured either into the 2:1 or the 3:2 spin–orbit resonance with the same star. All planets in the TRAPPIST-1 system are likely to be tidally locked. See also Pseudo-synchronous rotation – a near synchronization of revolution and rotation at periastron References Celestial mechanics Orbits Locking
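As a rough numerical illustration of the simplified locking-time estimate above, the following sketch (Python) evaluates it for Earth's Moon. The coefficient and unit conventions follow the approximate formula as reconstructed here, the input values are rounded, and the result should be read only as an order-of-magnitude figure.

    # Rough sketch of the simplified tidal-locking timescale discussed above.
    # SI units in, years out; results are only order-of-magnitude estimates.
    def lock_time_years(a_m, r_sat_m, mu_nm2, m_sat_kg, m_planet_kg):
        # t_lock ~ 6e10 * a^6 * R * mu / (m_s * m_p^2), in years (very approximate)
        return 6e10 * a_m**6 * r_sat_m * mu_nm2 / (m_sat_kg * m_planet_kg**2)

    # Example values (approximate): the Moon around Earth, rocky rigidity mu ~ 3e10 N/m^2.
    t_moon = lock_time_years(a_m=3.84e8, r_sat_m=1.74e6, mu_nm2=3e10,
                             m_sat_kg=7.35e22, m_planet_kg=5.97e24)
    print(f"Estimated locking time for the Moon: {t_moon:.2e} years")  # a few million years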
Tidal locking
[ "Physics" ]
3,641
[ "Celestial mechanics", "Classical mechanics", "Astrophysics" ]
60,167
https://en.wikipedia.org/wiki/Average
In ordinary language, an average is a single number or value that best represents a set of data. The type of average taken as most typically representative of a list of numbers is the arithmetic mean: the sum of the numbers divided by how many numbers are in the list. For example, the mean average of the numbers 2, 3, 4, 7, and 9 (summing to 25) is 5. Depending on the context, the most representative statistic to be taken as the average might be another measure of central tendency, such as the mid-range, median, mode or geometric mean. For example, the average personal income is often given as the median (the number below which are 50% of personal incomes and above which are 50% of personal incomes), because the mean would be higher by including personal incomes from a few billionaires. General properties If all numbers in a list are the same number, then their average is also equal to this number. This property is shared by each of the many types of average. Another universal property is monotonicity: if two lists of numbers A and B have the same length, and each entry of list A is at least as large as the corresponding entry on list B, then the average of list A is at least that of list B. Also, all averages satisfy linear homogeneity: if all numbers of a list are multiplied by the same positive number, then its average changes by the same factor. In some types of average, the items in the list are assigned different weights before the average is determined. These include the weighted arithmetic mean, the weighted geometric mean and the weighted median. Also, for some types of moving average, the weight of an item depends on its position in the list. Most types of average, however, satisfy permutation-insensitivity: all items count equally in determining their average value and their positions in the list are irrelevant; the average of (1, 2, 3, 4, 6) is the same as that of (3, 2, 6, 4, 1). Pythagorean means The arithmetic mean, the geometric mean and the harmonic mean are known collectively as the Pythagorean means. Statistical location The mode, the median, and the mid-range are often used in addition to the mean as estimates of central tendency in descriptive statistics. These can all be seen as minimizing variation by some measure. Mode The most frequently occurring number in a list is called the mode. For example, the mode of the list (1, 2, 2, 3, 3, 3, 4) is 3. It may happen that there are two or more numbers which occur equally often and more often than any other number. In this case there is no agreed definition of mode. Some authors say they are all modes and some say there is no mode. Median The median is the middle number of the group when they are ranked in order. (If there are an even number of numbers, the mean of the middle two is taken.) Thus to find the median, order the list according to its elements' magnitude and then repeatedly remove the pair consisting of the highest and lowest values until either one or two values are left. If exactly one value is left, it is the median; if two values, the median is the arithmetic mean of these two. This method takes the list 1, 7, 3, 13 and orders it to read 1, 3, 7, 13. Then the 1 and 13 are removed to obtain the list 3, 7. Since there are two elements in this remaining list, the median is their arithmetic mean, (3 + 7)/2 = 5. Mid-range The mid-range is the arithmetic mean of the highest and lowest values of a set.
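The worked examples above can be checked directly with Python's standard statistics module (a minimal sketch; the mid_range helper is not a library function and is defined here only for illustration):

    # Reproducing the worked examples above with Python's standard library.
    from statistics import mean, median, mode

    print(mean([2, 3, 4, 7, 9]))        # 5   (sum 25 divided by 5 values)
    print(mode([1, 2, 2, 3, 3, 3, 4]))  # 3   (the most frequent value)
    print(median([1, 7, 3, 13]))        # 5.0 (mean of the two middle values 3 and 7)

    def mid_range(values):
        # arithmetic mean of the highest and lowest values
        return (max(values) + min(values)) / 2

    print(mid_range([1, 7, 3, 13]))     # 7.0 ((1 + 13) / 2)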
Summary of types Even though perhaps not an average, the th quantile (another summary statistic that generalizes the median) can similarly be expressed as a solution to the optimization problem , which aims to minimize the total tilted absolute value loss (or quantile loss or pinball loss). The table of mathematical symbols explains the symbols used below. Miscellaneous types Other more sophisticated averages are: trimean, trimedian, and normalized mean, with their generalizations. One can create one's own average metric using the generalized f-mean: where f is any invertible function. The harmonic mean is an example of this using f(x) = 1/x, and the geometric mean is another, using f(x) = log x. However, this method for generating means is not general enough to capture all averages. A more general method for defining an average takes any function g(x1, x2, ..., xn) of a list of arguments that is continuous, strictly increasing in each argument, and symmetric (invariant under permutation of the arguments). The average y is then the value that, when replacing each member of the list, results in the same function value: . This most general definition still captures the important property of all averages that the average of a list of identical elements is that element itself. The function provides the arithmetic mean. The function (where the list elements are positive numbers) provides the geometric mean. The function (where the list elements are positive numbers) provides the harmonic mean. Average percentage return and CAGR A type of average used in finance is the average percentage return. It is an example of a geometric mean. When the returns are annual, it is called the Compound Annual Growth Rate (CAGR). For example, if we are considering a period of two years, and the investment return in the first year is −10% and the return in the second year is +60%, then the average percentage return or CAGR, R, can be obtained by solving the equation: . The value of R that makes this equation true is 0.2, or 20%. This means that the total return over the 2-year period is the same as if there had been 20% growth each year. The order of the years makes no difference – the average percentage returns of +60% and −10% is the same result as that for −10% and +60%. This method can be generalized to examples in which the periods are not equal. For example, consider a period of a half of a year for which the return is −23% and a period of two and a half years for which the return is +13%. The average percentage return for the combined period is the single year return, R, that is the solution of the following equation: , giving an average return R of 0.0600 or 6.00%. Moving average Given a time series, such as daily stock market prices or yearly temperatures, people often want to create a smoother series. This helps to show underlying trends or perhaps periodic behavior. An easy way to do this is the moving average: one chooses a number n and creates a new series by taking the arithmetic mean of the first n values, then moving forward one place by dropping the oldest value and introducing a new value at the other end of the list, and so on. This is the simplest form of moving average. More complicated forms involve using a weighted average. The weighting can be used to enhance or suppress various periodic behavior and there is very extensive analysis of what weightings to use in the literature on filtering. 
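A short sketch (Python) reproduces three of the calculations described in this section: the generalized f-mean (recovering the geometric and harmonic means), the two-year CAGR example, and a simple moving average. The helper names are illustrative rather than standard library functions.

    # Generalized f-mean described above: y = f_inv( (f(x1) + ... + f(xn)) / n ).
    import math

    def f_mean(values, f, f_inv):
        return f_inv(sum(f(x) for x in values) / len(values))

    print(f_mean([2.0, 8.0], math.log, math.exp))                # ~4.0, the geometric mean
    print(f_mean([2.0, 8.0], lambda x: 1 / x, lambda y: 1 / y))  # 3.2, the harmonic mean

    # CAGR example from the text: -10% in year one, +60% in year two.
    cagr = ((1 - 0.10) * (1 + 0.60)) ** 0.5 - 1
    print(round(cagr, 4))                                        # 0.2, i.e. 20% per year

    # Simple moving average with window n: average the first n values, then slide.
    def moving_average(series, n):
        return [sum(series[i:i + n]) / n for i in range(len(series) - n + 1)]

    print(moving_average([1, 2, 3, 4, 5, 6], 3))                 # [2.0, 3.0, 4.0, 5.0]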
In digital signal processing the term "moving average" is used even when the sum of the weights is not 1.0 (so the output series is a scaled version of the averages). The reason for this is that the analyst is usually interested only in the trend or the periodic behavior. History Origin The first recorded time that the arithmetic mean was extended from 2 to n cases for the use of estimation was in the sixteenth century. From the late sixteenth century onwards, it gradually became a common method to use for reducing errors of measurement in various areas. At the time, astronomers wanted to know a real value from noisy measurement, such as the position of a planet or the diameter of the moon. Using the mean of several measured values, scientists assumed that the errors add up to a relatively small number when compared to the total of all measured values. The method of taking the mean for reducing observation errors was indeed mainly developed in astronomy. A possible precursor to the arithmetic mean is the mid-range (the mean of the two extreme values), used for example in Arabian astronomy of the ninth to eleventh centuries, but also in metallurgy and navigation. However, there are various older vague references to the use of the arithmetic mean (which are not as clear, but might reasonably have to do with our modern definition of the mean). In a text from the 4th century, it was written that (text in square brackets is a possible missing text that might clarify the meaning): In the first place, we must set out in a row the sequence of numbers from the monad up to nine: 1, 2, 3, 4, 5, 6, 7, 8, 9. Then we must add up the amount of all of them together, and since the row contains nine terms, we must look for the ninth part of the total to see if it is already naturally present among the numbers in the row; and we will find that the property of being [one] ninth [of the sum] only belongs to the [arithmetic] mean itself... Even older potential references exist. There are records that from about 700 BC, merchants and shippers agreed that damage to the cargo and ship (their "contribution" in case of damage by the sea) should be shared equally among themselves. This might have been calculated using the average, although there seem to be no direct record of the calculation. Etymology The root is found in Arabic as عوار ʿawār, a defect, or anything defective or damaged, including partially spoiled merchandise; and عواري ʿawārī (also عوارة ʿawāra) = "of or relating to ʿawār, a state of partial damage". Within the Western languages the word's history begins in medieval sea-commerce on the Mediterranean. 12th and 13th century Genoa Latin avaria meant "damage, loss and non-normal expenses arising in connection with a merchant sea voyage"; and the same meaning for avaria is in Marseille in 1210, Barcelona in 1258 and Florence in the late 13th. 15th-century French avarie had the same meaning, and it begot English "averay" (1491) and English "average" (1502) with the same meaning. Today, Italian avaria, Catalan avaria and French avarie still have the primary meaning of "damage". 
The huge transformation of the meaning in English began with the practice in later medieval and early modern Western merchant-marine law contracts under which if the ship met a bad storm and some of the goods had to be thrown overboard to make the ship lighter and safer, then all merchants whose goods were on the ship were to suffer proportionately (and not whoever's goods were thrown overboard); and more generally there was to be proportionate distribution of any avaria. From there the word was adopted by British insurers, creditors, and merchants for talking about their losses as being spread across their whole portfolio of assets and having a mean proportion. Today's meaning developed out of that, and started in the mid-18th century, and started in English. Marine damage is either particular average, which is borne only by the owner of the damaged property, or general average, where the owner can claim a proportional contribution from all the parties to the marine venture. The type of calculations used in adjusting general average gave rise to the use of "average" to mean "arithmetic mean". A second English usage, documented as early as 1674 and sometimes spelled "averish", is as the residue and second growth of field crops, which were considered suited to consumption by draught animals ("avers"). There is earlier (from at least the 11th century), unrelated use of the word. It appears to be an old legal term for a tenant's day labour obligation to a sheriff, probably anglicised from "avera" found in the English Domesday Book (1085). The Oxford English Dictionary, however, says that derivations from German hafen haven, and Arabic ʿawâr loss, damage, have been "quite disposed of" and the word has a Romance origin. Averages as a rhetorical tool Due to the aforementioned colloquial nature of the term "average", the term can be used to obfuscate the true meaning of data and suggest varying answers to questions based on the averaging method (most frequently arithmetic mean, median, or mode) used. In his article "Framed for Lying: Statistics as In/Artistic Proof", University of Pittsburgh faculty member Daniel Libertz comments that statistical information is frequently dismissed from rhetorical arguments for this reason. However, due to their persuasive power, averages and other statistical values should not be discarded completely, but instead used and interpreted with caution. Libertz invites us to engage critically not only with statistical information such as averages, but also with the language used to describe the data and its uses, saying: "If statistics rely on interpretation, rhetors should invite their audience to interpret rather than insist on an interpretation." In many cases, data and specific calculations are provided to help facilitate this audience-based interpretation. See also Average absolute deviation Central limit theorem Expected value Law of averages Population mean Sample mean Notes References External links Calculations and comparison between arithmetic and geometric mean of two values Arithmetic functions Means Summary statistics
Average
[ "Physics", "Mathematics" ]
2,794
[ "Means", "Mathematical analysis", "Point (geometry)", "Arithmetic functions", "Geometric centers", "Number theory", "Symmetry" ]
60,237
https://en.wikipedia.org/wiki/Intel%20430HX
The Intel 430HX (codenamed Triton II) is a chipset from Intel, supporting Socket 7 processors, including the Pentium and Pentium MMX. It is also known as i430HX and it was released in February 1996. The official part number is 82430HX. Features The 430HX chipset had all the features of the 430FX (Triton I) plus support for ECC/parity RAM, two-way SMP, USB, and Concurrent PCI to improve speed. It consists of one 82439HX TXC (the northbridge) and one PIIX3 (the southbridge). The 430HX chipset supported up to 512MB of RAM (64MB or 512MB cacheable depending on tag RAM size). Limitations Not all 430HX boards allowed for tag RAM expansion, only allowing 64MB cacheable; the 430HX also did not support the then-new SDRAM memory technology. Dual-voltage support, for Pentium MMX or AMD K6 CPUs, was also not mandatory on 430HX boards, requiring the use of an interposer to step down the voltage. See also List of Intel chipsets References PC Guide. Summary of P5 chipsets, comp.sys.intel, September 1996. 430HX
Intel 430HX
[ "Technology" ]
283
[ "Computing stubs", "Computer hardware stubs" ]
60,244
https://en.wikipedia.org/wiki/Dugong
The dugong (; Dugong dugon) is a marine mammal. It is one of four living species of the order Sirenia, which also includes three species of manatees. It is the only living representative of the once-diverse family Dugongidae; its closest modern relative, Steller's sea cow (Hydrodamalis gigas), was hunted to extinction in the 18th century. The dugong is the only sirenian in its range, which spans the waters of some 40 countries and territories throughout the Indo-West Pacific. The dugong is largely dependent on seagrass communities for subsistence and is thus restricted to the coastal habitats that support seagrass meadows, with the largest dugong concentrations typically occurring in wide, shallow, protected areas such as bays, mangrove channels, the waters of large inshore islands and inter-reefal waters. The northern waters of Australia between Shark Bay and Moreton Bay are believed to be the dugong's contemporary stronghold. Like all modern sirenians, the dugong has a fusiform body with no dorsal fin or hind limbs. The forelimbs or flippers are paddle-like. The dugong is easily distinguishable from the manatees by its fluked, dolphin-like tail; moreover, it possesses a unique skull and teeth. Its snout is sharply downturned, an adaptation for feeding in benthic seagrass communities. The molar teeth are simple and peg-like, unlike the more elaborate molar dentition of manatees. The dugong has been hunted for thousands of years for its meat and oil. Traditional hunting still has great cultural significance in several countries in its modern range, particularly northern Australia and the Pacific Islands. The dugong's current distribution is fragmented, and many populations are believed to be close to extinction. The IUCN lists the dugong as a species vulnerable to extinction, while the Convention on International Trade in Endangered Species limits or bans the trade of derived products. Despite being legally protected in many countries, the main causes of population decline remain anthropogenic and include fishing-related fatalities, habitat degradation, and hunting. With its long lifespan of 70 years or more and slow rate of reproduction, the dugong is especially vulnerable to extinction. Evolution Dugongs are part of the Sirenia order of placental mammals which comprises modern "sea cows" (manatees as well as dugongs) and their extinct relatives. Sirenia are the only extant herbivorous marine mammals and the only group of herbivorous mammals to have become completely aquatic. Sirenians are thought to have a 50-million-year-old fossil record (early Eocene-recent). They attained modest diversity during the Oligocene and Miocene but subsequently declined as a result of climatic cooling, oceanographic changes, and human interference. Etymology and taxonomy The word "dugong" derives from the Visayan (probably Cebuano) . The name was first adopted and popularized by the French naturalist Georges-Louis Leclerc, Comte de Buffon, as "dugon" in Histoire Naturelle (1765), after descriptions of the animal from the island of Leyte in the Philippines. The name ultimately derives from Proto-Malayo-Polynesian *duyuŋ. Despite common misconception, the term does not come from Malay duyung and it does not mean "lady of the sea" (Mermaid). Other common local names include "sea cow", "sea pig" and "sea camel". It is known as the balguja by the Wunambal people of the Mitchell Plateau area in the Kimberley, Western Australia. 
Dugong dugon is the only extant species of the family Dugongidae, and one of only four extant species of the Sirenia order, the others forming the manatee family. It was first classified by Müller in 1776 as Trichechus dugon, a member of the manatee genus previously defined by Linnaeus. It was later assigned as the type species of Dugong by Lacépède and further classified within its own family by Gray and subfamily by Simpson. Dugongs and other sirenians are not closely related to other marine mammals, being more related to elephants. Dugongs and elephants share a monophyletic group with hyraxes and the aardvark, one of the earliest offshoots of eutherians. The fossil record shows sirenians appearing in the Eocene, where they most likely lived in the Tethys Ocean. The two extant families of sirenians are thought to have diverged in the mid-Eocene, after which the dugongs and their closest relative, the Steller's sea cow, split off from a common ancestor in the Miocene. The Steller's sea cow became extinct in the 18th century. No fossils exist of other members of the Dugongidae. Molecular studies have been made on dugong populations using mitochondrial DNA. The results have suggested that the population of Southeast Asia is distinct from the others. Australia has two distinct maternal lineages, one of which also contains the dugongs from Africa and Arabia. Limited genetic mixing has taken place between those in Southeast Asia and those in Australia, mostly around Timor. One of the lineages stretches from Moreton Bay to Western Australia, while the other only stretches from Moreton Bay to the Northern Territory. There is not yet sufficient genetic data to make clear boundaries between distinct groups. Anatomy and morphology The dugong's body is large with a cylindrical shape that tapers at both ends. It has thick, smooth skin that is a pale cream colour at birth, but darkens dorsally and laterally to brownish-to-dark-grey with age. The colour of a dugong can change due to the growth of algae on the skin. The body is sparsely covered in short hair, a common feature among sirenians which may allow for tactile interpretation of their environment. These hairs are most developed around the mouth, which has a large horseshoe-shaped upper lip forming a highly mobile muzzle. This muscular upper lip aids the dugong in foraging. The dugong's tail flukes and flippers are similar to those of dolphins. These flukes are raised up and down in long strokes to move the animal forward and can be twisted to turn. The forelimbs are paddle-like flippers which aid in turning and slowing. The dugong lacks nails on its flippers, which are only 15% of a dugong's body length. The tail has deep notches. A dugong's brain weighs a maximum of , about 0.1% of the animal's body weight. With very small eyes, dugongs have limited vision, but acute hearing within narrow sound thresholds. Their ears, which lack pinnae, are located on the sides of their head. The nostrils are located on top of the head and can be closed using valves. Dugongs have two teats, one located behind each flipper. There are few differences between the sexes; the body structures are almost the same. A male's testes are not externally located, and the main difference between males and females is the location of the genital aperture to the umbilicus and the anus. The lungs in a dugong are very long, extending almost as far as the kidneys, which are also highly elongated to cope with the saltwater environment. 
If the dugong is wounded, its blood will clot rapidly. The skull of a dugong is unique. The skull is enlarged with a sharply down-turned premaxilla, which is stronger in males. The spine has between 57 and 60 vertebrae. Unlike in manatees, the dugong's teeth do not continually grow back via horizontal tooth replacement. The dugong has two incisors (tusks) which emerge in males during puberty. The female's tusks continue to grow without emerging during puberty, sometimes erupting later in life after reaching the base of the premaxilla. The number of growth layer groups in a tusk indicates the age of a dugong, and the cheek teeth move forward with age. The full dental formula of dugongs is , meaning they have two incisors, three premolars, and three molars on each side of their upper jaw, and three incisors, one canine, three premolars, and three molars on each side of their lower jaw. Like other sirenians, the dugong experiences pachyostosis, a condition in which the ribs and other long bones are unusually solid and contain little or no marrow. These heavy bones, which are among the densest in the animal kingdom, may act as a ballast to help keep sirenians suspended slightly below the water's surface. An adult's length rarely exceeds . An individual this long is expected to weigh around . Weight in adults is typically more than and less than . The largest individual recorded was long and weighed , and was found off the Saurashtra coast of west India. Females tend to be larger than males. Distribution and habitat Dugongs are found in warm coastal waters from the western Pacific Ocean to the eastern coast of Africa, along an estimated of coastline between 26° and 27° to the north and south of the equator. Their historic range is believed to correspond to that of seagrasses from the Potamogetonaceae and Hydrocharitaceae families. The full size of the former range is unknown, although it is believed that the current populations represent the historical limits of the range, which is highly fractured. Their distributions during warmer periods of Holocene might have been broader than today. Today populations of dugongs are found in the waters of 37 countries and territories. Recorded numbers of dugongs are generally believed to be lower than actual numbers, due to a lack of accurate surveys. Despite this, the dugong population is thought to be shrinking, with a worldwide decline of 20 percent in the last 90 years. They have disappeared from the waters of Hong Kong, Mauritius, and Taiwan, as well as parts of Cambodia, Japan, the Philippines, and Vietnam. Further disappearances are likely. Dugongs are generally found in warm waters around the coast with large numbers concentrated in wide and shallow protected bays. The dugong is the only strictly marine herbivorous mammal, as all species of manatee utilise fresh water to some degree. Nonetheless, they can tolerate the brackish waters found in coastal wetlands, and large numbers are also found in wide and shallow mangrove channels and around leeward sides of large inshore islands, where seagrass beds are common. They are usually located at a depth of around , although in areas where the continental shelf remains shallow dugongs have been known to travel more than from the shore, descending to as far as , where deepwater seagrasses such as Halophila spinulosa are found. Special habitats are used for different activities. It has been observed that shallow waters are used as sites for calving, minimizing the risk of predation. 
Deep waters may provide a thermal refuge from cooler waters closer to the shore during winter. Australia Australia is home to the largest population, stretching from Shark Bay in Western Australia to Moreton Bay in Queensland. The population of Shark Bay is thought to be stable with over 10,000 dugongs. Smaller populations exist up the coast, including one in Ashmore Reef. Large numbers of dugongs live to the north of the Northern Territory, with a population of over 20,000 in the Gulf of Carpentaria alone. A population of over 25,000 exists in the Torres Strait such as off Thursday Island, although there is significant migration between the strait and the waters of New Guinea. The Great Barrier Reef provides important feeding areas for the species; this reef area houses a stable population of around 10,000, although the population concentration has shifted over time. Large bays facing north on the Queensland coast provide significant habitats for dugong, with the southernmost of these being Hervey Bay and Moreton Bay. Dugongs had been occasional visitors along the Gold Coast where a re-establishment of a local population through range expansions has started recently. Persian Gulf The Persian Gulf has the second-largest dugong population in the world, inhabiting most of the southern coast, and the current population is believed to range from 5,800 to 7,300. In the course of a study carried out in 1986 and 1999 on the Persian Gulf, the largest reported group sighting was made of more than 600 individuals to the west of Qatar. A 2017 study found a nearly 25% drop in population since 1950. Reasons for this drastic population loss include illegal poaching, oil spills, and net entanglement. East Africa and South Asia In the late 1960s, herds of up to 500 dugongs were observed off the coast of East Africa and nearby islands. Current populations in this area are extremely small, numbering 50 and below, and it is thought likely they will become extinct. The eastern side of the Red Sea is home to large populations numbering in the hundreds, and similar populations are thought to exist on the western side. In the 1980s, it was estimated there could be as many as 4,000 dugongs in the Red Sea. Dugong populations in Madagascar are poorly studied, but due to widespread exploitation, it is thought they may have severely declined, with few surviving individuals. The resident population around Mayotte is thought to number just 10 individuals. In Mozambique, most of the remaining local populations are very small and the largest (about 120 individuals) occurs at Bazaruto Island, but they have become rare in historical habitats such as in Maputo Bay and on Inhaca Island. The Bazaruto Island population is possibly the last long-term viable population in East Africa, with only some of its core territory lying within protected waters. The East African population is genetically distinct from those of the Red Sea and those off Madagascar. In Tanzania, observations have recently increased around the Mafia Island Marine Park where a hunt was intended by fishermen but failed in 2009. In the Seychelles, dugongs had been regarded as extinct in the 18th century until a small number was discovered around the Aldabra Atoll. This population may belong to a different group than that distributed among the inner isles. Dugongs once thrived among the Chagos Archipelago and Sea Cow Island was named after the species, although the species no longer occurs in the region. 
There are less than 250 individuals scattered throughout Indian waters. A highly isolated breeding population exists in the Marine National Park, Gulf of Kutch, the only remaining population in western India. It is from the population in the Persian Gulf, and from the nearest population in India. Former populations in this area, centered on the Maldives and the Lakshadweep, are presumed to be extinct. A population exists in the Gulf of Mannar Marine National Park and the Palk Strait between India and Sri Lanka, but it is seriously depleted. Recoveries of seagrass beds along former ranges of dugongs, such as the Chilika Lake have been confirmed in recent years, raising hopes for re-colorizations of the species. The population around the Andaman and Nicobar Islands is known only from a few records, and although the population was large during British rule, it is now believed to be small and scattered. Southeast Asia and the West Pacific A small population existed along the southern coast of China, particularly the Gulf of Tonkin (Beibu Gulf), where efforts were made to protect it, including the establishment of a seagrass sanctuary for dugong and other endangered marine fauna ranging in Guangxi. Despite these efforts, numbers continued to decrease, and in 2007 it was reported that no more dugong could be found on the west coast of the island of Hainan. Historically, dugongs were also present in the southern parts of the Yellow Sea. The last confirmed record of dugongs in Chinese waters was documented in 2008. In August 2022, an article published on the Royal Society Open Science concluded that dugongs were functionally extinct in China, which was based on a large-scale interview survey conducted across four southern Chinese maritime provinces (Hainan, Guangxi, Guangdong, and Fujian) in the summer of 2019. In Vietnam, dugongs have been restricted mostly to the provinces of Kiên Giang and Bà Rịa–Vũng Tàu, including Phu Quoc Island and Con Dao Island, which hosted large populations in the past. Con Dao is now the only site in Vietnam where dugongs are regularly seen, protected within the Côn Đảo National Park. Nonetheless, dangerously low levels of attention to the conservation of marine organisms in Vietnam and Cambodia may result in increased intentional or unintentional catches, and illegal trade is a potential danger for local dugongs. On Phu Quoc, the first 'Dugong Festival' was held in 2014, aiming to raise awareness of these issues. In Thailand, the present distribution of dugongs is restricted to six provinces along the Andaman Sea, and very few dugongs are present in the Gulf of Thailand. The Gulf of Thailand was historically home to a large number of animals, but none have been sighted in the west of the gulf in recent years, and the remaining population in the east is thought to be very small and possibly declining. Dugongs are believed to exist in the Straits of Johor in very small numbers. The waters around Borneo support a small population, with more scattered throughout the Malay Archipelago. All the islands of the Philippines once provided habitats for sizeable herds of dugongs. They were common until the 1970s when their numbers declined sharply due to accidental drownings in fishing gear and habitat destruction of seagrass meadows. Today, only isolated populations survive, most notably in the waters of the Calamian Islands in Palawan, Isabela in Luzon, Guimaras, and Mindanao. 
The dugong became the first marine animal protected by Philippine law, with harsh penalties for harming them. Recently, the local marine trash problem in the archipelago remained unabated and became the biggest threat to the already dwindling population of Dugongs in the country. Litters of plastic waste (single-use sachets, plastic bottles, fast food to-go containers, etc.) and other non-biodegradable materials abound in the coastal areas. As these materials may be mistaken as food by dugongs, these may lead to death due to plastic ingestion. Overpopulation and lack of education of all coastal fisherfolk in the Philippines regarding marine trash are harming the coastal environment not only in Palawan but also across the islands of the Philippines. The first documented sighting in Sarangani Bay occurred in July 2024. Populations also exist around the Solomon Islands and New Caledonia, stretching to an easternmost population in Vanuatu. A highly isolated population lives around the islands of Palau. A single dugong lives at Cocos (Keeling) Islands although the animal is thought to be a vagrant. Northern Pacific Today, possibly the smallest and northernmost population of dugongs exists around the Ryukyu islands, and a population formerly existed off Taiwan. An endangered population of 50 or fewer dugongs, possibly as few as three individuals, survives around Okinawa. New sightings of a cow and calf have been reported in 2017, indicating a possible breeding had occurred in these waters. A single individual was recorded at Amami Ōshima, at the northernmost edge of the dugong's historic range, more than 40 years after the last previous recorded sighting. A vagrant strayed into a port near Ushibuka, Kumamoto, and died due to poor health. Historically, the Yaeyama Islands held a large concentration of dugongs, with more than 300 individuals. On the Aragusuku Islands, large quantities of skulls are preserved at a utaki that outsiders are strictly forbidden to enter. Dugong populations in these areas were reduced by historical hunts as payments to the Ryukyu Kingdom, before being wiped out because of large-scale illegal hunting and fishing using destructive methods such as dynamite fishing after the Second World War. Populations around Taiwan appear to be almost extinct, although remnant individuals may visit areas with rich seagrass beds such as Dongsha Atoll. Some of the last reported sightings were made in Kenting National Park in the 1950s and 60s. There had been occasional records of vagrants at the Northern Mariana Islands before 1985. It is unknown how much mixing there was between these populations historically. Some theorize that populations existed independently, for example, that the Okinawan population was isolated members derived from the migration of a Philippine subspecies. Others postulate that the populations formed part of a super-population where migration between Ryukyu, Taiwan, and the Philippines was common. Extinct Mediterranean population It has been confirmed that dugongs once inhabited the water of the Mediterranean possibly until after the rise of civilizations along the inland sea. This population possibly shared ancestry with the Red Sea population, and the Mediterranean population had never been large due to geographical factors and climate changes. The Mediterranean is the region where the Dugongidae originated in the mid-late Eocene, along with Caribbean Sea. Ecology and life history Dugongs are long-lived, and the oldest recorded specimen reached age 73. 
They have few natural predators, although animals such as crocodiles, killer whales, and sharks pose a threat to the young, and a dugong has also been recorded to have died from trauma after being impaled by a stingray barb. A large number of infections and parasitic diseases affect dugongs. Detected pathogens include helminths, cryptosporidium, different types of bacterial infections, and other unidentified parasites. 30% of dugong deaths in Queensland since 1996 are thought to be because of disease. Although they are social animals, they are usually solitary or found in pairs due to the inability of seagrass beds to support large populations. Gatherings of hundreds of dugongs sometimes happen, but they last only for a short time. Because they are shy and do not approach humans, little is known about dugong behavior. They can go six minutes without breathing (though about two and a half minutes is more typical), and have been known to rest on their tails to breathe with their heads above water. They can dive to a maximum depth of ; they spend most of their lives no deeper than . Communication between individuals is through chirps, whistles, barks, and other sounds that echo underwater. Different sounds have been observed with different amplitudes and frequencies, implying different purposes. Visual communication is limited due to poor eyesight and is mainly used for activities such as lekking for courtship purposes. Mothers and calves are in almost constant physical contact, and calves have been known to reach out and touch their mothers with their flippers for reassurance. Dugongs are semi-nomadic, often traveling long distances in search of food, but staying within a certain range their entire lives. Large numbers often move together from one area to another. It is thought that these movements are caused by changes in seagrass availability. Their memory allows them to return to specific points after long travels. Dugong movements mostly occur within a localized area of seagrass beds, and animals in the same region show individualistic patterns of movement. Daily movement is affected by the tides. In areas where there is a large tidal range, dugongs travel with the tide to access shallower feeding areas. In Moreton Bay, dugongs often travel between foraging grounds inside the bay and warmer oceanic waters. At higher latitudes dugongs make seasonal travels to reach warmer water during the winter. Occasionally individual dugongs make long-distance travels over many days and can travel over deep ocean waters. One animal was seen as far south as Sydney. Although they are marine creatures, dugongs have been known to travel up creeks, and in one case a dugong was caught up a creek near Cooktown. Feeding Dugongs, along with other sirenians, are referred to as "sea cows" because their diet consists mainly of seagrass, particularly the genera Halophila and Halodule. When eating they ingest the whole plant, including the roots, although when this is impossible they will feed on just the leaves. A wide variety of seagrass has been found in dugong stomach contents, and evidence exists they will eat algae when seagrass is scarce. Although almost completely herbivorous, they will occasionally eat invertebrates such as jellyfish, sea squirts, and shellfish. Dugongs in Moreton Bay, Australia, are omnivorous, feeding on invertebrates such as polychaetes or marine algae when the supply of their choice grasses decreases. 
In other southern areas of both western and eastern Australia, there is evidence that dugongs actively seek out large invertebrates. This does not apply to dugongs in tropical areas, in which fecal evidence indicates that invertebrates are not eaten. Most dugongs do not feed on lush areas, but where the seagrass is more sparse. Additional factors such as protein concentration and regenerative ability also affect the value of a seagrass bed. The chemical structure and composition of the seagrass are important, and the grass species most often eaten are low in fiber, high in nitrogen, and easily digestible. In the Great Barrier Reef, dugongs feed on low-fiber high-nitrogen seagrass such as Halophila and Halodule, to maximize nutrient intake instead of bulk eating. Seagrasses of a lower seral are preferred, where the area has not fully vegetated. Only certain seagrass meadows are suitable for dugong consumption, due to the dugong's highly specialized diet. There is evidence that dugongs actively alter seagrass species compositions at local levels. Dugongs may search out deeper seagrass. Feeding trails have been observed as deep as , and dugongs have been seen feeding as deep as . Dugongs are relatively slow-moving, swimming at around . When moving along the seabed to feed they walk on their pectoral fins. Dugong feeding may favor the subsequent growth of low-fibre, high-nitrogen seagrasses such as Halophilia and Halodule. Species such as Zosteria capricorni are more dominant in established seagrass beds, but grow slowly, while Halophilia and Halodule grow quickly in the open space left by dugong feeding. This behavior is known as cultivation grazing and favors the rapidly growing, higher nutrient seagrasses that dugongs prefer. Dugongs may also prefer to feed on younger, less fibrous strands of seagrasses, and cycles of cultivation feeding at different seagrass meadows may provide them with a greater number of younger plants. Due to their poor eyesight, dugongs often use smell to locate edible plants. They also have a strong tactile sense and feel their surroundings with their long sensitive bristles. They will dig up an entire plant and then shake it to remove the sand before eating it. They have been known to collect a pile of plants in one area before eating them. The flexible and muscular upper lip is used to dig out the plants. This leaves furrows in the sand in their path. Reproduction and parental care A dugong reaches sexual maturity between the ages of eight and eighteen, older than in most other mammals. The way that females know how a male has reached sexual maturity is by the eruption of tusks in the male since tusks erupt in males when testosterone levels reach a high enough level. The age when a female first gives birth is disputed, with some studies placing the age between ten and seventeen years, while others place it as early as six years. There is evidence that male dugongs lose fertility at older ages. Despite the longevity of the dugong, which may live for 50 years or more, females give birth only a few times during their lives and invest considerable parental care in their young. The time between births is unclear, with estimates ranging from 2.4 to 7 years. Mating behaviour varies between populations located in different areas. In some populations, males will establish a territory that females in estrus will visit. In these areas, a male will try to impress the females while defending the area from other males, a practice known as lekking. 
In other areas many males will attempt to mate with the same female, sometimes inflicting injuries to the female or each other. During this, the female will have copulated with multiple males, who will have fought to mount her from below. This greatly increases the chances of conception. Females give birth after a 13- to 15-month gestation, usually to just one calf. Birth occurs in very shallow water, with occasions known where the mothers were almost on the shore. As soon as the young is born the mother pushes it to the surface to take a breath. Newborns are already long and weigh around . Once born, they stay close to their mothers, possibly to make swimming easier. The calf nurses for 14–18 months, although it begins to eat seagrasses soon after birth. A calf will only leave its mother once it has matured. Importance to humans Dugongs have historically provided easy targets for hunters, who killed them for their meat, oil, skin, and bones. As the anthropologist A. Asbjørn Jøn has noted, they are often considered the inspiration for mermaids, and people around the world developed cultures around dugong hunting. In some areas, it remains an animal of great significance, and a growing ecotourism industry around dugongs has had an economic benefit in some countries. There is a 5,000-year-old wall painting of a dugong, apparently drawn by Neolithic peoples, in Tambun Cave, Ipoh, Malaysia. This was discovered by Lieutenant R.L. Rawlings in 1959 while on a routine patrol. Dugongs feature in Southeast Asian, especially Austronesian, folklore. In languages like Ilocano, Mapun, Yakan, Tausug, and Kadazan Dusun of the Philippines and Sabah, the name for dugongs is a synonym for "mermaid". In Malay, they are sometimes referred to as perempoen laut ("woman of the sea") or putri duyong ("dugong princess"), leading to the misconception that the word "dugong" itself means "lady of the sea". A common belief found in the Philippines, Malaysia, Indonesia, and Thailand, is that dugongs were originally human or part-human (usually women), and that they cry when they are butchered or beached. Because of this, it is considered bad luck if a dugong is killed or accidentally dies in nets or fish corrals in the Philippines, some parts of Sabah (Malaysia), and northern Sulawesi and the Lesser Sunda Islands (Indonesia). Dugongs are predominantly not traditionally hunted for food in these regions and they remained plentiful until around the 1970s. Conversely, dugong "tears" are considered aphrodisiacs in other parts of Indonesia, Singapore, Malaysia, Brunei, Thailand, Vietnam, and Cambodia. Dugong meat is considered a luxury food and is also believed to have aphrodisiac properties. They are actively hunted in these regions, in some places to near-extinction. In Palau, dugongs were traditionally hunted with heavy spears from canoes. Although it is illegal and there is widespread disapproval of killing dugongs, poaching remains a major problem. Dugongs are also widely hunted in Papua New Guinea, the Solomon Islands, Vanuatu, and New Caledonia; where their meat and ornaments made from bones and tusks are highly prized in feasts and traditional rituals. However, hunting dugongs is considered taboo in some areas of Vanuatu. Dugong meat and oil have traditionally been some of the most valuable foods of Australian Aboriginals and Torres Strait Islanders. Some Aboriginals regard dugongs as part of their Aboriginality. Local fishermen in Southern China traditionally revered dugongs and regarded them as "miraculous fish". 
They believed it was bad luck to catch them and they were plentiful in the region before the 1960s. Beginning in the 1950s, a wave of immigrants from other regions that do not hold these beliefs resulted in dugongs being hunted for food and traditional Chinese medicine. This led to a steep decline in dugong populations in the Gulf of Tonkin and the sea around Hainan Island. In Japan, dugongs have been traditionally hunted in the Ryukyu Islands since prehistoric times. Carved ribs of dugongs in the shape of butterflies (a psychopomp) are found throughout Okinawa. They were commonly hunted throughout Japan up until around the 1970s. Dugongs have also played a role in legends in Kenya, and the animal is known there as the "Queen of the Sea". Body parts are used as food, medicine, and decorations. In the Gulf states, dugongs served not only as a source of food but their tusks were used as sword handles. Dugong oil is important as a preservative and conditioner for wooden boats to people around the Gulf of Kutch in India, who also believe the meat to be an aphrodisiac. Conservation Dugong numbers have decreased in recent times. For a population to remain stable, the mortality of adults cannot exceed 5% annually. The estimated percentage of females humans can kill without depleting the population is 1–2%. This number is reduced in areas where calving is minimal due to food shortages. Even in the best conditions, a population is unlikely to increase more than 5% a year, leaving dugongs vulnerable to over-exploitation. The fact that they live in shallow waters puts them under great pressure from human activity. Research on dugongs and the effects of human activity on them has been limited, mostly taking place in Australia. In many countries, dugong numbers have never been surveyed. As such, trends are uncertain, with more data needed for comprehensive management. The only data stretching back far enough to mention population trends comes from the urban coast of Queensland, Australia. The last major worldwide study, made in 2002, concluded that the dugong was declining and possibly extinct in a third of its range, with unknown status in another half. The IUCN Red List lists the dugong as vulnerable, and the Convention on International Trade in Endangered Species of Wild Fauna and Flora regulates and in some areas has banned international trade. Most dugong habitats fall within proposed important marine mammal areas. Regional cooperation is important due to the widespread distribution of the animal, and in 1998 there was strong support for Southeast Asian cooperation to protect dugongs. Kenya has passed legislation banning the hunting of dugongs and restricting trawling, but the dugong is not yet listed under Kenya's Wildlife Act as an endangered species. Mozambique has had legislation to protect dugongs since 1955, but this has not been effectively enforced. France has a National Action Plan covering the species, implemented within the Mayotte Marine Natural Park. Many marine parks have been established on the African coast of the Red Sea, and the Egyptian Gulf of Aqaba is fully protected. The United Arab Emirates has banned all hunting of dugongs within its waters, as has Bahrain. The UAE has additionally banned drift net fishing, and has declared an intention to restore coastal ecosystems dugongs rely on. India and Sri Lanka ban the hunting and selling of dugongs and their products. Japan has listed dugongs as endangered and has banned intentional killing and harassment. 
Hunting, catching, and harassment are banned by the People's Republic of China. The first marine mammal to be protected in the Philippines was the dugong, although monitoring this is difficult. Palau has legislated to protect dugongs, although this is not well enforced and poaching persists. Indonesia listed dugongs as a protected species in 1999, and in 2018 the Fisheries Ministry began implementing a conservation plan. Protection is not always enforced and souvenir products made from dugong parts can be openly found in markets in Bali. Traditional dugong hunters continued to hunt for many years, and some have struggled to find alternative incomes after ceasing. The dugong is a national animal of Papua New Guinea, which bans all except traditional hunting. Vanuatu and New Caledonia ban the hunting of dugongs. Dugongs are protected throughout Australia, although the rules vary by state; in some areas, indigenous hunting is allowed. Dugongs are listed under the Nature Conservation Act in the Australian state of Queensland as vulnerable. Most currently live in established marine parks, where boats must travel at a restricted speed and mesh net fishing is restricted. The World Wide Fund for Nature has purchased gillnet licences in northern Queensland to reduce the impact of fishing. In Vietnam, an illegal network targeting dugongs had been detected and was shut down in 2012. Potential hunts along Tanzanian coasts by fishermen have raised concerns as well. Human activity Despite being legally protected in many countries, the main causes of population decline remain anthropogenic and include hunting, habitat degradation, and fishing-related fatalities. Entanglement in fishing nets has caused many deaths, although there are no precise statistics. Most issues with industrial fishing occur in deeper waters where dugong populations are low, with local fishing being the main risk in shallower waters. As dugongs cannot stay underwater for a very long period, they are highly prone to death due to entanglement. The use of shark nets has historically caused large numbers of deaths, and they have been eliminated in most areas and replaced with baited hooks. Hunting has historically been a problem too, although in most areas they are no longer hunted, except in certain indigenous communities. In areas such as northern Australia, hunting has the greatest impact on the dugong population. Vessel strikes have proved a problem for manatees, but the relevance of this to dugongs is unknown. Increasing boat traffic has increased danger, especially in shallow waters. Ecotourism has increased in some countries, although the effects remain undocumented. It has been seen to cause issues in areas such as Hainan due to environmental degradation. Modern farming practices and increased land clearing have also had an impact, and much of the coastline of dugong habitats is undergoing industrialization, with increasing human populations. Dugongs accumulate heavy metal ions in their tissues throughout their lives, more so than other marine mammals. The effects are unknown. While international cooperation to form a conservative unit has been undertaken, socio-political needs are an impediment to dugong conservation in many developing countries. The shallow waters are often used as a source of food and income, problems exacerbated by aid used to improve fishing. In many countries, legislation does not exist to protect dugongs, and if it does it is not enforced. 
Oil spills are a danger to dugongs in some areas, as is land reclamation. In Okinawa, the small dugong population is threatened by United States military activity. Plans exist to build a military base close to the Henoko reef, and military activity also adds the threats of noise pollution, chemical pollution, soil erosion, and exposure to depleted uranium. The military base plans have been fought in US courts by some Okinawans, whose concerns include the impact on the local environment and dugong habitats. It was later revealed that the government of Japan was hiding evidence of the negative effects of ship lanes and human activities on dugongs observed during surveys carried out off Henoko reef. One of the three individuals has not been observed since June 2015, corresponding to the start of the excavation operations. Environmental degradation If dugongs do not get enough to eat they may calve later and produce fewer young. Food shortages can be caused by many factors, such as a loss of habitat, death and decline in the quality of seagrass, and a disturbance of feeding caused by human activity. Sewage, detergents, heavy metals, hypersaline water, herbicides, and other waste products all negatively affect seagrass meadows. Human activity such as mining, trawling, dredging, land reclamation, and boat propeller scarring also cause an increase in sedimentation which smothers seagrass and prevents light from reaching it. This is the most significant negative factor affecting seagrass. Halophila ovalis—one of the dugong's preferred species of seagrass—declines rapidly due to lack of light, dying completely after 30 days. Extreme weather such as cyclones and floods can destroy hundreds of square kilometres of seagrass meadows, as well as wash dugongs ashore. The recovery of seagrass meadows and the spread of seagrass into new areas, or areas where it has been destroyed, can take over a decade. Most measures for protection involve restricting activities such as trawling in areas containing seagrass meadows, with little to no action on pollutants originating from land. In some areas, water salinity is increased due to wastewater, and it is unknown how much salinity seagrass can withstand. Dugong habitat in the Oura Bay area of Henoko, Okinawa, Japan, is currently under threat from land reclamation conducted by the Japanese Government in order to build a US Marine base in the area. In August 2014, preliminary drilling surveys were conducted around the seagrass beds there. The construction is expected to seriously damage the dugong population's habitat, possibly leading to local extinction. Capture and captivity The Australian state of Queensland has sixteen dugong protection parks, and some preservation zones have been established where even Aboriginal Peoples are not allowed to hunt. Capturing animals for research has caused only one or two deaths; dugongs are expensive to keep in captivity due to the long time mothers and calves spend together, and the inability to grow the seagrass that dugongs eat in an aquarium. Only one orphaned calf has ever been successfully kept in captivity. Worldwide, only three dugongs are held in captivity. A female from the Philippines lives at Toba Aquarium in Toba, Mie, Japan. A male also lived there until he died on 10 February 2011. The second resides in Sea World Indonesia, after having been rescued from a fisherman's net and treated. The last one, a male, is kept at Sydney Aquarium, where he has resided since he was a juvenile. 
Sydney Aquarium had a second dugong for many years, until she died in 2018. Gracie, a captive dugong at Underwater World, Singapore, was reported to have died in 2014 at the age of 19, from complications arising from an acute digestive disorder. References External links Dugongidae EDGE species Extinct animals of Taiwan Fauna of the Indian Ocean Mammals described in 1776 Marine fauna of East Africa Marine fauna of Oceania Marine fauna of South Asia Marine fauna of Southeast Asia Taxa named by Philipp Ludwig Statius Müller Vulnerable animals Vulnerable biota of Africa Vulnerable fauna of Asia Vulnerable fauna of Australia
Dugong
[ "Biology" ]
8,761
[ "EDGE species", "Biodiversity" ]
60,258
https://en.wikipedia.org/wiki/Sea%20lion
Sea lions are pinnipeds characterized by external ear flaps, long foreflippers, the ability to walk on all fours, short and thick hair, and a big chest and belly. Together with the fur seals, they make up the family Otariidae, eared seals. The sea lions have six extant and one extinct species (the Japanese sea lion) in five genera. Their range extends from the subarctic to tropical waters of the global ocean in both the Northern and Southern Hemispheres, with the notable exception of the northern Atlantic Ocean. They have an average lifespan of 20–30 years. A male California sea lion weighs on average about and is about long, while the female sea lion weighs and is long. The largest sea lions are Steller's sea lions, which can weigh and grow to a length of . Sea lions consume large quantities of food at a time and are known to eat about 5–8% of their body weight (about ) at a single feeding. Sea lions can move around in water and at their fastest they can reach a speed of about . Three species, the Australian sea lion, the Galápagos sea lion and the New Zealand sea lion, are listed as endangered. Taxonomy Sea lions are related to walruses and seals. Together with the fur seals, they constitute the family Otariidae, collectively known as eared seals. Until recently, sea lions were grouped under a single subfamily called Otariinae, whereas fur seals were grouped in the subfamily Arcocephalinae. This division was based on the most prominent common feature shared by the fur seals and absent in the sea lions, namely the dense underfur characteristic of the former. Recent genetic evidence, suggests Callorhinus, the genus of the northern fur seal, is more closely related to some sea lion species than to the other fur seal genus, Arctocephalus. Therefore, the fur seal/sea lion subfamily distinction has been eliminated from many taxonomies. Nonetheless, all fur seals have certain features in common: the fur, generally smaller sizes, farther and longer foraging trips, smaller and more abundant prey items, and greater sexual dimorphism. All sea lions have certain features in common, in particular their coarse, short fur, greater bulk, and larger prey than fur seals. For these reasons, the distinction remains useful. The family Otariidae (Order Carnivora) contains the 15 extant species of fur seals and sea lions. Traditional classification of the family into the subfamilies Arctocephalinae (fur seals) and Otariinae (sea lions) is not supported, with the fur seal Callorhinus ursinus having a basal relationship relative to the rest of the family. This is consistent with the fossil record which suggests that this genus diverged from the line leading to the remaining fur seals and sea lions about 6 million years ago (mya). Similar genetic divergences between the sea lion clades as well as between the major Arctocephalus fur seal clades, suggest that these groups underwent periods of rapid radiation at about the time they diverged from each other. The phylogenetic relationships within the family and the genetic distances among some taxa highlight inconsistencies in the current taxonomic classification of the family. Arctocephalus is characterized by ancestral character states such as dense underfur and the presence of double rooted cheek teeth and is thus thought to represent the most "primitive" line. It was from this basal line that both the sea lions and the remaining fur seal genus, Callorhinus, are thought to have diverged. 
The fossil record from the western coast of North America presents evidence for the divergence of Callorhinus about 6 mya, whereas fossils in both California and Japan suggest that sea lions did not diverge until years later. Suborder Caniformia Family Otariidae Subfamily Arctocephalinae Genus Arctocephalus (southern fur seal; eight species) Genus Callorhinus (northern fur seal; one species) Subfamily Otariinae Genus Eumetopias Steller's sea lion, E. jubatus Genus Neophoca Australian sea lion, N. cinerea Genus Otaria South American sea lion, O. flavescens Genus Phocarctos New Zealand sea lion or Hooker's sea lion, P. hookeri Genus Zalophus California sea lion, Z. californianus Japanese sea lion, Z. japonicus – extinct (1950s) Galapagos sea lion, Z. wollebaeki Family Phocidae: true seals Family Odobenidae: walrus Physiology Diving adaptations There are many components that make up sea lion physiology and these processes control aspects of their behavior. Physiology dictates thermoregulation, osmoregulation, reproduction, metabolic rate, and many other aspects of sea lion ecology including but not limited to their ability to dive to great depths. The sea lions' bodies control heart rate, gas exchange, digestion rate, and blood flow to allow individuals to dive for a long period of time and prevent side effects of high pressure at depth. The high pressures associated with deep dives cause gases such as nitrogen to build up in tissues which are then released upon surfacing, possibly causing death. One of the ways sea lions deal with the extreme pressures is by limiting the amount of gas exchange that occurs when diving. The sea lion allows the alveoli to be compressed by the increasing water pressure thus forcing the surface air into cartilage lined airway just before the gas exchange surface. This process prevents any further oxygen exchange to the blood for muscles, requiring all muscles to be loaded with enough oxygen to last the duration of the dive. However, this shunt reduces the amount of compressed gases from entering tissues therefore reducing the risk of decompression sickness. The collapse of alveoli does not allow for any oxygen storage in the lungs, however. This means that sea lions must mitigate oxygen use in order to extend their dives. Oxygen availability is prolonged by the physiological control of heart rate in sea lions. By reducing heart rate to well below surface rates, oxygen is saved by reducing gas exchange as well as reducing the energy required for a high heart rate. Bradycardia is a control mechanism to allow a switch from pulmonary oxygen to oxygen stored in the muscles which is needed when the sea lions are diving to depth. Another way sea lions mitigate the oxygen obtained at the surface in dives is to reduce digestion rate. Digestion requires metabolic activity and therefore energy and oxygen are consumed during this process; however, sea lions can limit digestion rate and decrease it by at least 54%. This reduction in digestion results in a proportional reduction in oxygen use in the stomach and therefore a correlated oxygen supply for diving. Digestion rate in these sea lions increases back to normal rates immediately upon resurfacing. Oxygen depletion limits dive duration, but carbon dioxide (CO2) build-up also plays a role in the dive capabilities of many marine mammals. After a sea lion returns from a long dive, CO2 is not expired as fast as oxygen is replenished in the blood, due to the unloading complications with CO2. 
However, higher-than-normal levels of CO2 in the blood do not seem to adversely affect dive behavior. Compared to terrestrial mammals, sea lions have a higher tolerance for stored CO2, which is the signal that normally tells mammals they need to breathe. This reduced response to CO2 is likely brought on by an increase in the carotid bodies, the sensors that monitor oxygen levels and signal the animal's available oxygen supply. Even so, sea lions cannot avoid the effects of gradual CO2 build-up, which eventually forces them to spend more time at the surface after multiple repeated dives so that the accumulated CO2 can be expired. Parasites and diseases Galapagos sea lions (Zalophus wollebaeki) can be infected with Philophthalmus zalophi, an eye fluke. These infections have heavy impacts on the survival of juveniles, and the problem appears to be compounded by global warming. The number of infectious stages of different parasite species correlates strongly with temperature change, so the relationship between increasing parasitic infections and a changing climate needs to be considered. The Galapagos Islands undergo seasonal changes in sea surface temperature, with high temperatures from the beginning of January through May and lower temperatures for the rest of the year, and parasites appeared in the largest numbers when the sea temperature was at its highest. In one study, sea lions were captured so that their growth rates could be measured, and growth was recorded alongside sightings of parasites found under the eyelid. Pups were affected by the parasites from as early as 3 weeks of age up to 4 to 8 months, and the eye flukes did serious damage to the eye. Of the 91 pups studied, only 21 survived, a total of 70 deaths in a span of just two years. Because the parasites attack pups at such young ages, many never reach reproductive age; pup mortality far outstrips recruitment, so the population is not growing fast enough to keep the species out of danger. Other parasites, such as Anisakis and heartworm, can also infect sea lions. Australian sea lions (Neophoca cinerea) are likewise being affected by more frequent parasitic infections. The same methods were used on Australian pups as on the Galapagos Islands, with the addition of blood sampling; the Australian pups were affected by hookworms, which also emerged in large numbers at warmer temperatures. New Zealand sea lion pups (Phocarctos hookeri) were also affected at very early ages by hookworms (Uncinaria). The difference is that in New Zealand researchers began treatment, which appeared to be effective: treated pups showed no traces of the infection afterwards and had much better growth rates than untreated pups, although the proportion of pups carrying the infection remains high, at about 75%. Overall, parasites such as hookworms and eye flukes are killing enough pups in various parts of the world to push these populations toward endangerment, reducing reproductive success and affecting health and growth.
Similarly, climate change has resulted in increased toxic algae blooms in the oceans. These toxins are ingested by sardines and other fish which are then eaten by the sea lions, causing neurological damage and diseases such as epilepsy. Gene expressions and diet Gene expressions are being used more often to detect the physiological responses to nutrition, as well as other stressors. In a study done with four Steller sea lions (Eumetopias jubatus), three of the four sea lions underwent a 70-day trial which consisted of unrestricted food intake, acute nutritional stress, and chronic nutritional stress. Results showed that individuals under nutritional stress down-regulated some cellular processes within their immune response and oxidative stress. Nutritional stress was considered the most proximate cause of population decline in this species. In New Zealand sea lions, north-to south gradients driven by temperature differences were shown to be key factors in the prey mix. Adult California sea lions eat about 5% to 8% of their body weight per day (). California sea lions feed mainly offshore in coastal areas. They eat a variety of prey—such as squid, anchovies, mackerel, rockfish, and sardines—found in upwelling areas. They also may take fish from commercial fishing gear, sport fishing lines, and fish passage facilities at dams and rivers. Geographic variation Geographic variation for sea lions have been determined by the observations of skulls of several Otariidae species; a general change in size corresponds with a change in latitude and primary productivity. Skulls of Australian sea lions from Western Australia were generally smaller in length whereas the largest skulls are from cool temperate localities. Otariidae are in the process of species divergence, much of which may be driven by local factors, particularly latitude and resources. Populations of a given species tend to be smaller in the tropics, increase in size with increasing latitude, and reach a maximum in sub-polar regions. In a cool climate and cold waters there should be a selective advantage in the relative reduction of body surface area resulting from increased size, since the metabolic rate is related more closely to body surface area than to body weight. Breeding and population Breeding methods and habits Sea lions, with three groups of pinnipeds, have multiple breeding methods and habits over their families but they remain relatively universal. Otariids, or eared sea lions, raise their young, mate, and rest in more earthly land or ice habitats. Their abundance and haul-out behavior have a direct effect on their on land breeding activity. Their seasonal abundance trend correlates with their breeding period between the austral summer of January to March. Their rookeries populate with newborn pups as well as male and female otariids that remain to defend their territories. At the end of the breeding period males disseminate for food and rest while females remain for nurturing. Other points in the year consist of a mix of ages and genders in the rookeries with haul-out patterns varying monthly. Steller sea lions, living an average of 15 to 20 years, begin their breeding season when adult males establish territories along the rookeries in early May. Male sea lions reach sexual maturity from ages 5 to 7 and do not become territorial until around 9 to 13 years of age. The females arrive in late May bringing in an increase of territorial defense through fighting and boundary displays. 
After a week births consist most usually of one pup with a perinatal period of 3 to 13 days. Steller sea lions have exhibited multiple competitive strategies for reproductive success. Sea lion mating is often polygamous as males usually mate with different females to increase fitness and success, leaving some males to not find a mate at all. Polygamous males rarely provide parental care towards the pup. Strategies used to monopolize females include the resource-defense polygyny, or occupying important female resources. This involves occupying and defending a territory with resources or features attractive to females during sexually receptive periods. Some of these factors may include pupping habitat and access to water. Other techniques include potentially limiting access of other males to females. Population Otaria flavescens (South American sea lion) lives along the Chilean coast with a population estimate of 165,000. According to the most recent surveys in northern and southern Chile the sealing period of the middle twentieth century that left a significant decline in sea lion population is recovering. The recovery is associated with less hunting, otariids rapid population growth, legislation on nature reserves, and new food resources. Haul-out patterns change the abundance of sea lions at particular times of the day, month, and year. Patterns in migration relate to temperature, solar radiation, and prey and water resources. Studies of South American sea lions and other otariids document maximum population on land during early afternoon, potentially due to haul-out during high air temperatures. Adult and subadult males do not show clear annual patterns, maximum abundance being found from October to January. Females and their pups hauled-out during austral winter months of June to September. Interactions with humans South American sea lions have been greatly impacted by human exploitation. During the late Holocene period to the middle of the twentieth century, hunter-gatherers along the Beagle Channel and northern Patagonia had greatly reduced the number of sea lions due to their hunting of the species and exploitation of the species' environment. Although sealing has been put to a halt, in many countries, such as Uruguay, the sea lion population continues to decline because of the drastic effects humans have on their ecosystems. As a result, South American sea lions have been foraging at higher tropical latitudes than they did prior to human exploitation. Fishermen play a key role in the endangerment of sea lions. Sea lions rely on fish, like pollock, as a food source and have to compete with fishermen for it. When fishermen are successful at their job, they greatly reduce the sea lion's food source, which in turn endangers the species. Also, human presence and human recreational activities can cause sea lions to engage in violent and aggressive actions. When humans come closer than 15 meters of a sea lion, the sea lions' vigilance increases because of the disturbance of humans. These disturbances can potentially cause sea lions to have psychological stress responses that cause the sea lions to retreat, sometimes even abandon their locations, and decreases the amount of time sea lions spend hauling out. New Zealand sea lions were also exploited from hunting and sealing, and as a result were extirpated from New Zealand's mainland for over 150 years, with their population being restricted to the subantarctic. 
In 1993, a female New Zealand sea lion gave birth on the mainland for the first time, and since then, they have slowly been recolonizing. These sea lions are the only pinnipeds that regularly move up to inland into forests. As consequence, they have been hit by cars on roads, deliberately killed, and been disturbed by dogs. Females need to move inland as a way to protect their pups, so roads, fences, residential areas, and private lands can inhibit their dispersal and breeding success. They have also adapted to commercial pine forests, and have given birth or nursed pups in residents' backyards and on golf courses. As one of the world's rarest sea lions, and an endangered and endemic species, efforts are being made to facilitate coexistence between them and humans. Sea lion attacks on humans are rare, but when humans come within approximately , it can be very unsafe. In a highly unusual attack in 2007 in Western Australia, a sea lion leapt from the water and seriously mauled a 13-year-old girl surfing behind a speedboat. The sea lion appeared to be preparing for a second attack when the girl was rescued. An Australian marine biologist suggested that the sea lion may have viewed the girl "like a rag doll toy" to be played with. In San Francisco, where an increasingly large population of California sea lions crowds docks along San Francisco Bay, incidents have been reported in recent years of swimmers being bitten on the legs by large, aggressive males, possibly as territorial acts. In April 2015, a sea lion attacked a 62-year-old man who was boating with his wife in San Diego. The attack left the man with a punctured bone. In May 2017, a sea lion grabbed and pulled a girl into the water by her dress before retreating. The child was sitting on a pier side in British Columbia while tourists were illegally feeding the sea lions when the incident took place. She was pulled out of the water with minor injuries and received antibiotic prophylactic treatment for seal finger infection from the superficial bite injury. There have also been documented events of sea lions assisting humans. One such notable instance of this is when Kevin Hines jumped off the Golden Gate Bridge in a suicide attempt and was helped to stay afloat by a sea lion until he was rescued by the Coast Guard. Sea lions have also been a focus of tourism in Australia and New Zealand. One of the main sites to view sea lions is in the Carnac Island Nature Reserve near Perth in Western Australia. This tourist site receives over 100,000 visitors, many of whom are recreational boaters and tourists, who can watch the male sea lions haul out on to the shore. They have sometimes been called "the unofficial welcoming committee of the Galápagos Islands". Gallery See also List of carnivorans by population References Further reading Healy, Jack (March 2015). Starving Sea Lions Washing Ashore by the Hundreds in California. The New York Times Sea lion Paraphyletic groups Mammal common names Fur trade
Sea lion
[ "Biology" ]
4,137
[ "Phylogenetics", "Paraphyletic groups" ]
60,267
https://en.wikipedia.org/wiki/Billion-Dollar%20Brain
Billion-Dollar Brain is a 1966 Cold War spy novel by Len Deighton. It was the fourth to feature an unnamed secret agent working for the British WOOC(P) intelligence agency, following The IPCRESS File (1962), Horse Under Water (1963), and Funeral in Berlin (1964). As in most of Deighton's novels, the plot of Billion-Dollar Brain is intricate, with many dead ends. Plot The unnamed protagonist is ordered to Helsinki by Dawlish, his boss, to suppress a newspaper article, potentially embarrassing to the U.K. government, that is about to be published by a Finnish journalist. He finds the journalist murdered and coincidentally meets a young woman who attempts to recruit him into British intelligence. This woman, Signe Laine, is both romantically connected to and working for the protagonist's old American friend Harvey Newbegin (who also appeared in Funeral in Berlin). Newbegin in turn attempts to recruit him into a private intelligence outfit whose network is operated by 'The Brain', a billion-dollar supercomputer owned by the eccentric Texan billionaire General Midwinter. Midwinter is using his agency and private army to start an uprising in Latvia, at the time part of the USSR, to end Communism in the Eastern bloc and tip the balance of the Cold War in favour of the West. After discovering this, and also that a package Newbegin wants delivered from England to Finland contains virus-contaminated eggs stolen from a British research institute, the protagonist treks from Finland through Riga, Leningrad, New York City and Texas, and back to London. He infiltrates Midwinter's organization, braving unforgiving environments, violence and shifting loyalties, eventually returning to the Baltic to stop the virus from falling into the hands of either the Soviets or the billionaire madman, and to protect British reputations in the process. Film adaptation The novel was filmed as Billion Dollar Brain in 1967, the third instalment of the Harry Palmer series of films based on Deighton's novels, featuring Michael Caine; it was a commercial flop. References 1966 British novels "Unnamed hero" novels British novels adapted into films Cold War spy novels Novels set in Helsinki Jonathan Cape books Cold War in popular culture Fiction about bioterrorism Biological weapons in popular culture
Billion-Dollar Brain
[ "Biology" ]
473
[ "Biological weapons in popular culture", "Biological warfare" ]
60,288
https://en.wikipedia.org/wiki/Pewter
Pewter () is a malleable metal alloy consisting of tin (85–99%), antimony (approximately 5–10%), copper (2%), bismuth, and sometimes silver. In the past, it was an alloy of tin and lead, but most modern pewter, in order to prevent lead poisoning, is not made with lead. Pewter has a low melting point, around , depending on the exact mixture of metals. The word pewter is possibly a variation of "spelter", a term for zinc alloys (originally a colloquial name for zinc). History Pewter was first used around the beginning of the Bronze Age in the Near East. The earliest known piece of pewter was found in an Egyptian tomb, , but it is unlikely that this was the first use of the material. Pewter was used for decorative metal items and tableware in ancient times by the Egyptians and later the Romans, and came into extensive use in Europe from the Middle Ages until the various developments in pottery and glass-making during the 18th and 19th centuries. Pewter was a leading material for producing plates, cups, and bowls before the wide adoption of porcelain. Mass production of pottery, porcelain and glass products have almost universally replaced pewter in daily life, although pewter artifacts continue to be produced, mainly as decorative or specialty items. Pewter was also used around East Asia. Although some items still exist, ancient Roman pewter is rare. Lidless mugs and lidded tankards may be the most familiar pewter artifacts from the late 17th and 18th centuries, although the metal was also used for many other items including porringers (shallow bowls), plates, dishes, basins, spoons, measures, flagons, communion cups, teapots, sugar bowls, beer steins (tankards), and cream jugs. In the early 19th century, changes in fashion caused a decline in the use of pewter flatware. At the same time, production increased of both cast and spun pewter tea sets, whale-oil lamps, candlesticks, and so on. Later in the century, pewter alloys were often used as a base metal for silver-plated objects. In the late 19th century, pewter came back into fashion with the revival of medieval objects for decoration. New replicas of medieval pewter objects were created, and collected for decoration. Today, pewter is used in decorative objects, mainly collectible statuettes and figurines, game figures, aircraft and other models, (replica) coins, pendants, plated jewellery and so on. Certain athletic contests, such as the United States Figure Skating Championships, award pewter medals to fourth-place finishers. Types In antiquity, pewter was tin alloyed with lead and sometimes also copper. Older pewters with higher lead content are heavier, tarnish faster, and their oxidation has a darker, silver-gray color. Pewters containing lead are no longer used in items that will come in contact with the human body (such as cups, plates, or jewelry), due to the toxicity of lead. Modern pewters are available that are completely free of lead, although many pewters containing lead are still being produced for other purposes. A typical European casting alloy contains 94% tin, 1% copper and 5% antimony. A European pewter sheet would contain 92% tin, 2% copper, and 6% antimony. Asian pewter, produced mostly in Malaysia, Singapore, and Thailand, contains a higher percentage of tin, usually 97.5% tin, 1% copper, and 1.5% antimony. This makes the alloy slightly softer. The term Mexican pewter is used for any of various alloys of aluminium that are used for decorative items. Pewter is also used to imitate platinum in costume jewelry. 
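The compositions quoted above are simple mass percentages, so the amount of each metal needed for a batch of a given pewter type is straightforward arithmetic. The short Python sketch below illustrates this; the 500 g batch size and the function name are illustrative assumptions, not figures or tooling from this article.

```python
# Illustrative arithmetic only: converts the alloy compositions quoted above
# (percentages by mass) into component masses for a chosen batch size.
# The 500 g batch size is an arbitrary example, not a figure from the article.

COMPOSITIONS = {
    # name: {metal: fraction by mass}
    "European casting alloy": {"tin": 0.94, "copper": 0.01, "antimony": 0.05},
    "European pewter sheet": {"tin": 0.92, "copper": 0.02, "antimony": 0.06},
    "Asian pewter": {"tin": 0.975, "copper": 0.01, "antimony": 0.015},
}

def component_masses(alloy: str, total_grams: float) -> dict[str, float]:
    """Return the mass of each metal needed for `total_grams` of the alloy."""
    fractions = COMPOSITIONS[alloy]
    # Sanity check: the quoted percentages should account for the whole mass.
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 100%"
    return {metal: round(frac * total_grams, 2) for metal, frac in fractions.items()}

if __name__ == "__main__":
    for name in COMPOSITIONS:
        print(name, component_masses(name, 500.0))  # e.g. a 500 g batch
```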
Properties Pewter, being a relatively soft material, can be manipulated in various ways such as being cast, hammered, turned, spun and engraved. Because pewter is soft at room temperature, a pewter bell does not ring clearly. Cooling it in liquid nitrogen hardens it and enables it to ring, but also makes it more brittle. See also Britannia metal English pewter Spin casting Solder Notes References External links PewterBank Fusible alloys Tin alloys
Pewter
[ "Chemistry", "Materials_science" ]
881
[ "Tin alloys", "Alloys", "Metallurgy", "Fusible alloys" ]
60,289
https://en.wikipedia.org/wiki/Homeobox
A homeobox is a DNA sequence, around 180 base pairs long, that regulates large-scale anatomical features in the early stages of embryonic development. Mutations in a homeobox may change large-scale anatomical features of the full-grown organism. Homeoboxes are found within genes that are involved in the regulation of patterns of anatomical development (morphogenesis) in animals, fungi, plants, and numerous single cell eukaryotes. Homeobox genes encode homeodomain protein products that are transcription factors sharing a characteristic protein fold structure that binds DNA to regulate expression of target genes. Homeodomain proteins regulate gene expression and cell differentiation during early embryonic development, thus mutations in homeobox genes can cause developmental disorders. Homeosis is a term coined by William Bateson to describe the outright replacement of a discrete body part with another body part, e.g. antennapedia, in which the antenna on the head of a fruit fly is replaced with legs. The "homeo-" prefix in the words "homeobox" and "homeodomain" stems from this mutational phenotype, which is observed when some of these genes are mutated in animals. The homeobox domain was first identified in a number of Drosophila homeotic and segmentation proteins, but is now known to be well-conserved in many other animals, including vertebrates. Discovery The existence of homeobox genes was first discovered in Drosophila by isolating the gene responsible for a homeotic transformation where legs grow from the head instead of the expected antennae. Walter Gehring identified a gene called antennapedia that caused this homeotic phenotype. Analysis of antennapedia revealed that this gene contained a 180 base pair sequence that encoded a DNA binding domain, which William McGinnis termed the "homeobox". The existence of additional Drosophila genes containing the antennapedia homeobox sequence was independently reported by Ernst Hafen, Michael Levine, William McGinnis, and Walter Jakob Gehring of the University of Basel in Switzerland and Matthew P. Scott and Amy Weiner of Indiana University in Bloomington in 1984. Isolation of homologous genes by Edward de Robertis and William McGinnis revealed that numerous genes from a variety of species contained the homeobox. Subsequent phylogenetic studies detailing the evolutionary relationship between homeobox-containing genes showed that these genes are present in all bilaterian animals. Homeodomain structure The characteristic homeodomain protein fold consists of a 60-amino acid long domain composed of three alpha helices. The consensus homeodomain is the following 60-residue chain, with helix 1 lying toward the N-terminus, helix 2 in the middle of the domain, and helix 3/4 toward the C-terminal end: RRRKRTAYTRYQLLELEKEFHFNRYLTRRRRIELAHSLNLTERHIKIWFQNRRMKWKKEN. Helix 2 and helix 3 form a so-called helix-turn-helix (HTH) structure, where the two alpha helices are connected by a short loop region. The N-terminal two helices of the homeodomain are antiparallel and the longer C-terminal helix is roughly perpendicular to the axes of the first two. It is this third helix that interacts directly with DNA via a number of hydrogen bonds and hydrophobic interactions, as well as indirect interactions via water molecules, which occur between specific side chains and the exposed bases within the major groove of the DNA. Homeodomain proteins are found in eukaryotes.
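For readers who want to work with the consensus sequence above programmatically, the following minimal Python sketch (not part of the article's sources) stores it, confirms its 60-residue length, and tallies its arginine and lysine content, the basic residues noted below as forming hydrogen bonds to the DNA backbone.

```python
# A minimal sketch: stores the consensus homeodomain sequence quoted above
# and reports its length and its content of basic residues (arginine R,
# lysine K), which make many of the DNA backbone contacts described in
# the text. The function name is illustrative, not from the article.

CONSENSUS = "RRRKRTAYTRYQLLELEKEFHFNRYLTRRRRIELAHSLNLTERHIKIWFQNRRMKWKKEN"

def basic_residue_fraction(seq: str) -> float:
    """Fraction of residues that are arginine or lysine."""
    basic = sum(1 for aa in seq if aa in "RK")
    return basic / len(seq)

if __name__ == "__main__":
    print("length:", len(CONSENSUS))  # expect 60 residues
    print("Arg+Lys fraction: %.2f" % basic_residue_fraction(CONSENSUS))
```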
Through the HTH motif, they share limited sequence similarity and structural similarity to prokaryotic transcription factors, such as lambda phage proteins that alter the expression of genes in prokaryotes. The HTH motif shows some sequence similarity but a similar structure in a wide range of DNA-binding proteins (e.g., cro and repressor proteins, homeodomain proteins, etc.). One of the principal differences between HTH motifs in these different proteins arises from the stereochemical requirement for glycine in the turn which is needed to avoid steric interference of the beta-carbon with the main chain: for cro and repressor proteins the glycine appears to be mandatory, whereas for many of the homeotic and other DNA-binding proteins the requirement is relaxed. Sequence specificity Homeodomains can bind both specifically and nonspecifically to B-DNA with the C-terminal recognition helix aligning in the DNA's major groove and the unstructured peptide "tail" at the N-terminus aligning in the minor groove. The recognition helix and the inter-helix loops are rich in arginine and lysine residues, which form hydrogen bonds to the DNA backbone. Conserved hydrophobic residues in the center of the recognition helix aid in stabilizing the helix packing. Homeodomain proteins show a preference for the DNA sequence 5'-TAAT-3'; sequence-independent binding occurs with significantly lower affinity. The specificity of a single homeodomain protein is usually not enough to recognize specific target gene promoters, making cofactor binding an important mechanism for controlling binding sequence specificity and target gene expression. To achieve higher target specificity, homeodomain proteins form complexes with other transcription factors to recognize the promoter region of a specific target gene. Biological function Homeodomain proteins function as transcription factors due to the DNA binding properties of the conserved HTH motif. Homeodomain proteins are considered to be master control genes, meaning that a single protein can regulate expression of many target genes. Homeodomain proteins direct the formation of the body axes and body structures during early embryonic development. Many homeodomain proteins induce cellular differentiation by initiating the cascades of coregulated genes required to produce individual tissues and organs. Other proteins in the family, such as NANOG are involved in maintaining pluripotency and preventing cell differentiation. Regulation Hox genes and their associated microRNAs are highly conserved developmental master regulators with tight tissue-specific, spatiotemporal control. These genes are known to be dysregulated in several cancers and are often controlled by DNA methylation. The regulation of Hox genes is highly complex and involves reciprocal interactions, mostly inhibitory. Drosophila is known to use the polycomb and trithorax complexes to maintain the expression of Hox genes after the down-regulation of the pair-rule and gap genes that occurs during larval development. Polycomb-group proteins can silence the Hox genes by modulation of chromatin structure. Mutations Mutations to homeobox genes can produce easily visible phenotypic changes in body segment identity, such as the Antennapedia and Bithorax mutant phenotypes in Drosophila. Duplication of homeobox genes can produce new body segments, and such duplications are likely to have been important in the evolution of segmented animals. 
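The preference for the 5'-TAAT-3' core site described in the sequence-specificity discussion above can be made concrete with a simple motif scan. The Python sketch below is a toy example only: the input sequence is invented for demonstration, and, as the text notes, real target recognition also depends on flanking bases and cofactor binding.

```python
# Minimal motif scan illustrating the 5'-TAAT-3' core preference described
# above. The example DNA string is made up; this is not a target-prediction
# tool, just an illustration of scanning both strands for the core site.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def find_core_sites(dna: str, core: str = "TAAT") -> list[tuple[int, str]]:
    """Return (0-based position, strand) for each occurrence of the core
    motif on the forward (+) or reverse (-) strand of `dna`."""
    dna = dna.upper()
    rc_core = core.translate(COMPLEMENT)[::-1]  # reverse complement: 'ATTA'
    hits = []
    for i in range(len(dna) - len(core) + 1):
        window = dna[i:i + len(core)]
        if window == core:
            hits.append((i, "+"))
        if window == rc_core:
            hits.append((i, "-"))
    return hits

if __name__ == "__main__":
    example = "GGCTAATTGCATTACCGTAATGG"  # invented sequence
    print(find_core_sites(example))      # expect hits at 3(+), 10(-), 17(+)
```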
Evolution Phylogenetic analysis of homeobox gene sequences and homeodomain protein structures suggests that the last common ancestor of plants, fungi, and animals had at least two homeobox genes. Molecular evidence shows that some limited number of Hox genes have existed in the Cnidaria since before the earliest true Bilatera, making these genes pre-Paleozoic. It is accepted that the three major animal ANTP-class clusters, Hox, ParaHox, and NK (MetaHox), are the result of segmental duplications. A first duplication created MetaHox and ProtoHox, the latter of which later duplicated into Hox and ParaHox. The clusters themselves were created by tandem duplications of a single ANTP-class homeobox gene. Gene duplication followed by neofunctionalization is responsible for the many homeobox genes found in eukaryotes. Comparison of homeobox genes and gene clusters has been used to understand the evolution of genome structure and body morphology throughout metazoans. Types of homeobox genes Hox genes Hox genes are the most commonly known subset of homeobox genes. They are essential metazoan genes that determine the identity of embryonic regions along the anterior-posterior axis. The first vertebrate Hox gene was isolated in Xenopus by Edward De Robertis and colleagues in 1984. The main interest in this set of genes stems from their unique behavior and arrangement in the genome. Hox genes are typically found in an organized cluster. The linear order of Hox genes within a cluster is directly correlated to the order in which they are expressed in both time and space during development. This phenomenon is called colinearity. Mutations in these homeotic genes cause displacement of body segments during embryonic development. This is called ectopia. For example, when one gene is lost the segment develops into a more anterior one, while a mutation that leads to a gain of function causes a segment to develop into a more posterior one. Famous examples are Antennapedia and bithorax in Drosophila, which can cause the development of legs instead of antennae and the development of a duplicated thorax, respectively. In vertebrates, the four paralog clusters are partially redundant in function, but have also acquired several derived functions. For example, HoxA and HoxD specify segment identity along the limb axis. Specific members of the Hox family have been implicated in vascular remodeling, angiogenesis, and disease by orchestrating changes in matrix degradation, integrins, and components of the ECM. HoxA5 is implicated in atherosclerosis. HoxD3 and HoxB3 are proinvasive, angiogenic genes that upregulate b3 and a5 integrins and Efna1 in ECs, respectively. HoxA3 induces endothelial cell (EC) migration by upregulating MMP14 and uPAR. Conversely, HoxD10 and HoxA5 have the opposite effect of suppressing EC migration and angiogenesis, and stabilizing adherens junctions by upregulating TIMP1/downregulating uPAR and MMP14, and by upregulating Tsp2/downregulating VEGFR2, Efna1, Hif1alpha and COX-2, respectively. HoxA5 also upregulates the tumor suppressor p53 and Akt1 by downregulation of PTEN. Suppression of HoxA5 has been shown to attenuate hemangioma growth. HoxA5 has far-reaching effects on gene expression, causing ~300 genes to become upregulated upon its induction in breast cancer cell lines. HoxA5 protein transduction domain overexpression prevents inflammation shown by inhibition of TNFalpha-inducible monocyte binding to HUVECs. 
LIM genes LIM genes (named after the initial letters of the names of three proteins where the characteristic domain was first identified) encode two 60 amino acid cysteine and histidine-rich LIM domains and a homeodomain. The LIM domains function in protein-protein interactions and can bind zinc molecules. LIM domain proteins are found in both the cytosol and the nucleus. They function in cytoskeletal remodeling, at focal adhesion sites, as scaffolds for protein complexes, and as transcription factors. Pax genes Most Pax genes contain a homeobox and a paired domain that also binds DNA to increase binding specificity, though some Pax genes have lost all or part of the homeobox sequence. Pax genes function in embryo segmentation, nervous system development, generation of the frontal eye fields, skeletal development, and formation of face structures. Pax 6 is a master regulator of eye development, such that the gene is necessary for development of the optic vesicle and subsequent eye structures. POU genes Proteins containing a POU region consist of a homeodomain and a separate, structurally homologous POU domain that contains two helix-turn-helix motifs and also binds DNA. The two domains are linked by a flexible loop that is long enough to stretch around the DNA helix, allowing the two domains to bind on opposite sides of the target DNA, collectively covering an eight-base segment with consensus sequence 5'-ATGCAAAT-3'. The individual domains of POU proteins bind DNA only weakly, but have strong sequence-specific affinity when linked. The POU domain itself has significant structural similarity with repressors expressed in bacteriophages, particularly lambda phage. Plant homeobox genes As in animals, the plant homeobox genes code for the typical 60 amino acid long DNA-binding homeodomain or in case of the TALE (three amino acid loop extension) homeobox genes for an atypical homeodomain consisting of 63 amino acids. According to their conserved intron–exon structure and to unique codomain architectures they have been grouped into 14 distinct classes: HD-ZIP I to IV, BEL, KNOX, PLINC, WOX, PHD, DDT, NDX, LD, SAWADEE and PINTOX. Conservation of codomains suggests a common eukaryotic ancestry for TALE and non-TALE homeodomain proteins. Human homeobox genes The Hox genes in humans are organized in four chromosomal clusters: ParaHox genes are analogously found in four areas. They include CDX1, CDX2, CDX4; GSX1, GSX2; and PDX1. Other genes considered Hox-like include EVX1, EVX2; GBX1, GBX2; MEOX1, MEOX2; and MNX1. The NK-like (NKL) genes, some of which are considered "MetaHox", are grouped with Hox-like genes into a large ANTP-like group. Humans have a "distal-less homeobox" family: DLX1, DLX2, DLX3, DLX4, DLX5, and DLX6. Dlx genes are involved in the development of the nervous system and of limbs. They are considered a subset of the NK-like genes. Human TALE (Three Amino acid Loop Extension) homeobox genes for an "atypical" homeodomain consist of 63 rather than 60 amino acids: IRX1, IRX2, IRX3, IRX4, IRX5, IRX6; MEIS1, MEIS2, MEIS3; MKX; PBX1, PBX2, PBX3, PBX4; PKNOX1, PKNOX2; TGIF1, TGIF2, TGIF2LX, TGIF2LY. 
In addition, humans have the following homeobox genes and proteins: LIM-class: ISL1, ISL2; LHX1, LHX2, LHX3, LHX4, LHX5, LHX6, LHX8, LHX9; LMX1A, LMX1B POU-class: HDX; POU1F1; POU2F1; POU2F2; POU2F3; POU3F1; POU3F2; POU3F3; POU3F4; POU4F1; POU4F2; POU4F3; POU5F1; POU5F1P1; POU5F1P4; POU5F2; POU6F1; and POU6F2 CERS-class: LASS2, LASS3, LASS4, LASS5, LASS6; HNF-class: HMBOX1; HNF1A, HNF1B; SINE-class: SIX1, SIX2, SIX3, SIX4, SIX5, SIX6 CUT-class: ONECUT1, ONECUT2, ONECUT3; CUX1, CUX2; SATB1, SATB2; ZF-class: ADNP, ADNP2; TSHZ1, TSHZ2, TSHZ3; ZEB1, ZEB2; ZFHX2, ZFHX3, ZFHX4; ZHX1, HOMEZ; PRD-class: ALX1 (CART1), ALX3, ALX4; ARGFX; ARX; DMBX1; DPRX; DRGX; DUXA, DUXB, DUX (1, 2, 3, 4, 4c, 5); ESX1; GSC, GSC2; HESX1; HOPX; ISX; LEUTX; MIXL1; NOBOX; OTP; OTX1, OTX2, CRX; PAX2, PAX3, PAX4, PAX5, PAX6, PAX7, PAX8; PHOX2A, PHOX2B; PITX1, PITX2, PITX3; PROP1; PRRX1, PRRX2; RAX, RAX2; RHOXF1, RHOXF2/2B; SEBOX; SHOX, SHOX2; TPRX1; UNCX; VSX1, VSX2 NKL-class: BARHL1, BARHL2; BARX1, BARX2; BSX; DBX1, DBX2; EMX1, EMX2; EN1, EN2; HHEX; HLX1; LBX1, LBX2; MSX1, MSX2; NANOG; NOTO; TLX1, TLX2, TLX3; TSHZ1, TSHZ2, TSHZ3; VAX1, VAX2, VENTX; Nkx: NKX2-1, NKX2-4; NKX2-2, NKX2-8; NKX3-1, NKX3-2; NKX2-3, NKX2-5, NKX2-6; HMX1, HMX2, HMX3; NKX6-1; NKX6-2; NKX6-3; See also Evolutionary developmental biology Body plan References Further reading External links The Homeodomain Resource (National Human Genome Research Institute, National Institutes of Health) HomeoDB: a database of homeobox genes diversity. Zhong YF, Butts T, Holland PWH, since 2008. Genes Developmental genetics Protein domains Transcription factors Evolutionary developmental biology
Homeobox
[ "Chemistry", "Biology" ]
3,979
[ "Transcription factors", "Gene expression", "Protein classification", "Signal transduction", "Protein domains", "Induced stem cells" ]
60,290
https://en.wikipedia.org/wiki/Fisting
Fisting—also known as fist fucking (FF), handballing, and brachioproctic or brachiovaginal insertion—is a sexual activity that involves inserting one or more hands into the rectum (anal fisting) or the vagina (vaginal fisting). Fisting may be performed on oneself (self-fisting) or performed on one person by another. People who engage in fisting are often called "fisters". History Fisting's emergence as a sexual practice is commonly attributed to gay male culture. However, its precise origin is disputed. Some claim fisting began in the twentieth century, whereas others assert the practice dates back to the eighteenth century or earlier. Sex educator Robert Morgan Lawrence, for example, claimed the practice dates back thousands of years. Fisting became more visible and popular around the time of the gay liberation movement, during which sex clubs partly or entirely dedicated to the practice emerged. The hanky code allowed fisters to surreptitiously solicit partners in public by wearing red handkerchiefs. By 1973, gay bars, bathhouses and clubs such as the Red Star Saloon in South of Market, San Francisco were openly advertising fisting. The most famous fisting club was the Catacombs, a gay and lesbian S/M leather fisting club in San Francisco that operated from 1975 to 1984. The Handball Express was another notable fisting club. Crisco was commonly used as a lubricant—prominently featured in pornographic films like Erotic Hands (1980)—before specialized products became available. During the AIDS crisis in the 1980s, cities such as San Francisco and New York forcibly closed gay establishments that, accurately or not, were believed to permit unsafe sex, including fisting. Consequently, most venues that permitted fisting, namely gay bathhouses and sex clubs, permanently closed. While the closures were supported by some in the gay community (such as Randy Shilts), others—including the gay press, San Francisco AIDS Foundation, and San Francisco Human Rights Commission—criticized the decision as counterproductive and a violation of civil liberties. Many regarded the closures as a product of anti-gay hysteria as well as anti-sex attitudes and disapproval of gay sexual activity in particular. Meanwhile, safe sex advocates launched a public health campaign to promote the use of gloves. Fisting's visibility and popularity has grown in the twenty-first century, likely due in large part to the internet and greater HIV prevention and treatment options. Fisting pornography is now widely available online and fisters can easily meet via social media platforms, some of which are tailored for fisting (e.g. Recon). Gay bathhouses have made a comeback, with some even advertising fisting parties and live demonstrations. Techniques Hands A common technique, particularly for beginners, is to extend four fingers straight with thumb tucked beneath. Some refer to this technique as the "silent duck" or "duck-billing", because the hand position resembles a duck beak. In more vigorous forms of fisting, often referred to as "punching" or "punchfisting", the hand is partly or fully clenched into a fist before, during, and/or after insertion. Taking two hands at once is referred to as "double fisting". In the case of double fisting, pleasure may be derived more from the stretching of the anus or vagina than from the in-and-out movement of hands. Oftentimes, hands are inserted up to the wrist. 
Some experienced fisters enjoy deeper penetration, sometimes referred to as "depth play", in which hands may be inserted as far as the elbow. Sometimes a penetrative partner may insert their penis or a sex toy at the same time as their hand(s), to masturbate inside the receptive partner's rectum or vagina. While rare, some experienced fisters are capable of taking three hands. Lubricants Lubricants designed specifically for fisting or with fisting as an intended use are widely available in some countries. Sex toys Fisters commonly use dildos to loosen the vagina or anus before or during a fisting session. Terminology Common fisting-related terms include: Fist; fist fuck: "the practice of inserting the hand (and part of the arm) into a partner's anus (or vagina) for the sexual pleasure of all involved." Fister; fist-fucker: a practitioner of fisting; or, depending on context, a penetrative partner in a fisting session involving two or more people. Fistee: a receptive partner in a session involving two or more people. Punch fisting: fisting in which the hand is partly or fully clenched into a fist before, during, and/or after insertion. Double fisting: fisting in which two hands are inserted into the anus or vagina. Rosebud or rectal prolapse: when walls of the rectum have prolapsed to such a degree that they protrude out of the anus. Can be induced by fisting or other ass play (named due to the prolapse's resemblance to a rose flower). Bloom: "bloom" is used as a verb (e.g. "blooming") to describe the act of inducing a rosebud or prolapse, or the moment when a rosebud or prolapse becomes visible. According to The Routledge Dictionary of Modern American Slang and Unconventional English, "fisting" and "fist fuck" entered the American slang lexicon no later than 1969. Risks Fisting can cause laceration or perforation of the vagina, perineum, rectum, or colon, resulting in serious injury and even death. As anal fisting may cause damage to the rectal mucosa, it may increase the likelihood of transmission of sexually transmitted infections. In addition, sexual activities that force air into the vagina can lead to a fatal air embolism, predominantly during pregnancy. A 2021 study found that men who practiced anal fisting were more likely to experience fecal incontinence than men who engaged in other forms of anal sex. 18.1% of men who engaged in fisting experienced some form of fecal incontinence in the last month, compared to 7.2% of men who did not engage in fisting. However, more research is required to establish a causal link, and whether or not this is temporary or has a long-term impact. A 2022 report published in the journal Sex Education found that "clinical and forensic research has over-inflated the ‘dangers’ of fisting without an understanding of contexts in which fisting takes place." It also found that anal and vaginal fisting present "low to no risk" of sexually transmitted infections. Gloves, lubricant, and fingernail trimming can help reduce risk. In popular culture Fisting has been depicted or referenced in art, music, cinema and other forms of popular culture. Examples include: Art The San Francisco South of Market Leather History Alley, opened in 2017, is art that honors leather culture and community members including Steve McEachern (owner of the Catacombs, a now defunct gay and lesbian S/M leather fisting club in the South of Market area of San Francisco) and Bert Herrman (a fisting community leader, author, and publisher). 
Erotic photography by Robert Mapplethorpe, including "Double Fist Fuck" (1977) and "Fist Fuck / Full Body" (1978) and “Helmut and Brooks, NYC, 1978”. Erotica by Tom of Finland, Dom Orejudos, Bill Schmeling, Chuck Arnett, Gengoroh Tagame, and Drubskin Hanky Panky (2020) by Kent Monkman Performance art In Solar Anus (1999), Ron Athey challenged the feminization and sexual roles associated with male penetration by both giving and receiving penetration through self-fisting. In Being Green (2009), Jess Dobkin was fisted while dressed as the hand puppet Kermit the Frog. In 2013, Detroit-based artist Jerry Vile placed a 4-ft. tall can of Crisco in front of the fist-shaped Monument to Joe Louis for "helping to ease the pain of Detroit's bankruptcy." Music The album cover for Slide... Easy In by Rod McKuen (1977) depicts gay porn star Bruno's fist clenching "Disco" shortening. The inscription states "this was a project everyone had to get into; not just on the surface, but deeply—and together. If you don't feel "easy in" then perhaps your threshold of pain or pleasure needs looking into." (The European edition of the album featured a different cover because the original was considered "too outrageous.") "Krisco Kisses" (1984) by Frankie Goes to Hollywood "Stinkfist" (1996) by Tool Music video for "House of Air" (2017) by Brendan Maclean Film Caligula (1979) Cruising (1980) South Park: Bigger, Longer & Uncut (1999) Fifty Shades of Grey (2015) Pornographic films Drive (1974) Candy Strippers (1978) The Other Side of Aspen (1978) Erotic Hands (1980) Tampa Tushy Fest, Part 1 (1999) Television In 1993, Julian Clary joked "I've just been fisting Norman Lamont 'round the corner" during a live broadcast of the British Comedy Awards viewed by 13 million people. In South Park season 5 episode 14 ("Butters' Very Own Episode," December 2001), Butters' father watches the film "Fisting Firemen 9" at a gay adult theater. Judges and contestants on RuPaul's Drag Race and its spin-offs are known to sometimes make fisting-related jokes. Literature All trilogy (1969–1971) by Dirk Vanden Maitreya (1978) by Severo Sarduy The Divine Androgyne According To Purusha (1981) by Peter Christopher Larkin Trust: The Handbook: A Guide to the Sensual and Spiritual Art of Handballing (1991) by Bert Herrman A Hand in the Bush: The Fine Art of Vaginal Fisting (1997) by Deborah Addington Clubs Crisco Disco, discotheque in New York City (1970s-1980s) Catacombs, sex club in San Francisco (1975 to 1981, reopened at another location from 1982 to 1984) Mineshaft, bar and sex club in New York City (1976–1985) Caldron, sex club in San Francisco (1980–1984) Crisco Club, gay bar and sex club in Florence, Italy (est. 1981) La Fistinière, fisting guesthouse in Assigny, France (2007–2018) Le One Way, gay bar and sex club in Paris, France Other In 2011, adult performers Jiz Lee and Courtney Trouble proclaimed October 21 as "International Fisting Day," which Lee described as "a day of international celebration and education" intended in part to combat censorship. In the handkerchief code, the color red signifies interest in fisting. Legal status United Kingdom In the United Kingdom, performing fisting and fisting pornography are both legal. However, until 2019, the Crown Prosecution Service (CPS) considered publication of fisting material to be grounds for prosecution under the Obscene Publications Act 1959 and Section 63 of the Criminal Justice and Immigration Act 2008, the latter of which prohibits so-called "extreme pornography." 
In 1998, the University of Central England was involved in a controversy when a library book by Robert Mapplethorpe was confiscated from a student by the police, who informed the university that two photographs in the book (including one involving fisting) would have to be removed. If the university agreed to the removal (which it did not) the book would be returned. The two photographs, which were deemed possibly prosecutable as obscenity, included “Helmut and Brooks, NYC, 1978”, which shows anal fisting. However, after a delay of about six months, the affair came to an end when Peter Knight, the Vice-Chancellor of the university, was informed that no legal action would be taken. The book was returned to the university library without removal of the photos. In R v Peacock (2012), the jury found Michael Peacock not guilty of breaching the Obscene Publications Act for selling DVDs containing anal fisting. That same year, a jury took less than 90 minutes to acquit Simon Walsh for possession of images of anal fisting that the Crown Prosecution Service (CPS) alleged constituted illegal, extreme pornography. In 2019, the CPS declared it would no longer prosecute pornography produced by consenting adults engaging in legal acts, including fisting. United States Fisting is on the Cambria List, meant to be a list of sex acts that carry risk of prosecution under U.S. obscenity law, created by Paul Cambria in 2001 to help producers avoid obscenity lawsuits. However, as of 2019, the Cambria List is generally regarded as obsolete within the American pornography industry. See also Anal sex Sex toy Sexual intercourse Sexual penetration References Donovan B; Tindall B; Cooper D. Brachioproctic eroticism and transmission of retrovirus associated with acquired immune deficiency syndrome (AIDS). Genitourin Med. 1986 Dec;62(6):390-2. Medical terminology and some information on risks were taken from The Intelligent Man's Guide To Handball, a guide to man-on-man fisting. External links Anal eroticism Sexual acts Vagina Anus Fetish subculture BDSM activities
Fisting
[ "Biology" ]
2,807
[ "Sexual acts", "Behavior", "Sexuality", "Mating" ]
60,343
https://en.wikipedia.org/wiki/Sediment
Sediment is a solid material that is transported to a new location where it is deposited. It occurs naturally and, through the processes of weathering and erosion, is broken down and subsequently transported by the action of wind, water, or ice or by the force of gravity acting on the particles. For example, sand and silt can be carried in suspension in river water and, on reaching the sea bed, deposited by sedimentation; if buried, they may eventually become sandstone and siltstone (sedimentary rocks) through lithification. Sediments are most often transported by water (fluvial processes), but also wind (aeolian processes) and glaciers. Beach sands and river channel deposits are examples of fluvial transport and deposition, though sediment also often settles out of slow-moving or standing water in lakes and oceans. Desert sand dunes and loess are examples of aeolian transport and deposition. Glacial moraine deposits and till are ice-transported sediments. Classification Sediment can be classified based on its grain size, grain shape, and composition. Grain size Sediment size is measured on a log base 2 scale, called the "Phi" scale, which classifies particles by size from "colloid" to "boulder". Shape The shape of particles can be defined in terms of three parameters. The form is the overall shape of the particle, with common descriptions being spherical, platy, or rodlike. The roundness is a measure of how sharp grain corners are. This varies from well-rounded grains with smooth corners and edges to poorly rounded grains with sharp corners and edges. Finally, surface texture describes small-scale features such as scratches, pits, or ridges on the surface of the grain. Form Form (also called sphericity) is determined by measuring the size of the particle on its major axes. William C. Krumbein proposed formulas for converting these numbers to a single measure of form, such as the intercept sphericity ψ = (bc/a²)^(1/3), where a, b, and c are the long, intermediate, and short axis lengths of the particle. The form varies from 1 for a perfectly spherical particle to very small values for a platelike or rodlike particle. An alternate measure, the maximum projection sphericity ψ = (c²/(ab))^(1/3), was proposed by Sneed and Folk; it, again, varies from 0 to 1 with increasing sphericity (a short computational sketch of these measures is given at the end of this article). Roundness Roundness describes how sharp the edges and corners of a particle are. Complex mathematical formulas have been devised for its precise measurement, but these are difficult to apply, and most geologists estimate roundness from comparison charts. Common descriptive terms range from very angular to angular to subangular to subrounded to rounded to very rounded, with increasing degree of roundness. Surface texture Surface texture describes the small-scale features of a grain, such as pits, fractures, ridges, and scratches. These are most commonly evaluated on quartz grains, because these retain their surface markings for long periods of time. Surface texture varies from polished to frosted, and can reveal the history of transport of the grain; for example, frosted grains are particularly characteristic of aeolian sediments, transported by wind. Evaluation of these features often requires the use of a scanning electron microscope. Composition Composition of sediment can be measured in terms of: Parent rock lithology Mineral composition Chemical make-up This leads to an ambiguity in which clay can be used as both a size-range and a composition (see clay minerals). Sediment transport Sediment is transported based on the strength of the flow that carries it and its own size, volume, density, and shape.
Stronger flows will increase the lift and drag on the particle, causing it to rise, while larger or denser particles will be more likely to fall through the flow. Fluvial processes Aeolian processes: wind Wind results in the transportation of fine sediment and the formation of sand dune fields and soils from airborne dust. Glacial processes Glaciers carry a wide range of sediment sizes, and deposit it in moraines. Mass balance The overall balance between sediment in transport and sediment being deposited on the bed is given by the Exner equation. This expression states that the rate of increase in bed elevation due to deposition is proportional to the amount of sediment that falls out of the flow. This equation is important in that changes in the power of the flow change the ability of the flow to carry sediment, and this is reflected in the patterns of erosion and deposition observed throughout a stream. This can be localized, and simply due to small obstacles; examples are scour holes behind boulders, where flow accelerates, and deposition on the inside of meander bends. Erosion and deposition can also be regional; erosion can occur due to dam removal and base level fall. Deposition can occur due to dam emplacement that causes the river to pool and deposit its entire load, or due to base level rise. Shores and shallow seas Seas, oceans, and lakes accumulate sediment over time. The sediment can consist of terrigenous material, which originates on land, but may be deposited in either terrestrial, marine, or lacustrine (lake) environments, or of sediments (often biological) originating in the body of water. Terrigenous material is often supplied by nearby rivers and streams or reworked marine sediment (e.g. sand). In the mid-ocean, the exoskeletons of dead organisms are primarily responsible for sediment accumulation. Deposited sediments are the source of sedimentary rocks, which can contain fossils of the inhabitants of the body of water that were, upon death, covered by accumulating sediment. Lake bed sediments that have not solidified into rock can be used to determine past climatic conditions. Key marine depositional environments The major areas for deposition of sediments in the marine environment include: Littoral sands (e.g. beach sands, runoff river sands, coastal bars and spits, largely clastic with little faunal content) The continental shelf (silty clays, increasing marine faunal content). The shelf margin (low terrigenous supply, mostly calcareous faunal skeletons) The shelf slope (much more fine-grained silts and clays) Beds of estuaries with the resultant deposits called "bay mud". One other depositional environment which is a mixture of fluvial and marine is the turbidite system, which is a major source of sediment to the deep sedimentary and abyssal basins as well as the deep oceanic trenches. Any depression in a marine environment where sediments accumulate over time is known as a sediment trap. The null point theory explains how sediment deposition undergoes a hydrodynamic sorting process within the marine environment leading to a seaward fining of sediment grain size. Environmental issues Erosion and agricultural sediment delivery to rivers One cause of high sediment loads is slash and burn and shifting cultivation of tropical forests. When the ground surface is stripped of vegetation and then seared of all living organisms, the upper soils are vulnerable to both wind and water erosion. In a number of regions of the earth, entire sectors of a country have become erodible. 
For example, on the Madagascar high central plateau, which constitutes approximately ten percent of that country's land area, most of the land area is devegetated, and distinctive gullies known locally as lavakas have eroded into the underlying soil. These are typically wide, long and deep. Some areas have as many as 150 lavakas per square kilometer, and lavakas may account for 84% of all sediments carried off by rivers. This siltation results in discoloration of rivers to a dark red-brown color and leads to fish kills. In addition, sedimentation of river basins implies sediment management and siltation costs. The cost of removing an estimated 135 million m3 of accumulated sediments due to water erosion alone likely exceeds 2.3 billion euro (€) annually in the EU and UK, with large regional differences between countries. Erosion is also an issue in areas of modern farming, where the removal of native vegetation for the cultivation and harvesting of a single type of crop has left the soil unsupported. Many of these regions are near rivers and drainages. Loss of soil due to erosion removes useful farmland, adds to sediment loads, and can help transport anthropogenic fertilizers into the river system, which leads to eutrophication. The Sediment Delivery Ratio (SDR) is the fraction of gross erosion (interrill, rill, gully and stream erosion) that is expected to be delivered to the outlet of the river. Sediment transfer and deposition can be modelled with sediment distribution models such as WaTEM/SEDEM. In Europe, according to WaTEM/SEDEM model estimates, the Sediment Delivery Ratio is about 15%. Coastal development and sedimentation near coral reefs Watershed development near coral reefs is a primary cause of sediment-related coral stress. The stripping of natural vegetation in the watershed for development exposes soil to increased wind and rainfall and, as a result, can cause exposed sediment to become more susceptible to erosion and delivery to the marine environment during rainfall events. Sediment can negatively affect corals in many ways, such as by physically smothering them, abrading their surfaces, causing corals to expend energy during sediment removal, and causing algal blooms that can ultimately lead to less space on the seafloor where juvenile corals (polyps) can settle. When sediments are introduced into the coastal regions of the ocean, the proportion of land-derived, marine, and organic sediment that characterizes the seafloor near sources of sediment output is altered. In addition, because the source of sediment (land, ocean, or organic production) is often correlated with the average coarseness of the sediment grains in an area, the grain size distribution of sediment will shift according to the relative input of land-derived (typically fine), marine (typically coarse), and organically derived (variable with age) sediment. These alterations in marine sediment characterize the amount of sediment suspended in the water column at any given time and the degree of sediment-related coral stress. Biological considerations In July 2020, marine biologists reported that mainly aerobic microorganisms, in "quasi-suspended animation", were found in organically poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found. See also References Further reading Sedimentology Environmental soil science Petrology
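The grain-size and form measures described in the Classification section above lend themselves to a short computational illustration. The sketch below, in Python, computes the Krumbein phi value of a grain from its diameter together with the two sphericity measures quoted above; it is a minimal example, and the sample axis lengths are made up for illustration.

```python
import math

def phi_scale(diameter_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(diameter in millimetres)."""
    return -math.log2(diameter_mm)

def krumbein_sphericity(a: float, b: float, c: float) -> float:
    """Intercept sphericity psi = (b*c / a**2) ** (1/3), with a >= b >= c."""
    return (b * c / a**2) ** (1.0 / 3.0)

def sneed_folk_sphericity(a: float, b: float, c: float) -> float:
    """Maximum projection sphericity psi = (c**2 / (a*b)) ** (1/3)."""
    return (c**2 / (a * b)) ** (1.0 / 3.0)

# A 2 mm grain sits at phi = -1, near the sand/gravel boundary;
# a perfect sphere (a = b = c) scores 1 under both sphericity measures,
# while a flat, platy grain scores much lower.
print(phi_scale(2.0))                        # -1.0
print(krumbein_sphericity(1.0, 1.0, 1.0))    # 1.0
print(sneed_folk_sphericity(4.0, 2.0, 0.5))  # well below 1
```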
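The Exner mass balance mentioned under "Mass balance" above can likewise be sketched numerically. The fragment below assumes the common one-dimensional form of the Exner equation, d(eta)/dt = -1/(1 - porosity) * d(q_s)/dx, where eta is the bed elevation and q_s the volumetric sediment flux per unit width; the discretisation, porosity value, and flux numbers are placeholders chosen only to show the sign convention (flux decreasing downstream produces deposition).

```python
def exner_update(eta, q_s, dx, dt, porosity=0.4):
    """Advance bed elevation eta (m) by one time step using a simple
    upwind discretisation of the 1-D Exner equation:
        d(eta)/dt = -1/(1 - porosity) * d(q_s)/dx
    q_s is the volumetric sediment flux per unit width (m^2/s)."""
    new_eta = list(eta)
    for i in range(1, len(eta)):
        dq_dx = (q_s[i] - q_s[i - 1]) / dx
        new_eta[i] = eta[i] - dt * dq_dx / (1.0 - porosity)
    return new_eta

# Flux decreasing downstream means sediment falls out of the flow,
# so the bed elevation increases (deposition).
eta = [0.0] * 5
q_s = [1e-4, 8e-5, 6e-5, 4e-5, 2e-5]   # hypothetical flux values
print(exner_update(eta, q_s, dx=10.0, dt=3600.0))
```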
Sediment
[ "Environmental_science" ]
2,128
[ "Environmental soil science" ]
60,359
https://en.wikipedia.org/wiki/Benthos
Benthos, also known as benthon, is the community of organisms that live on, in, or near the bottom of a sea, river, lake, or stream, also known as the benthic zone. This community lives in or near marine or freshwater sedimentary environments, from tidal pools along the foreshore, out to the continental shelf, and then down to the abyssal depths. Many organisms adapted to deep-water pressure cannot survive in the upper parts of the water column. The pressure difference can be very significant (approximately one atmosphere for every 10 metres of water depth). Because light is absorbed before it can reach deep ocean water, the energy source for deep benthic ecosystems is often organic matter from higher up in the water column that drifts down to the depths. This dead and decaying matter sustains the benthic food chain; most organisms in the benthic zone are scavengers or detritivores. The term benthos, coined by Haeckel in 1891, comes from the Greek noun bénthos, meaning 'depth of the sea'. Benthos is used in freshwater biology to refer to organisms at the bottom of freshwater bodies of water, such as lakes, rivers, and streams. There is also a redundant synonym, benton. Overview Compared to the relatively featureless pelagic zone, the benthic zone offers physically diverse habitats. There is a huge range in how much light and warmth is available, and in the depth of water or extent of intertidal immersion. The seafloor varies widely in the types of sediment it offers. Burrowing animals can find protection and food in soft, loose sediments such as mud, clay and sand. Sessile species such as oysters and barnacles can attach themselves securely to hard, rocky substrates. As adults they can remain at the same site, shaping depressions and crevices where mobile animals find refuge. This greater diversity in benthic habitats has resulted in a higher diversity of benthic species. The number of benthic animal species exceeds one million. This far exceeds the number of pelagic animal species (about 5000 larger zooplankton species, 22,000 pelagic fish species and 110 marine mammal species). By size Macrobenthos Macrobenthos, with the prefix macro- (from the Greek makrós, 'large'), comprises the larger benthic organisms, visible to the naked eye and greater than about 1 mm in size. In shallow waters, seagrass meadows, coral reefs and kelp forests provide particularly rich habitats for macrobenthos. Some examples are polychaete worms, bivalves, echinoderms, sea anemones, corals, sponges, sea squirts, turbellarians and larger crustaceans such as crabs, lobsters and cumaceans. Meiobenthos Meiobenthos, with the prefix meio- (from the Greek meíōn, 'less'), comprises tiny benthic organisms that are less than about 1 mm but greater than about 0.1 mm in size. Some examples are nematodes, foraminiferans, tardigrades, gastrotriches and smaller crustaceans such as copepods and ostracodes. Microbenthos Microbenthos, with the prefix micro- (from the Greek mikrós, 'small'), comprises microscopic benthic organisms that are less than about 0.1 mm in size (the three size classes are illustrated in the short sketch at the end of this article). Some examples are bacteria, diatoms, ciliates, amoeba, flagellates. Marine microbenthos are microorganisms that live in the benthic zone of the ocean – that live near or on the seafloor, or within or on surface seafloor sediments. Microbenthos are found everywhere on or about the seafloor of continental shelves, as well as in deeper waters, with greater diversity in or on seafloor sediments. In photic zones benthic diatoms dominate as photosynthetic organisms. In intertidal zones changing tides strongly control opportunities for microbenthos.
Both foraminifera and diatoms have planktonic and benthic forms, that is, they can drift in the water column or live on sediment at the bottom of the ocean. Regardless of form, their shells sink to the seafloor after they die. These shells are widely used as climate proxies. The chemical composition of the shells is a consequence of the chemical composition of the ocean at the time the shells were formed. Past water temperatures can also be inferred from the ratios of stable oxygen isotopes in the shells, since lighter isotopes evaporate more readily in warmer water, leaving the heavier isotopes in the shells. Information about past climates can be inferred further from the abundance of forams and diatoms, since they tend to be more abundant in warm water. The sudden extinction event which killed the dinosaurs 66 million years ago also rendered extinct three-quarters of all other animal and plant species. However, deep-sea benthic forams flourished in the aftermath. In 2020 it was reported that researchers had examined the chemical composition of thousands of samples of these benthic forams and used their findings to build the most detailed climate record of Earth yet produced. Some endoliths have extremely long lives. In 2013 researchers reported evidence of endoliths in the ocean floor, perhaps millions of years old, with a generation time of 10,000 years. These are slowly metabolizing and not in a dormant state. Some Actinomycetota found in Siberia are estimated to be half a million years old. By type Zoobenthos Zoobenthos, with the prefix zoo- (from the Greek zōion, 'animal'): animals belonging to the benthos. Examples include polychaete worms, starfish and anemones. Phytobenthos Phytobenthos, with the prefix phyto- (from the Greek phytón, 'plant'): plants belonging to the benthos, mainly benthic diatoms and macroalgae (seaweed). By location Endobenthos Endobenthos (or endobenthic), with the prefix endo- (from the Greek éndon, 'within'): lives buried, or burrowing in the sediment, often in the oxygenated top layer, e.g., a sea pen or a sand dollar. Epibenthos Epibenthos (or epibenthic), with the prefix epi- (from the Greek epí, 'on, upon'): lives on top of the sediments, e.g., a sea cucumber or a sea snail. Hyperbenthos Hyperbenthos (or hyperbenthic), with the prefix hyper- (from the Greek hypér, 'over, above'): lives just above the sediment, e.g., a rock cod. Food sources The main food sources for the benthos are phytoplankton and organic detrital matter. In coastal locations, organic run-off from land provides an additional food source. Meiofauna and bacteria consume and recycle organic matter in the sediments, playing an important role in returning nitrate and phosphate to the pelagic zone. The depth of water, temperature and salinity, and type of local substrate all affect what benthos is present. In coastal waters and other places where light reaches the bottom, benthic photosynthesizing diatoms can proliferate. Filter feeders, such as sponges and bivalves, dominate hard, sandy bottoms. Deposit feeders, such as polychaetes, populate softer bottoms. Fish, such as dragonets, as well as sea stars, snails, cephalopods, and crustaceans are important predators and scavengers. Benthic organisms, such as sea stars, oysters, clams, sea cucumbers, brittle stars and sea anemones, play an important role as a food source for fish, such as the California sheephead, and for humans. Ecological role Benthos as bioindicators Benthic macro-invertebrates play a critical role in aquatic ecosystems. These organisms can be used to indicate the presence, concentration, and effect of water pollutants in the aquatic environment.
Some water contaminants—such as nutrients, chemicals from surface runoff, and metals—settle in the sediment of river beds, where many benthos reside. Benthos are highly sensitive to contamination, so their close proximity to high pollutant concentrations make these organisms ideal for studying water contamination. Benthos can be used as bioindicators of water pollution through ecological population assessments or through analyzing biomarkers. In ecological population assessments, a relative value of water pollution can be detected. Observing the number and diversity of macro-invertebrates in a waterbody can indicate the pollution level. In highly contaminated waters, a reduced number of organisms and only pollution-tolerant species will be found. In biomarker assessments, quantitative data can be collected on the amount of and direct effect of specific pollutants in a waterbody. The biochemical response of macro-invertebrates' internal tissues can be studied extensively in the laboratory. The concentration of a chemical can cause many changes, including changing feeding behaviors, inflammation, and genetic damage, effects that can be detected outside of the stream environment. Biomarker analysis is important for mitigating the negative impacts of water pollution because it can detect water pollution before it has a noticeable ecological effect on benthos populations. Carbon processing Organic matter produced in the sunlit layer of the ocean and delivered to the sediments is either consumed by organisms or buried. The organic matter consumed by organisms is used to synthesize biomass (i.e. growth) converted to carbon dioxide through respiration, or returned to the sediment as faeces. This cycle can occur many times before either all organic matter is used up or eventually buried. This process is known as the biological pump. In the long-term or at steady-state, i.e., the biomass of benthic organisms does not change, the benthic community can be considered a black box diverting organic matter into either metabolites or the geosphere (burial). The macrobenthos also indirectly impacts carbon cycling on the seafloor through bioturbation. Threats Benthos are negatively impacted by fishing, pollution and litter, deep-sea mining, oil and gas activities, tourism, shipping, invasive species, climate change (and its impacts such as ocean acidification, ocean warming and changes to ocean circulation) and construction such as coastal development, undersea cables, and wind farm construction. See also Aphotic zone Benthic fish Benthopelagic fish Bioirrigation Bottom feeder Deep sea Deep sea communities Deep sea mining Demersal fish Epibenthic sled Intertidal ecology Littoral Neritic zone Nekton Plankton Pelagic zone Photic zone Profundal zone Sediment Profile Imagery (SPI) Stream bed Notes References "Benthos". (2008) Encyclopædia Britannica. (Retrieved May 15, 2008, from Encyclopædia Britannica Online.) Ryan, Paddy (2007) "Benthic communities" Te Ara - the Encyclopædia of New Zealand, updated 21 September 2007. Yip, Maricela and Madl, Pierre (1999) "Benthos" University of Salzburg. External links "Benthos" Marine organisms Ecology terminology Oceanographical terminology
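As a small illustration of the size classes defined in the "By size" section above, the following sketch assigns an organism to macro-, meio- or microbenthos from its body size. The thresholds follow the approximate 1 mm and 0.1 mm boundaries given in the text; the handling of the boundary values themselves is an arbitrary choice, since the article only says "about".

```python
def benthos_size_class(size_mm: float) -> str:
    """Classify a benthic organism by body size using the approximate
    boundaries given above (about 1 mm and about 0.1 mm)."""
    if size_mm > 1.0:
        return "macrobenthos"
    elif size_mm > 0.1:
        return "meiobenthos"
    else:
        return "microbenthos"

print(benthos_size_class(30.0))    # e.g. a crab      -> macrobenthos
print(benthos_size_class(0.5))     # e.g. a copepod   -> meiobenthos
print(benthos_size_class(0.002))   # e.g. a bacterium -> microbenthos
```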
Benthos
[ "Biology" ]
2,248
[ "Ecology terminology" ]
60,360
https://en.wikipedia.org/wiki/Anapsid
An anapsid is an amniote whose skull lacks one or more skull openings (fenestra, or fossae) near the temples. Traditionally, the Anapsida are considered the most primitive subclass of amniotes, the ancestral stock from which Synapsida and Diapsida evolved, making anapsids paraphyletic. It is, however, doubtful that all anapsids lack temporal fenestra as a primitive trait, and that all the groups traditionally seen as anapsids truly lacked fenestra. Anapsids and the turtles While "anapsid reptiles" or "Anapsida" were traditionally spoken of as if they were a monophyletic group, it has been suggested that several groups of reptiles that had anapsid skulls might be only distantly related. Scientists still debate the exact relationship between the basal (original) reptiles that first appeared in the late Carboniferous, the various Permian reptiles that had anapsid skulls, and the Testudines (turtles, tortoises, and terrapins). However, it was later suggested that the anapsid-like turtle skull is due to reversion rather than to anapsid descent. The majority of modern paleontologists believe that the Testudines are descended from diapsid reptiles that lost their temporal fenestrae. More recent morphological phylogenetic studies with this in mind placed turtles firmly within diapsids, or, more commonly, within Archelosauria. Phylogenetic position of turtles All molecular studies have strongly upheld the placement of turtles within diapsids; some place turtles within Archosauria, or, more commonly, as a sister group to extant archosaurs. One molecular study, published in 2012, suggests that turtles are lepidosauromorph diapsids, most closely related to the lepidosaurs (lizards, snakes, and tuataras). However, in a later paper from the same authors, published in 2014, based on more extensive data, the archosauromorph hypothesis is supported. Reanalysis of prior phylogenies suggests that they classified turtles as anapsids both because they assumed this classification (most of them were studying what sort of anapsid turtles are) and because they did not sample fossil and extant taxa broadly enough for constructing the cladogram. Testudines is suggested to have diverged from other diapsids between 200 and 279 million years ago, though the debate is far from settled. Although procolophonids managed to survive into the Triassic, most of the other reptiles with anapsid skulls, including the millerettids, nycteroleterids, and pareiasaurs, became extinct in the Late Permian period by the Permian-Triassic extinction event. Despite the molecular studies, there is evidence that contradicts their classification as diapsids. All known diapsids excrete uric acid as nitrogenous waste (uricotelic), and there is no known case of a diapsid reverting to the excretion of urea (ureotelism), even when they return to semi-aquatic lifestyles. Crocodilians, for example, are still uricotelic, although they are also partly ammonotelic, meaning they excrete some of their waste as ammonia. Ureotelism appears to be the ancestral condition among primitive amniotes, and it is retained by mammals, which likely inherited ureotelism from their synapsid and therapsid ancestors. Ureotelism therefore would suggest that turtles were more likely anapsids than diapsids. The only known uricotelic chelonian is the desert tortoise, which likely evolved it recently as adaptation to desert habitats. 
Some desert mammals are also uricotelic, so since practically all known mammals are ureotelic, uricotelic adaptation is a likely result of convergence among desert species. Therefore, turtles would have to be the only known case of a uricotelic reptile reverting to ureotelism. Anapsida in modern taxonomy Anapsida is still sporadically recognized as a valid group, but is not favoured by current workers. Anapsids in the traditional meaning of the word are not a clade, but rather a paraphyletic group composed of all the early reptiles retaining the primitive skull morphology, grouped together by the absence of temporal openings. Gauthier, Kluge and Rowe (1988) attempted to redefine Anapsida so it would be monophyletic, defining it as the clade containing "extant turtles and all other extinct taxa that are more closely related to them than they are to other reptiles". This definition explicitly includes turtles in Anapsida; because the phylogenetic placement of turtles within Amniota is very uncertain, it is unclear what taxa, other than turtles themselves, would be included in such defined Anapsida, and whether its content would be similar to the Anapsida of tradition. Indeed, Gauthier, Kluge and Rowe (1988) themselves included only turtles and Captorhinidae in their Anapsida, while excluding the majority of anapsids in the traditional sense of the word from it. Temporal openings in traditional anapsids Tsuji and Müller (2009) noted that the name Anapsida implies a morphology (lack of temporal openings) that is in fact absent in the skeletons of a number of taxa traditionally included in the group. A temporal opening in the skull roof behind each eye, similar to that present in the skulls of synapsids, has been discovered in the skulls of a number of members of Parareptilia (the group containing most of reptiles traditionally referred to as anapsids), including lanthanosuchoids, millerettids, bolosaurids, some nycteroleterids, some procolophonoids and at least some mesosaurs. The presence of temporal openings in the skulls of these taxa makes it uncertain whether the ancestral reptiles had an anapsid-like skull as traditionally assumed or a synapsid-like skull instead. See also Euryapsida References External links Introduction to Anapsida from UCMP Reptile taxonomy Paraphyletic groups
Anapsid
[ "Biology" ]
1,283
[ "Phylogenetics", "Paraphyletic groups" ]
60,373
https://en.wikipedia.org/wiki/8-bit%20clean
8-bit clean is an attribute of computer systems, communication channels, and other devices and software that process 8-bit character encodings without treating any byte as an in-band control code. History Until the early 1990s, many programs and data transmission channels were character-oriented and treated some characters, e.g., ETX, as control characters. Others assumed a stream of seven-bit characters, with values between 0 and 127; for example, the ASCII standard used only seven bits per character, avoiding an 8-bit representation in order to save on data transmission costs. On computers and data links using 8-bit bytes, this left the top bit of each byte free for use as a parity, flag bit, or metadata control bit. 7-bit systems and data links are unable to directly handle more complex character codes which are commonplace in non-English-speaking countries with larger alphabets. Binary files of octets cannot be transmitted through 7-bit data channels directly. To work around this, binary-to-text encodings have been devised which use only 7-bit ASCII characters. Some of these encodings are uuencoding, Ascii85, SREC, BinHex, kermit and MIME's Base64. EBCDIC-based systems cannot handle all characters used in UUencoded data; however, the Base64 encoding does not have this problem (see the short example at the end of this article). SMTP and NNTP 8-bit cleanness Historically, various media were used to transfer messages, some of them supporting only 7-bit data, so in the 20th century an 8-bit message had a high chance of being garbled during transmission. Some implementations, however, ignored the formal discouragement of 8-bit data and allowed bytes with the high bit set to pass through. Such implementations are said to be 8-bit clean. In general, a communications protocol is said to be 8-bit clean if it correctly passes the high bit of each byte through the communication process. Many early communications protocol standards, such as the original specifications for SMTP and NNTP, were designed to work over such "7-bit" communication links. They specifically require use of the ASCII character set "transmitted as an 8-bit byte with the high-order bit cleared to zero", and some of them explicitly restrict all data to 7-bit characters. For the first few decades of email networks (1971 to the early 1990s), most email messages were plain text in the 7-bit US-ASCII character set. The definition of SMTP, like its predecessor, limits Internet Mail to lines of at most 1000 characters of 7-bit US-ASCII. Later, the format of email messages was redefined in order to support messages that are not entirely US-ASCII text (text messages in character sets other than US-ASCII, and non-text messages, such as audio and images). The header field Content-Transfer-Encoding=binary requires an 8-bit clean transport. A later revision of the NNTP specification states that "NNTP operates over any reliable bi-directional 8-bit-wide data stream channel" and changes the character set for commands to UTF-8; however, the accompanying article-format standard still limits the character set to ASCII, with MIME encoding used for non-ASCII data. The Internet community generally adds features by extension, allowing communication in both directions between upgraded machines and not-yet-upgraded machines, rather than declaring formerly standards-compliant legacy software to be "broken" and insisting that all software worldwide be upgraded to the latest standard. The recommended way to take advantage of 8-bit-clean links between machines is to use the ESMTP 8BITMIME extension for message bodies and the SMTP SMTPUTF8 extension for message headers.
Despite this, some mail transfer agents, notably Exim and qmail, relay mail to servers that do not advertise 8BITMIME without performing the conversion to 7-bit MIME (typically quoted-printable, "Q-P conversion") that the relevant standards require. This "just-send-8" attitude does not, in fact, cause problems in practice, because virtually all modern email servers are 8-bit clean. See also 32-bit clean Notes References Character encoding
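As a concrete illustration of the binary-to-text workaround described above, the short Python example below shows Base64 (one of the encodings listed) turning bytes with the high bit set into pure 7-bit ASCII that survives a 7-bit channel; it is a generic sketch and is not tied to any particular mail transfer agent.

```python
import base64

# Bytes with the high bit set (values above 0x7F) would be at risk on a
# 7-bit-only channel, but their Base64 encoding uses only 7-bit ASCII.
payload = bytes([0x48, 0x65, 0x6C, 0x6C, 0x6F, 0xC3, 0xA9, 0xFF])
encoded = base64.b64encode(payload)

print(encoded)                               # b'SGVsbG/Dqf8='
print(all(b < 0x80 for b in encoded))        # True: safe for a 7-bit link
print(base64.b64decode(encoded) == payload)  # True: round-trips exactly
```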
8-bit clean
[ "Technology" ]
887
[ "Natural language and computing", "Character encoding" ]
14,714,930
https://en.wikipedia.org/wiki/Halo%20occupation%20distribution
The halo occupation distribution (HOD) is a parameter of the halo model of galaxy clustering. The halo model provides one view of the large scale structure of the universe as clumps of dark matter, while the HOD provides a view of how galactic matter is distributed within each of the dark matter clumps. The HOD is used to describe three related properties of the halo model: the probability distribution relating the mass of a dark matter halo to the number of galaxies that form within that halo; the distribution in space of galactic matter within a dark matter halo; the distribution of velocities of galactic matter relative to dark matter within a dark matter halo. See also Dark matter Large-scale structure of the cosmos Galaxy formation and evolution References Galaxies Large-scale structure of the cosmos
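The probability distribution relating halo mass to galaxy number is usually written down as a parametric mean occupation function. The sketch below implements one commonly used functional form from the HOD literature (a smoothed step for central galaxies plus a power law for satellites); this particular parameterization and the numerical parameter values are illustrative assumptions, not something specified in the text above.

```python
import math

def mean_centrals(m_halo, m_min=1e12, sigma_logm=0.2):
    """Mean number of central galaxies: a smoothed step in halo mass."""
    return 0.5 * (1.0 + math.erf((math.log10(m_halo) - math.log10(m_min)) / sigma_logm))

def mean_satellites(m_halo, m_cut=2e12, m_1=2e13, alpha=1.0):
    """Mean number of satellite galaxies: a power law above a cutoff mass."""
    if m_halo <= m_cut:
        return 0.0
    return mean_centrals(m_halo) * ((m_halo - m_cut) / m_1) ** alpha

# Mean occupation for a cluster-sized halo (masses in solar masses):
m = 1e14
print(mean_centrals(m), mean_satellites(m))
```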
Halo occupation distribution
[ "Astronomy" ]
161
[ "Galaxies", "Astronomical objects" ]
14,715,074
https://en.wikipedia.org/wiki/Metallised%20film
Metallised films (or metallized films) are polymer films coated with a thin layer of metal, usually aluminium. They offer the glossy metallic appearance of an aluminium foil at a reduced weight and cost. Metallised films are widely used for decorative purposes and food packaging, and also for specialty applications including insulation and electronics. Manufacture Metallisation is performed using a physical vapor deposition process. Aluminium is the most common metal used for deposition, but other metals such as nickel and chromium are also used. The metal is heated and evaporated under vacuum. This condenses on the cold polymer film, which is unwound near the metal vapour source. This coating is much thinner than a metal foil could be made, in the range of 0.5 micrometres. This coating will not fade or discolour over time. While oriented polypropylene and polyethylene terephthalate (PET) are the most common films used for metallisation, nylon, polyethylene and cast polypropylene are also used. Properties Metallised films have a reflective silvery surface similar to aluminium foil and are highly flammable. The coating also reduces the permeability of the film to light, water and oxygen. The properties of the film remain, such as higher toughness, the ability to be heat sealed, and a lower density at a lower cost than an aluminium foil. This gives metallised films some advantages over aluminium foil and aluminium foil laminates. It was once thought that metallised films would become a replacement for aluminium foil laminates, but current films still cannot match the barrier properties of foil. Some very high barrier metallised films are available using EVOH, but are not yet cost effective against foil laminates. Uses Decoration Metallised films were first used for decorative purposes as Christmas tinsel, and continue to be used for items such as wrappers, ribbons, and glitter. Metallic helium-filled novelty balloons given as gifts are made of metallised BoPET and often called Mylar balloons commercially. Packaging Both metallised PET and PP have replaced foil laminates for products such as snack foods, coffee and candy, which do not require the superior barrier of aluminium foil. Metallised nylon and polyethylene are used in the meat export market. The controlled permeation extends shelf life. Metallised films are used as a susceptor for cooking in microwave ovens. An example is a microwave popcorn bag. Many food items are also packaged using metallised films for appearance only, as these produce a package with greater sparkle when compared to competing products that use printed paper or polymer films. Insulation Metallised PET films are used in NASA spacesuits to reflect heat radiation, keeping astronauts warm, and in ″proximity suits″ used by firefighters for protection from the high amount of heat released from fuel fires. Aluminized emergency blankets ("space blankets") are also used to conserve a shock victim's body heat. MPET has been used as an antistatic container for other heat and sound insulating materials used in aircraft, to prevent the insulation from leaking into the passenger cabin, but is not itself the insulator in that use. Burning MPET insulation was identified as a cause of the crash of Swissair Flight 111 in 1998 that killed 229 people, leading to new recommendations on its use in airliners. Electronics Metallised films are used as a dielectric in the manufacture of a type of capacitor used in electronic circuits, and as a material in some types of antistatic bags. 
See also Carbon dioxide transmission rate Cutting stock problem Insulated shipping container Moisture vapour transmission rate Permeation Popcorn bag Oxygen transmission rate Biaxially-oriented PET film Sputter deposition Vacuum deposition References Further reading Soroka, W, "Fundamentals of Packaging Technology", IoPP, 2002, Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, Packaging materials Plastics
Metallised film
[ "Physics" ]
820
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
14,715,254
https://en.wikipedia.org/wiki/Thayer%E2%80%93Martin%20agar
Thayer–Martin agar (or Thayer–Martin medium, or VPN agar) is a Mueller–Hinton agar with 5% chocolate sheep blood and antibiotics. It is used for culturing and primarily isolating pathogenic Neisseria bacteria, including Neisseria gonorrhoeae and Neisseria meningitidis, as the medium inhibits the growth of most other microorganisms. When growing Neisseria meningitidis, one usually starts with a normally sterile body fluid (blood or CSF), so a plain chocolate agar is used. Thayer–Martin agar was initially developed in 1964, with an improved formulation published in 1966. Components It usually contains the following combination of antibiotics, which make up the VPN acronym: Vancomycin, which is able to kill most Gram-positive organisms, although some Gram-positive organisms such as Lactobacillus and Pediococcus are intrinsically resistant Polymyxin, also known as colistin, which is added to kill most Gram-negative organisms except Neisseria, although some other Gram-negative organisms such as Legionella are also resistant Nystatin, which can kill most fungi Trimethoprim inhibits swarming of Proteus spp Clinical implications A negative culture on Thayer–Martin in a patient exhibiting symptoms of pelvic inflammatory disease most likely indicates an infection with Chlamydia trachomatis. References Microbiological media
Thayer–Martin agar
[ "Biology" ]
299
[ "Microbiological media", "Microbiology equipment" ]
14,716,135
https://en.wikipedia.org/wiki/Ubiquitous%20robot
Ubiquitous robot is a term used by analogy with ubiquitous computing; it refers to robotic systems and software for "integrating robotic technologies with technologies from the fields of ubiquitous and pervasive computing, sensor networks, and ambient intelligence". The emergence of mobile phones, wearable computers and ubiquitous computing makes it likely that human beings will live in a ubiquitous world in which all devices are fully networked. Such a ubiquitous space, made possible by developments in computer and network technology, provides the motivation to offer services through any IT device, at any place and time, via user interaction and seamless applications. This shift has hastened the ubiquitous revolution, which has further manifested itself in the new multidisciplinary research area of ubiquitous robotics. It marks the third generation of robotics, following the first generation of industrial robots and the second generation of personal robots. A ubiquitous robot (Ubibot) incorporates three components: a virtual software robot or avatar, a real-world mobile robot, and an embedded sensor system in the surroundings. The software robot, acting within a virtual world, can serve as a brain that controls the real-world robot and interacts with human beings. Researchers at KAIST, Korea, describe these three components as a Sobot (software robot), Mobot (mobile robot), and Embot (embedded robot); a short illustrative sketch of this three-part structure is given below. See also Cloud robotics Internet of things Related Technical literature Tae-Hun Kim, Seung-Hwan Choi, and Jong-Hwan Kim. "Incorporation of a Software Robot and a Mobile Robot Using a Middle Layer." IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, Vol. 37, No. 6, Nov. 2007. Jong-Hwan Kim et al., "Ubiquitous Robot: A New Paradigm for Integrated Services", in Proc. of IEEE Int'l Conf. on Robotics and Automation, Rome, Italy, April 2007. Jong-Hwan Kim, "Ubiquitous Robot: Recent Progress and Development", (Keynote Speech Paper) in SICE-ICASE International Joint Conference 2006, Busan, Korea, pp. I-25 - I-30, Oct. 2006. Jong-Hwan Kim et al., "The 3rd Generation of Robotics: Ubiquitous Robot", (Keynote Speech Paper) in Proc. of the International Conference on Autonomous Robots and Agents, Palmerston North, New Zealand, 2004. Jong-Hwan Kim, "Ubiquitous Robot", in Computational Intelligence, Theory and Applications (edited by B. Reusch), Springer, pp. 451–459, 2004 (Keynote Speech Paper of the 8th Fuzzy Days International Conference, Dortmund, Germany, Sep. 2004). Assistive technology Ambient intelligence
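A minimal sketch of the three-component Ubibot structure described above follows: an Embot reports observations from the environment, a Sobot decides what to do, and a Mobot carries the action out in the real world. All class and method names here are hypothetical illustrations and do not correspond to the actual KAIST middleware.

```python
class Embot:
    """Embedded robot: sensors in the surroundings that report observations."""
    def sense(self) -> dict:
        return {"person_detected": True, "room": "kitchen"}

class Mobot:
    """Mobile robot: the physical platform that acts in the real world."""
    def execute(self, command: str) -> None:
        print(f"Mobot executing: {command}")

class Sobot:
    """Software robot: the 'brain' that interprets Embot data and drives the Mobot."""
    def __init__(self, embot: Embot, mobot: Mobot):
        self.embot = embot
        self.mobot = mobot

    def step(self) -> None:
        observation = self.embot.sense()
        if observation.get("person_detected"):
            self.mobot.execute(f"greet the person in the {observation['room']}")

Sobot(Embot(), Mobot()).step()
```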
Ubiquitous robot
[ "Physics", "Technology", "Engineering" ]
544
[ "Machines", "Robotics engineering", "Robots", "Physical systems", "Computing and society", "Robotics software", "Ambient intelligence" ]
14,716,807
https://en.wikipedia.org/wiki/Daan%20Frenkel
Daan Frenkel (born 1948, Amsterdam) is a Dutch computational physicist in the Department of Chemistry at the University of Cambridge. Education Frenkel completed his PhD at the University of Amsterdam in 1977 in experimental physical chemistry. Career and research Frenkel worked as postdoctoral research fellow in the Chemistry and Biochemistry Department at the University of California, Los Angeles (UCLA), subsequently at Shell and at the University of Utrecht. Between 1987 and 2007, Frenkel carried out his research at the FOM Institute for Atomic and Molecular Physics (AMOLF) in Amsterdam where he has been employed since 1987. In the same period, he was appointed (part-time) professor at the Universities of Utrecht and Amsterdam. From 2011 to 2015 he was Head of the Department of Chemistry at the University of Cambridge. Since 2007 he is a Professor of Chemistry at the University of Cambridge. Frenkel has co-authored together with Berend Smit Understanding Molecular Simulation, which has grown into a handbook used worldwide by aspiring computational physicists. Awards and honours In 2000 he was one of three winners of the Dutch Spinoza Prize. In 2008 he was appointed a Fellow of Trinity College, Cambridge. He is a member of the Royal Netherlands Academy of Arts and Sciences (1998), the American Academy of Arts and Sciences (2008), and The World Academy of Sciences (TWAS) in 2012. He was elected a Foreign Member of the Royal Society (ForMemRS) in 2006. In 2016 he was elected as a foreign associate of the National Academy of Sciences. In 2007 he received the Aneesur Rahman Prize from the American Physical Society (APS) and the Berni J Alder CECAM prize. In 2010 he received the Soft Matter and Biophysical Chemistry Award from the Royal Society of Chemistry (RSC), UK. He received the 2016 Boltzmann Medal and the 2022 Lorentz Medal. Asteroid 12651 Frenkel, discovered by astronomers during the third Palomar–Leiden trojan survey in 1977, was named in his honor in 2018. References 1948 births Living people Computational physicists Dutch physical chemists 21st-century Dutch chemists 20th-century Dutch physicists Fellows of Trinity College, Cambridge Foreign members of the Royal Society Members of the Royal Netherlands Academy of Arts and Sciences Foreign associates of the National Academy of Sciences Members of the University of Cambridge Department of Chemistry Scientists from Amsterdam Spinoza Prize winners University of Amsterdam alumni Academic staff of Utrecht University 21st-century Dutch physicists
Daan Frenkel
[ "Physics" ]
501
[ "Computational physicists", "Computational physics" ]
14,717,810
https://en.wikipedia.org/wiki/Gas%20Turbine%20Research%20Establishment
Gas Turbine Research Establishment (GTRE) is a laboratory of the Defence Research and Development Organisation (DRDO). Located in Bengaluru, its primary function is research and development of aero gas-turbines for military aircraft. As a spin-off effect, GTRE has also been developing marine gas-turbines. It was initially known as GTRC (Gas Turbine Research Centre), created in 1959 at No. 4 BRD Air Force Station, Kanpur, Uttar Pradesh. In November 1961 it was brought under DRDO, renamed GTRE and moved to Bengaluru, Karnataka. GTRE has consistently faced criticism for failing to develop an indigenous jet engine for fighter aircraft. Products Principal achievements of Gas Turbine Research Establishment include: Design and development of India's "first centrifugal type 10 kN thrust engine" between 1959 and 1961. Design and development of a "1700K reheat system" for the Orpheus 703 engine to boost its power; the redesigned system was certified in 1973. Successful upgrade of the reheat system of the Orpheus 703 to 2000K. Improvement of the Orpheus 703 engine by replacing "the front subsonic compressor stage" with a "transonic compressor stage" to increase the "basic dry thrust" of the engine. Design and development of a "demonstrator" gas turbine engine—GTX 37-14U—for fighter aircraft; performance trials commenced in 1977 and the "demonstrator phase" was completed in 1981. The GTX 37-14U was "configured" and "optimized" to build a "low by-pass ratio jet engine" for "multirole performance aircraft"; this engine was dubbed GTX 37-14U B. GTX Kaveri The GTX-35VS Kaveri engine was intended to power production models of the HAL Tejas. Defending the programme, GTRE cited reasons for the delay, including: non-availability of a state-of-the-art wind tunnel facility in India; the technology restrictions imposed by the US by placing it on the "entities" list. Both hurdles having been cleared, GTRE intended to continue work on the AMCA (future-generation fighter aircraft). This program was abandoned in 2014. Kaveri Marine Gas Turbine (KMGT) The Kaveri Marine Gas Turbine is a design spin-off from the Kaveri engine, which was designed for Indian combat aircraft. Using the core of the Kaveri engine, GTRE added a low-pressure compressor and turbine as a gas generator and designed a free power turbine to generate shaft power for maritime applications. The involvement of the Indian Navy in the development and testing of the engine has given a tremendous boost to the programme. The base frame for the KMGT was developed by the private player Larsen & Toubro (L&T). Ghatak engine The engine for the DRDO Ghatak will be a 52-kilonewton dry variant of the Kaveri engine and will be used in the unmanned combat aerial vehicle (UCAV). The Government of India has cleared funding of ₹2,650 crore ($394 million) for the project. Manik Engine The Small Turbofan Engine (STFE), also known as the Manik engine, is a 4.5 kN thrust turbofan engine developed by GTRE to power the Nirbhay series of cruise missiles, UAVs under development, and long-range anti-ship and land-attack cruise missile systems. In October 2022, the STFE was successfully flight tested. DRDO is currently searching for a private production partner to mass-produce the Manik engine. It is estimated that 300 units will be produced over the course of five years; this quantity could be allocated among the GTRE-identified industries. An Expression of Interest (EOI) will first identify two industries to supply three engines each over the course of eighteen months. After that, an RFI for mass-production quantities will be issued.
In April 2024, the DRDO designed Indigenous Technology Cruise Missile (ITCM), which incorporates the Manik engine, was successfully tested. In July 2024, ABI Showatech India Pvt Ltd was awarded the contract to supply Casting Vane Low-Pressure Turbine (LPNGV) subcomponent of the engine as a part of the cruise missile programme. The low pressure turbine is "responsible for extracting energy from the exhaust gases to drive the fan and other compressor stages." The current STFE production plant is located near Thiruvananthapuram International Airport in Kerala for Limited Series Production for testing purpose of Nirbhay cruise missile. Testing The KMGT was tested on the Marine Gas Turbine test bed, an Indian Navy facility at Vishakhapatnam. The engine has been tested to its potential of 12 MW at ISA SL 35 °C condition, a requirement of the Navy to propel SNF class ships, such as the Rajput class destroyers. Manufacturing The Ministry of Defence (MoD) has awarded Azad Engineering Limited a contract to serve as a production agency for engines designed by the Gas Turbine Research Establishment. Assembling and manufacturing what is known as an Advanced Turbo Gas Generator (ATGG) engine is the focus of the present long-term contract. This is meant to power various defense applications, such as the gas turbine engine that powers the Indian Army's fleet of infantry combat vehicles (ICVs) and tanks, the marine gas turbine engine (MGTE) for upcoming Indian Navy warships, and the GTX-35VS Kaveri turbofan engine for the Tejas fighter. By early 2026, Azad must begin delivering its first batch of fully integrated engines. Using components including a 4-stage axial flow compressor, an annular combustor, a single-stage axial flow uncooled turbine, and a fixed exit area nozzle, the engine is built using a single-spool turbojet configuration. Azad Engineering will be essential to GTRE as a single source industry partner. In 2024, discussions began between Safran, a French defence and aerospace company, and DRDO's Aeronautical Development Agency and GTRE for future technology transfer and manufacturing of jet engines for India's 5th generation Advanced Medium Combat Aircraft (AMCA) programme. Industry collaboration For Combat Aircraft Engine Development Program, PTC Industries Limited, a Titanium recycling and aerospace component forging company has taken up a developmental contract for essential components on 6 December 2022. GTRE is expanding PTC Industries' capacity to produce vital titanium alloy aero engine and aircraft parts through investment casting – hot isostatic pressing technology. In cooperation with GTRE, a prototype of the Engine Bevel Pinion Housing has already been developed. Jet engine development criticism GTRE has been frequently criticised for its failure to develop an indigenous jet engine for fighter aircraft, a project the laboratory has been working on since 1982. As of 2023, GTRE has not been able to overcome its engine development issues regarding metallurgy for turbine blades and other engine blade technologies, lack of a flying testbed and wind tunnel to validate engines above a 90 Kilo Newton (KN) thrust. 
References External links Gas Turbine Research Establishment Gas Turbine Research Establishment (GTRE) Defence Research and Development Organisation laboratories Research institutes in Bengaluru Engineering research institutes Aircraft engine manufacturers of India Gas turbine manufacturers Marine engine manufacturers Research institutes in Lucknow Engine manufacturers of India 1959 establishments in Mysore State
Gas Turbine Research Establishment
[ "Engineering" ]
1,490
[ "Engineering research institutes" ]
14,717,987
https://en.wikipedia.org/wiki/Global%20element
In category theory, a global element of an object A from a category is a morphism h: 1 → A, where 1 is a terminal object of the category. Roughly speaking, global elements are a generalization of the notion of "elements" from the category of sets, and they can be used to import set-theoretic concepts into category theory. However, unlike a set, an object of a general category need not be determined by its global elements (not even up to isomorphism). For example, the terminal object of the category Grph of graph homomorphisms has one vertex and one edge, a self-loop, whence the global elements of a graph are its self-loops, conveying no information either about other kinds of edges, or about vertices having no self-loop, or about whether two self-loops share a vertex. In an elementary topos the global elements of the subobject classifier form a Heyting algebra when ordered by inclusion of the corresponding subobjects of the terminal object. For example, Grph happens to be a topos, whose subobject classifier is a two-vertex directed clique with an additional self-loop (so five edges, three of which are self-loops and hence the global elements of the classifier). The internal logic of Grph is therefore based on the three-element Heyting algebra as its truth values. A well-pointed category is a category that has enough global elements to distinguish every two morphisms. That is, for each pair of distinct arrows in the category, there should exist a global element whose compositions with them are different from each other. References Objects (category theory)
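To make the remark about the category of sets concrete, the toy sketch below enumerates the morphisms from a one-element set (a terminal object in the category of finite sets) into a given set; each such morphism picks out exactly one ordinary element, which is the sense in which global elements generalise "elements". This is only an illustration in Set, not a general category-theory implementation.

```python
# The terminal object in the category of finite sets: any one-element set.
TERMINAL = ("*",)

def global_elements(a):
    """All functions TERMINAL -> a, each represented as a dict sending '*' to a value."""
    return [{"*": x} for x in a]

A = ("red", "green", "blue")
for g in global_elements(A):
    print(g)   # {'*': 'red'}, {'*': 'green'}, {'*': 'blue'}

# In Set these morphisms correspond one-to-one with the elements of A.
# In other categories, as the graph example above shows, global elements
# can carry much less information (for graphs they see only self-loops).
```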
Global element
[ "Mathematics" ]
337
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations" ]
14,718,234
https://en.wikipedia.org/wiki/Johnsen%E2%80%93Rahbek%20effect
The Johnsen–Rahbek effect occurs when an electric potential is applied across the boundary between a metallic surface and the surface of a semiconducting material or a polyelectrolyte. Under these conditions an attractive force appears, whose magnitude depends on the voltage and the specific materials involved. The attractive force is much larger than would be produced by Coulombic attraction. The effect is named after Danish engineers F. A. Johnsen and K. Rahbek, the first to investigate the effect at length.
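For orientation only, the sketch below computes the idealized parallel-plate electrostatic pressure, P = ½ ε0 (V/d)², that a plain capacitor-like gap would produce at an assumed voltage and gap width. The voltage and gap values are assumptions chosen for illustration; the sketch is meant only as the Coulombic baseline that the Johnsen–Rahbek force is described above as greatly exceeding, not as a model of the effect itself, which depends on the specific materials at the interface.

```python
# Hedged baseline: ideal parallel-plate electrostatic pressure,
# NOT the Johnsen-Rahbek force itself (that depends on the materials involved).
EPSILON_0 = 8.854e-12   # F/m, vacuum permittivity

def electrostatic_pressure(voltage, gap):
    """Pressure (Pa) between ideal parallel plates at the given voltage and gap."""
    field = voltage / gap            # electric field, V/m
    return 0.5 * EPSILON_0 * field ** 2

# Assumed, illustrative values
V = 100.0                    # volts applied across the boundary
for gap in (1e-3, 1e-5):     # 1 mm versus 10 micrometre effective gap
    p = electrostatic_pressure(V, gap)
    print(f"gap = {gap * 1e6:7.1f} um -> pressure = {p:10.3f} Pa")
```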
Johnsen–Rahbek effect
[ "Physics", "Engineering" ]
124
[ "Electrical engineering", "Classical mechanics stubs", "Mechanics", "Classical mechanics" ]
14,718,510
https://en.wikipedia.org/wiki/Communication%20Linking%20Protocol
Communication Linking Protocol (CLP) is a communications protocol used to communicate with many devices using the Motorola ReFLEX network. CLP allows a user to direct a ReFLEX-capable device to send or receive messages. CLP is used by Advantra's ReFLEX devices. Advantra's ReFLEX product line was purchased by Inilex, which now manufactures the devices.
Communication Linking Protocol
[ "Technology", "Engineering" ]
102
[ "Wireless networking", "Computer networks engineering", "Radio paging" ]
14,718,550
https://en.wikipedia.org/wiki/Arie%20Rip
Arie Rip (born 13 June 1941, in Kethel en Spaland) is a Dutch professor emeritus of Philosophy of Science and Technology.

Career

During 1988–1989 he was president of the international Society for Social Studies of Science. From 2000 until 2005 he was the head of WTMC, the Netherlands Graduate School of Science, Technology and Modern Culture, a formal collaboration of Dutch researchers studying the development of science, technology and modern culture. In 2006 Rip formally retired as Professor of Philosophy of Science and Technology at the University of Twente, a position he had held since 1987. He has published extensively on various topics concerning the philosophy and sociology of scientific and technological developments, and on science and innovation policy. Rip introduced, for example, the widely used Constructive Technology Assessment method. He is currently, among other appointments, a professor at the University of Stellenbosch in South Africa. Rip became chairman of the Society for the Study of Nanoscience and Emerging Technologies (S-NET) in 2008. In 2022 Rip received the John Desmond Bernal Prize of the Society for Social Studies of Science (4S) for his oeuvre; the prize was presented in the second week of December 2022 during a meeting of the Society at the Universidad Iberoamericana Puebla in San Andrés Cholula, Puebla, Mexico.

Key publications

Arie Rip (1981) Maatschappelijke Verantwoordelijkheid van Chemici [Societal Responsibility of Chemists], PhD thesis, Leiden University, Leiden
Arie Rip (1994) The republic of science in the 1990s, Higher Education, Vol. 28, pp. 3–23
Arie Rip, Thomas Misa, and Johan Schot (eds.) (1995) Managing Technology in Society: The Approach of Constructive Technology Assessment, Pinter, London/New York
Johan Schot and Arie Rip (1996) The past and future of constructive technology assessment, Technological Forecasting and Social Change, Vol. 54, pp. 251–268
Arie Rip (1997) A cognitive approach to the relevance of science, Social Science Information, Vol. 36 (4), pp. 615–640
Harro van Lente and Arie Rip (1998) The rise of membrane technology: from rhetorics to social reality, Social Studies of Science, Vol. 28 (2), pp. 221–254
René Kemp, Arie Rip and Johan Schot (2001) Constructing transition paths through the management of niches, in: Garud, R., Karnoe, P. (eds.), Path Dependence and Creation, pp. 269–302
Arie Rip (2002) Science for the 21st century, in: Tindemans, P., Verrijn-Stuart, A., Visser, R. (eds.), The Future of Science and the Humanities, Amsterdam University Press, Amsterdam, pp. 99–148
Stefan Kuhlmann and Arie Rip (2018) Next-generation innovation policy and grand challenges, Science and Public Policy, Vol. 45 (4), pp. 448–454, https://doi.org/10.1093/scipol/scy011

External links

https://people.utwente.nl/a.rip
http://www.wtmc.net
Arie Rip
[ "Technology" ]
711
[ "Science and technology studies", "Science and technology studies scholars" ]