174,908
https://en.wikipedia.org/wiki/Functional%20predicate
In formal logic and related branches of mathematics, a functional predicate, or function symbol, is a logical symbol that may be applied to an object term to produce another object term. Functional predicates are also sometimes called mappings, but that term has additional meanings in mathematics. In a model, a function symbol will be modelled by a function. Specifically, the symbol F in a formal language is a functional symbol if, given any symbol X representing an object in the language, F(X) is again a symbol representing an object in that language. In typed logic, F is a functional symbol with domain type T and codomain type U if, given any symbol X representing an object of type T, F(X) is a symbol representing an object of type U. One can similarly define function symbols of more than one variable, analogous to functions of more than one variable; a function symbol in zero variables is simply a constant symbol. Now consider a model of the formal language, with the types T and U modelled by sets [T] and [U] and each symbol X of type T modelled by an element [X] in [T]. Then F can be modelled by the set [F] = {([X], [F(X)]) : [X] ∈ [T]}, which is simply a function with domain [T] and codomain [U]. It is a requirement of a consistent model that [F(X)] = [F(Y)] whenever [X] = [Y]. Introducing new function symbols In a treatment of predicate logic that allows one to introduce new predicate symbols, one will also want to be able to introduce new function symbols. Given the function symbols F and G, one can introduce a new function symbol F ∘ G, the composition of F and G, satisfying (F ∘ G)(X) = F(G(X)) for all X. Of course, the right side of this equation doesn't make sense in typed logic unless the domain type of F matches the codomain type of G, so this is required for the composition to be defined. One also gets certain function symbols automatically. In untyped logic, there is an identity predicate id that satisfies id(X) = X for all X. In typed logic, given any type T, there is an identity predicate id_T with domain and codomain type T; it satisfies id_T(X) = X for all X of type T. Similarly, if T is a subtype of U, then there is an inclusion predicate of domain type T and codomain type U that satisfies the same equation; there are additional function symbols associated with other ways of constructing new types out of old ones. Additionally, one can define functional predicates after proving an appropriate theorem. (If you're working in a formal system that doesn't allow you to introduce new symbols after proving theorems, then you will have to use relation symbols to get around this, as in the next section.) Specifically, if you can prove that for every X (or every X of a certain type), there exists a unique Y satisfying some condition P, then you can introduce a function symbol F to indicate this. Note that P will itself be a relational predicate involving both X and Y. So if there is such a predicate P and a theorem: For all X of type T, for some unique Y of type U, P(X,Y), then you can introduce a function symbol F of domain type T and codomain type U that satisfies: For all X of type T, for all Y of type U, P(X,Y) if and only if Y = F(X). Doing without functional predicates Many treatments of predicate logic don't allow functional predicates, only relational predicates. This is useful, for example, in the context of proving metalogical theorems (such as Gödel's incompleteness theorems), where one doesn't want to allow the introduction of new function symbols (nor any other new symbols, for that matter).
But there is a method of replacing function symbols with relation symbols wherever the former may occur; furthermore, this is algorithmic and thus suitable for applying most metalogical theorems to the result. Specifically, if F has domain type T and codomain type U, then it can be replaced with a predicate P of type (T,U). Intuitively, P(X,Y) means F(X) = Y. Then whenever F(X) would appear in a statement, you can replace it with a new symbol Y of type U and include another statement P(X,Y). To be able to make the same deductions, you need an additional proposition: For all X of type T, for some unique Y of type U, P(X,Y). (Of course, this is the same proposition that had to be proven as a theorem before introducing a new function symbol in the previous section.) Because the elimination of functional predicates is both convenient for some purposes and possible, many treatments of formal logic do not deal explicitly with function symbols but instead use only relation symbols; another way to think of this is that a functional predicate is a special kind of predicate, specifically one that satisfies the proposition above. This may seem to be a problem if you wish to specify a proposition schema that applies only to functional predicates F; how do you know ahead of time whether it satisfies that condition? To get an equivalent formulation of the schema, first replace anything of the form F(X) with a new variable Y. Then universally quantify over each Y immediately after the corresponding X is introduced (that is, after X is quantified over, or at the beginning of the statement if X is free), and guard the quantification with P(X,Y). Finally, make the entire statement a material consequence of the uniqueness condition for a functional predicate above. Let us take as an example the axiom schema of replacement in Zermelo–Fraenkel set theory. (This example uses mathematical symbols.) This schema states (in one form), for any functional predicate F in one variable: ∀A, ∃B, ∀C, C ∈ A ⇒ F(C) ∈ B. First, we must replace F(C) with some other variable D: ∀A, ∃B, ∀C, C ∈ A ⇒ D ∈ B. Of course, this statement isn't correct; D must be quantified over just after C: ∀A, ∃B, ∀C, ∀D, C ∈ A ⇒ D ∈ B. We still must introduce P to guard this quantification: ∀A, ∃B, ∀C, ∀D, P(C,D) ⇒ (C ∈ A ⇒ D ∈ B). This is almost correct, but it applies to too many predicates; what we actually want is: (∀X, ∃!Y, P(X,Y)) ⇒ (∀A, ∃B, ∀C, ∀D, P(C,D) ⇒ (C ∈ A ⇒ D ∈ B)). This version of the axiom schema of replacement is now suitable for use in a formal language that doesn't allow the introduction of new function symbols. Alternatively, one may interpret the original statement as a statement in such a formal language; it was merely an abbreviation for the statement produced at the end. See also Function symbol (logic) Logical connective Logical constant Model theory
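The graph-relation translation above can be made concrete in a proof assistant. Below is a minimal, hypothetical sketch in Lean 4 (the names graph, ofGraph and ofGraph_spec are illustrative, not from the article): given the totality half of the condition on P, Classical.choose plays the role of introducing the function symbol F from its defining relation.

-- Hypothetical sketch (Lean 4): a function symbol F versus its graph relation P.
variable {T U : Type}

-- The graph relation of F: P(X,Y) means F(X) = Y.
def graph (F : T → U) : T → U → Prop :=
  fun x y => F x = y

-- Introducing a function symbol from a total relation, as in the section:
-- Classical.choose picks, for each x, some y with P x y.
noncomputable def ofGraph (P : T → U → Prop)
    (total : ∀ x, ∃ y, P x y) : T → U :=
  fun x => Classical.choose (total x)

-- The introduced symbol satisfies its defining condition: P(X, F(X)) holds.
theorem ofGraph_spec (P : T → U → Prop)
    (total : ∀ x, ∃ y, P x y) (x : T) :
    P x (ofGraph P total x) :=
  Classical.choose_spec (total x)

With the uniqueness half of the condition one can further show P x y ↔ y = ofGraph P total x, which is exactly the biconditional stated in the previous section.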
Functional predicate
[ "Mathematics" ]
1,434
[ "Mathematical logic", "Model theory" ]
174,914
https://en.wikipedia.org/wiki/Atomic%20units
The atomic units are a system of natural units of measurement that is especially convenient for calculations in atomic physics and related scientific fields, such as computational chemistry and atomic spectroscopy. They were originally suggested and named by the physicist Douglas Hartree. Atomic units are often abbreviated "a.u." or "au", not to be confused with similar abbreviations used for astronomical units, arbitrary units, and absorbance units in other contexts. Motivation In the context of atomic physics, using the atomic units system can be a convenient shortcut, eliminating symbols and numbers and reducing the order of magnitude of most numbers involved. For example, the Hamiltonian operator in the Schrödinger equation for the helium atom with standard quantities, such as when using SI units, is H = −(ħ²/2mₑ)(∇₁² + ∇₂²) − (e²/(4πε₀))(2/r₁ + 2/r₂ − 1/r₁₂), but adopting the convention associated with atomic units that transforms quantities into dimensionless equivalents, it becomes H = −(1/2)(∇₁² + ∇₂²) − 2/r₁ − 2/r₂ + 1/r₁₂. In this convention, the constants ħ, e, mₑ, and 4πε₀ all correspond to the value 1 (see below). The distances relevant to the physics expressed in SI units are naturally on the order of 10⁻¹⁰ m, while expressed in atomic units distances are on the order of 1 (one Bohr radius, the atomic unit of length). An additional benefit of expressing quantities using atomic units is that their values calculated and reported in atomic units do not change when values of fundamental constants are revised, since the fundamental constants are built into the conversion factors between atomic units and SI. History Hartree defined units based on three physical constants: the electron mass m, the electron charge e, and his unit of length a. Here, the modern equivalent of R is the Rydberg constant R∞, of m is the electron mass mₑ, of a is the Bohr radius a₀, and of h/2π is the reduced Planck constant ħ. Hartree's expressions that contain e differ from the modern form due to a change in the definition of e, as explained below. In 1957, Bethe and Salpeter's book Quantum mechanics of one- and two-electron atoms built on Hartree's units, which they called atomic units, abbreviated "a.u.". They chose to use ħ, their unit of action and angular momentum, in place of Hartree's length as a base unit. They noted that the unit of length in this system is the radius of the first Bohr orbit, and their velocity is the electron velocity in Bohr's model of the first orbit. In 1959, Shull and Hall advocated atomic units based on Hartree's model but again chose to use ħ as the defining unit. They explicitly named the distance unit a "Bohr radius"; in addition, they wrote the unit of energy as e²/a₀ and called it a Hartree. These terms came to be used widely in quantum chemistry. In 1973 McWeeny extended the system of Shull and Hall by adding permittivity in the form of 4πε₀ as a defining or base unit. Simultaneously he adopted the SI definition of e, so that his expression for energy in atomic units is e²/(4πε₀a₀), matching the expression in the 8th SI brochure. Definition A set of base units in the atomic system, as in one proposal, are the electron rest mass, the magnitude of the electronic charge, the reduced Planck constant, and the permittivity. In the atomic units system, each of these takes the value 1. Units Three of the defining constants (reduced Planck constant, elementary charge, and electron rest mass) are atomic units themselves – of action, electric charge, and mass, respectively. Two named units are those of length (the Bohr radius a₀) and energy (the hartree Eₕ).
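The defining relations above fix all other atomic units. As a quick illustration, here is a minimal Python sketch (not part of the article; the constant values are CODATA 2018 figures) that derives the two named units, the Bohr radius and the hartree, from the four base constants:

import math

# CODATA 2018 values in SI units (e is exact since the 2019 SI revision)
hbar = 1.054571817e-34   # reduced Planck constant, J s
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron rest mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)  # Bohr radius (length unit)
Eh = hbar**2 / (m_e * a0**2)                      # hartree (energy unit)

print(f"a0 = {a0:.6e} m")   # ~5.292e-11 m
print(f"Eh = {Eh:.6e} J")   # ~4.360e-18 J, about 27.2 eV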
Conventions Different conventions are adopted in the use of atomic units, which vary in presentation, formality and convenience. Explicit units Many texts (e.g. Jerrard & McNiell, Shull & Hall) define the atomic units as quantities, without a transformation of the equations in use. As such, they do not suggest treating quantities as dimensionless or changing the form of any equations. This is consistent with expressing quantities in terms of dimensional quantities, where the atomic unit is included explicitly as a symbol (e.g. a₀, mₑ or, more ambiguously, a.u.), and keeping equations unaltered with explicit constants. Provision is also made for choosing closely related quantities that are more suited to the problem as units than universal fixed units, for example the reduced mass of an electron, albeit with careful definition thereof where used. A convention that eliminates units In atomic physics, it is common to simplify mathematical expressions by a transformation of all quantities: Hartree suggested that expression in terms of atomic units allows us "to eliminate various universal constants from the equations", which amounts to informally suggesting a transformation of quantities and equations such that all quantities are replaced by corresponding dimensionless quantities. He does not elaborate beyond examples. McWeeny suggests that "... their adoption permits all the fundamental equations to be written in a dimensionless form in which constants such as e, mₑ and ħ are absent and need not be considered at all during mathematical derivations or the processes of numerical solution; the units in which any calculated quantity must appear are implicit in its physical dimensions and may be supplied at the end." He also states that "An alternative convention is to interpret the symbols as the numerical measures of the quantities they represent, referred to some specified system of units: in this case the equations contain only pure numbers or dimensionless variables; ... the appropriate units are supplied at the end of a calculation, by reference to the physical dimensions of the quantity calculated. [This] convention has much to recommend it and is tacitly accepted in atomic and molecular physics whenever atomic units are introduced, for example for convenience in computation." An informal approach is often taken, in which "equations are expressed in terms of atomic units simply by setting ħ = e = mₑ = 4πε₀ = 1". This is a form of shorthand for the more formal process of transformation between quantities that is suggested by others, such as McWeeny. Physical constants Dimensionless physical constants retain their values in any system of units. Of note is the fine-structure constant α, which appears in expressions as a consequence of the choice of units. For example, the numeric value of the speed of light, expressed in atomic units, is c = 1/α ≈ 137. Bohr model in atomic units Atomic units are chosen to reflect the properties of electrons in atoms, which is particularly clear in the classical Bohr model of the hydrogen atom for the bound electron in its ground state: Mass = 1 a.u. of mass Charge = −1 a.u. of charge Orbital radius = 1 a.u. of length Orbital velocity = 1 a.u. of velocity Orbital period = 2π a.u. of time Orbital angular velocity = 1 radian per a.u. of time Orbital momentum = 1 a.u. of momentum Ionization energy = 1/2 a.u. of energy Electric field (due to nucleus) = 1 a.u. of electric field Lorentz force (due to nucleus) = 1 a.u.
of force References Systems of units Natural units Atomic physics
Atomic units
[ "Physics", "Chemistry", "Mathematics" ]
1,426
[ "Systems of units", "Quantity", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Units of measurement", " and optical physics" ]
174,945
https://en.wikipedia.org/wiki/Elementary%20charge
The elementary charge, usually denoted by e, is a fundamental physical constant, defined as the electric charge carried by a single proton (+1 e) or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e. In the SI system of units, the value of the elementary charge is exactly defined as 1.602176634×10⁻¹⁹ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 revision of the SI, the seven SI base units are defined in terms of seven fundamental physical constants, of which the elementary charge is one. In the centimetre–gram–second system of units (CGS), the corresponding quantity is approximately 4.803×10⁻¹⁰ statcoulombs. Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865. As a unit In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron. In other natural unit systems, the unit of charge is defined as √(4πε₀ħc), with the result that e = √α · √(4πε₀ħc) ≈ 0.3028 √(4πε₀ħc), where α is the fine-structure constant, c is the speed of light, ε₀ is the electric constant, and ħ is the reduced Planck constant. Quantization Charge quantization is the principle that the charge of any object is an integer multiple of the elementary charge. Thus, an object's charge can be exactly 0 e, or exactly 1 e, −1 e, 2 e, etc., but not 1/2 e, or −3.8 e, etc. (There may be exceptions to this statement, depending on how "object" is defined; see below.) This is the reason for the terminology "elementary charge": it is meant to imply that it is an indivisible unit of charge. Fractional elementary charge There are two known sorts of exceptions to the indivisibility of the elementary charge: quarks and quasiparticles. Quarks, first posited in the 1960s, have quantized charge, but the charge is quantized into multiples of 1/3 e. However, quarks cannot be isolated; they exist only in groupings, and stable groupings of quarks (such as a proton, which consists of three quarks) all have charges that are integer multiples of e. For this reason, either 1 e or 1/3 e can be justifiably considered to be "the quantum of charge", depending on the context. This charge commensurability, "charge quantization", has partially motivated grand unified theories. Quasiparticles are not particles as such, but rather an emergent entity in a complex material system that behaves like a particle. In 1982 Robert Laughlin explained the fractional quantum Hall effect by postulating the existence of fractionally charged quasiparticles.
This theory is now widely accepted, but this is not considered to be a violation of the principle of charge quantization, since quasiparticles are not elementary particles. Quantum of charge All known elementary particles, including quarks, have charges that are integer multiples of 1/3 e. Therefore, the "quantum of charge" is 1/3 e. In this case, one says that the "elementary charge" is three times as large as the "quantum of charge". On the other hand, all isolatable particles have charges that are integer multiples of e. (Quarks cannot be isolated: they exist only in collective states like protons that have total charges that are integer multiples of e.) Therefore, the "quantum of charge" is e, with the proviso that quarks are not to be included. In this case, "elementary charge" would be synonymous with the "quantum of charge". In fact, both terminologies are used. For this reason, phrases like "the quantum of charge" or "the indivisible unit of charge" can be ambiguous unless further specification is given. On the other hand, the term "elementary charge" is unambiguous: it refers to a quantity of charge equal to that of a proton. Lack of fractional charges Paul Dirac argued in 1931 that if magnetic monopoles exist, then electric charge must be quantized; however, it is unknown whether magnetic monopoles actually exist. It is currently unknown why isolatable particles are restricted to integer charges; much of the string theory landscape appears to admit fractional charges. Experimental measurements of the elementary charge The elementary charge is exactly defined since 20 May 2019 by the International System of Units. Prior to this change, the elementary charge was a measured quantity whose magnitude was determined experimentally. This section summarizes these historical experimental measurements. In terms of the Avogadro constant and Faraday constant If the Avogadro constant NA and the Faraday constant F are independently known, the value of the elementary charge can be deduced using the formula e = F/NA. (In other words, the charge of one mole of electrons, divided by the number of electrons in a mole, equals the charge of a single electron.) This method is not how the most accurate values are measured today. Nevertheless, it is a legitimate and still quite accurate method, and experimental methodologies are described below. The value of the Avogadro constant NA was first approximated by Johann Josef Loschmidt who, in 1865, estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas. Today the value of NA can be measured at very high accuracy by taking an extremely pure crystal (often silicon), measuring how far apart the atoms are spaced using X-ray diffraction or another method, and accurately measuring the density of the crystal. From this information, one can deduce the mass (m) of a single atom; and since the molar mass (M) is known, the number of atoms in a mole can be calculated: NA = M/m. The value of F can be measured directly using Faraday's laws of electrolysis. Faraday's laws of electrolysis are quantitative relationships based on the electrochemical researches published by Michael Faraday in 1834. In an electrolysis experiment, there is a one-to-one correspondence between the electrons passing through the anode-to-cathode wire and the ions that plate onto or off of the anode or cathode.
Measuring the mass change of the anode or cathode, and the total charge passing through the wire (which can be measured as the time-integral of electric current), and also taking into account the molar mass of the ions, one can deduce F. The limit to the precision of the method is the measurement of F: the best experimental value has a relative uncertainty of 1.6 ppm, about thirty times higher than other modern methods of measuring or calculating the elementary charge. Oil-drop experiment A famous method for measuring e is Millikan's oil-drop experiment. A small drop of oil in an electric field would move at a rate that balanced the forces of gravity, viscosity (of traveling through the air), and electric force. The forces due to gravity and viscosity could be calculated based on the size and velocity of the oil drop, so electric force could be deduced. Since electric force, in turn, is the product of the electric charge and the known electric field, the electric charge of the oil drop could be accurately computed. By measuring the charges of many different oil drops, it can be seen that the charges are all integer multiples of a single small charge, namely e. The necessity of measuring the size of the oil droplets can be eliminated by using tiny plastic spheres of a uniform size. The force due to viscosity can be eliminated by adjusting the strength of the electric field so that the sphere hovers motionless. Shot noise Any electric current will be associated with noise from a variety of sources, one of which is shot noise. Shot noise exists because a current is not a smooth continual flow; instead, a current is made up of discrete electrons that pass by one at a time. By carefully analyzing the noise of a current, the charge of an electron can be calculated. This method, first proposed by Walter H. Schottky, can determine a value of e whose accuracy is limited to a few percent. However, it was used in the first direct observation of Laughlin quasiparticles, implicated in the fractional quantum Hall effect. From the Josephson and von Klitzing constants Another accurate method for measuring the elementary charge is by inferring it from measurements of two effects in quantum mechanics: The Josephson effect, voltage oscillations that arise in certain superconducting structures; and the quantum Hall effect, a quantum effect of electrons at low temperatures, strong magnetic fields, and confinement into two dimensions. The Josephson constant is KJ = 2e/h, where h is the Planck constant. It can be measured directly using the Josephson effect. The von Klitzing constant is RK = h/e². It can be measured directly using the quantum Hall effect. From these two constants, the elementary charge can be deduced: e = 2/(KJ RK). CODATA method The relation used by CODATA to determine the elementary charge was e = √(2hα/(μ₀c)), where h is the Planck constant, α is the fine-structure constant, μ0 is the magnetic constant, ε0 is the electric constant, and c is the speed of light. Presently this equation reflects a relation between ε0 and α, while all others are fixed values. Thus the relative standard uncertainties of both will be the same. Tests of the universality of elementary charge See also Committee on Data of the International Science Council Notes References Further reading Fundamentals of Physics, 7th ed., David Halliday, Robert Resnick, and Jearl Walker. Wiley, 2005 Physical constants Units of electrical charge
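As a sanity check on the two deductions above, here is a small Python sketch (not from the article; the constant values are CODATA-style figures) computing e both from the Josephson and von Klitzing constants and from the Faraday and Avogadro constants:

# e from the quantum-electrical constants: KJ = 2e/h and RK = h/e^2,
# so KJ * RK = 2/e and e = 2 / (KJ * RK).
K_J = 483597.8484e9    # Josephson constant, Hz/V
R_K = 25812.80745      # von Klitzing constant, ohm
print(2.0 / (K_J * R_K))   # ~1.60218e-19 C

# e from electrolysis-style data: e = F / N_A.
F   = 96485.33212      # Faraday constant, C/mol
N_A = 6.02214076e23    # Avogadro constant, 1/mol
print(F / N_A)             # ~1.60218e-19 C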
Elementary charge
[ "Physics", "Mathematics" ]
2,207
[ "Physical quantities", "Electric charge", "Quantity", "Physical constants", "Units of electrical charge", "Units of measurement" ]
174,955
https://en.wikipedia.org/wiki/Bohr%20magneton
In atomic physics, the Bohr magneton (symbol μB) is a physical constant and the natural unit for expressing the magnetic moment of an electron caused by its orbital or spin angular momentum. In SI units, the Bohr magneton is defined as μB = eħ/(2mₑ) and in the Gaussian CGS units as μB = eħ/(2mₑc), where e is the elementary charge, ħ is the reduced Planck constant, mₑ is the electron mass, and c is the speed of light. History The idea of elementary magnets is due to Walther Ritz (1907) and Pierre Weiss. Already before the Rutherford model of atomic structure, several theorists commented that the magneton should involve the Planck constant h. By postulating that the ratio of electron kinetic energy to orbital frequency should be equal to h, Richard Gans computed a value that was twice as large as the Bohr magneton in September 1911. At the First Solvay Conference in November that year, Paul Langevin obtained a value of the same order by assuming that the attractive force was inversely proportional to a power of the distance. The Romanian physicist Ștefan Procopiu had obtained the expression for the magnetic moment of the electron in 1913. The value is sometimes referred to as the "Bohr–Procopiu magneton" in Romanian scientific literature. The Weiss magneton was experimentally derived in 1911 as a unit of magnetic moment equal to about 20% of the Bohr magneton. In the summer of 1913, the values for the natural units of atomic angular momentum and magnetic moment were obtained by the Danish physicist Niels Bohr as a consequence of his atom model. In 1920, Wolfgang Pauli gave the Bohr magneton its name in an article where he contrasted it with the magneton of the experimentalists, which he called the Weiss magneton. Theory A magnetic moment of an electron in an atom is composed of two components. First, the orbital motion of an electron around a nucleus generates a magnetic moment by Ampère's circuital law. Second, the inherent rotation, or spin, of the electron has a spin magnetic moment. In the Bohr model of the atom, for an electron that is in the orbit of lowest energy, its orbital angular momentum has magnitude equal to the reduced Planck constant, denoted ħ. The Bohr magneton is the magnitude of the magnetic dipole moment of an electron orbiting an atom with this angular momentum. The spin angular momentum of an electron is ħ/2, but the intrinsic electron magnetic moment caused by its spin is also approximately one Bohr magneton, which results in the electron spin g-factor, a factor relating spin angular momentum to the corresponding magnetic moment of a particle, having a value of approximately 2. See also Anomalous magnetic moment Electron magnetic moment Bohr radius Nuclear magneton Parson magneton Physical constant Zeeman effect References Atomic physics Niels Bohr Physical constants Quantum magnetism Magnetic moment
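For a quick numerical check of the SI definition above, here is a minimal Python sketch (not from the article; the constant values are CODATA 2018 figures):

hbar = 1.054571817e-34   # reduced Planck constant, J s
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron rest mass, kg

mu_B = e * hbar / (2 * m_e)   # Bohr magneton, SI definition
print(f"{mu_B:.6e} J/T")      # ~9.274010e-24 J/T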
Bohr magneton
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
581
[ "Physical quantities", "Quantity", "Quantum mechanics", "Magnetic moment", "Quantum magnetism", "Physical constants", " molecular", " and optical physics", "Atomic physics", "Atomic", "Condensed matter physics", "Moment (physics)" ]
175,004
https://en.wikipedia.org/wiki/Light-on-dark%20color%20scheme
A light-on-dark color scheme, better known as dark mode, dark theme or night mode, is a color scheme that uses light-colored text, icons, and graphical user interface elements on a dark background. It is often discussed in terms of computer user interface design and web design. Many modern websites and operating systems offer the user an optional light-on-dark display mode. Some users find dark mode displays more visually appealing and claim that dark mode can reduce eye strain. Displaying white at full brightness uses roughly six times as much power as pure black on a 2016 Google Pixel, which has an OLED display; conventional backlit LCDs, however, cannot benefit from the reduced power consumption. Most modern operating systems support an optional light-on-dark color scheme. History Predecessors of modern computer screens, such as cathode-ray oscillographs, oscilloscopes, etc., tended to plot graphs and introduce other content as glowing traces on a black background. With the introduction of computer screens, user interfaces were originally formed on cathode-ray tubes (CRTs) like those used for oscillographs or oscilloscopes. The phosphor was normally a very dark color, and lit up brightly when the electron beam hit it, appearing white, green, blue, or amber on a black background, depending on the phosphors applied to a monochrome screen. RGB screens continued to operate similarly, using all the beams set to "on" to form white. With the advent of teletext, research was done into which primary and secondary light colors and combinations worked best for this new medium. Cyan or yellow on black was typically found to be optimal from a palette of black, red, green, yellow, blue, magenta, cyan and white. The opposite color set, a dark-on-light color scheme, was originally introduced in WYSIWYG word processors to simulate ink on paper, and became the norm. Microsoft introduced a dark theme in the Anniversary Update of Windows 10 in 2016. In 2018, Apple followed with macOS Mojave. In September 2019, iOS 13 and Android 10 both introduced dark modes. Some operating systems provide tools to change the dark mode state automatically at sundown or sunrise. Firefox and Chromium have an optional dark theme for all internal screens, and third-party developers can implement their own dark themes. There are also a variety of browser add-ons that can re-theme web sites with dark color schemes, aligning with the system theme. In 2019, a "prefers-color-scheme" option was created for front-end web developers: a CSS media feature that signals a user's choice for their system to use a light or dark color theme. In July 2024, Wikipedia's mobile website received a dark mode. The desktop website later received a dark mode as well. Energy usage Light-on-dark color schemes require less energy to display on OLED displays. This positively impacts battery life and reduces energy consumption. While an OLED will consume around 40% of the power of an LCD displaying an image that is primarily black, it can use more than three times as much power to display an image with a white background, such as a document or web site. This can lead to reduced battery life and higher energy usage unless a light-on-dark color scheme is used. The long-term reduced power usage may also prolong battery life or the useful life of the display and battery.
The energy savings that can be achieved using a light-on-dark color scheme are because of how OLED screens work: in an OLED screen, each subpixel generates its own light and it only consumes power when generating light. This is in contrast to how an LCD works: in an LCD, subpixels either block or allow light from an always-on (lit) LED backlight to pass through. "AMOLED Black" color schemes (that use pure black instead of dark gray) do not necessarily save more energy than other light-on-dark color schemes that use dark gray instead of black, as the power consumption on an AMOLED screen decreases proportionately to the average brightness of the displayed pixels. Although it is true that AMOLED black does save more energy than dark gray, the additional energy savings are often negligible; AMOLED black will only give an additional energy saving of less than 1%, for instance, over the dark gray that is used in the dark theme for Google's official Android apps. In November 2018, Google confirmed that dark mode on Android saved battery life. Issues with the web Some argue that a color scheme with light text on a dark background is easier to read on the screen, because the lower overall brightness causes less eyestrain. Others argue to the contrary. Some pages on the web are designed for white backgrounds; image assets (GIF, PNG, SVG, WOFF, etc.) can produce visual artifacts if dark mode is forced with a plugin like Dark Reader rather than designed in. There is a prefers-color-scheme media feature in CSS to detect whether the user has requested a light or dark color scheme and to serve the requested scheme. The preference can come from the user's operating system setting or from the user agent. CSS example: @media (prefers-color-scheme: dark) { body { color: #ccc; background: #222; } } The same choice can be expressed per property with the light-dark() CSS function, as in this inline-style example: <span style="background-color: light-dark(#fff, #333); color: light-dark(#333, #fff);"></span> JavaScript example: if (window.matchMedia('(prefers-color-scheme: dark)').matches) { dark(); } See also AMOLED Blackle Night Shift (software) OLED Solarized (color scheme) References User interfaces Display technology Color schemes Computer graphics
Light-on-dark color scheme
[ "Technology", "Engineering" ]
1,234
[ "User interfaces", "Electronic engineering", "Interfaces", "Display technology" ]
175,039
https://en.wikipedia.org/wiki/Czochralski%20method
The Czochralski method, also Czochralski technique or Czochralski process, is a method of crystal growth used to obtain single crystals of semiconductors (e.g. silicon, germanium and gallium arsenide), metals (e.g. palladium, platinum, silver, gold), salts and synthetic gemstones. The method is named after Polish scientist Jan Czochralski, who invented the method in 1915 while investigating the crystallization rates of metals. He made this discovery by accident: instead of dipping his pen into his inkwell, he dipped it in molten tin, and drew a tin filament, which later proved to be a single crystal. The method is still used in the production of over 90 percent of all the world's electronics that use semiconductors. The most important application may be the growth of large cylindrical ingots, or boules, of single crystal silicon used in the electronics industry to make semiconductor devices like integrated circuits. Other semiconductors, such as gallium arsenide, can also be grown by this method, although lower defect densities in this case can be obtained using variants of the Bridgman–Stockbarger method. The method is not limited to production of metal or metalloid crystals. For example, it is used to manufacture very high-purity crystals of salts, including material with controlled isotopic composition, for use in particle physics experiments, with tight controls (part per billion measurements) on confounding metal ions and water absorbed during manufacture. Application Monocrystalline silicon (mono-Si) grown by the Czochralski method is often referred to as monocrystalline Czochralski silicon (Cz-Si). It is the basic material in the production of integrated circuits used in computers, TVs, mobile phones and all types of electronic equipment and semiconductor devices. Monocrystalline silicon is also used in large quantities by the photovoltaic industry for the production of conventional mono-Si solar cells. The almost perfect crystal structure yields the highest light-to-electricity conversion efficiency for silicon. Production of Czochralski silicon High-purity, semiconductor-grade silicon (only a few parts per million of impurities) is melted in a crucible, usually made of quartz, at about 1,425 °C. Dopant impurity atoms such as boron or phosphorus can be added to the molten silicon in precise amounts to dope the silicon, thus changing it into p-type or n-type silicon, with different electronic properties. A precisely oriented rod-mounted seed crystal is dipped into the molten silicon. The seed crystal's rod is slowly pulled upwards and rotated simultaneously. By precisely controlling the temperature gradients, rate of pulling and speed of rotation, it is possible to extract a large, single-crystal, cylindrical ingot from the melt. Occurrence of unwanted instabilities in the melt can be avoided by investigating and visualizing the temperature and velocity fields during the crystal growth process. This process is normally performed in an inert atmosphere, such as argon, in an inert chamber, such as quartz. Crystal sizes Due to efficiencies of scale, the semiconductor industry often uses wafers with standardized dimensions, or common wafer specifications. Early on, boules were small, a few centimeters wide. With advanced technology, high-end device manufacturers use 200 mm and 300 mm diameter wafers. Width is controlled by precise control of temperature, speeds of rotation, and the speed at which the seed holder is withdrawn.
The crystal ingots from which wafers are sliced can be up to 2 metres in length, weighing several hundred kilograms. Larger wafers allow improvements in manufacturing efficiency, as more chips can be fabricated on each wafer, with lower relative loss, so there has been a steady drive to increase silicon wafer sizes. The next step up, 450 mm, was scheduled for introduction in 2018. Silicon wafers are typically about 0.2–0.75 mm thick, and can be polished to great flatness for making integrated circuits or textured for making solar cells. Incorporating impurities When silicon is grown by the Czochralski method, the melt is contained in a silica (quartz) crucible. During growth, the walls of the crucible dissolve into the melt and Czochralski silicon therefore contains oxygen at a typical concentration of 10¹⁸ cm⁻³. Oxygen impurities can have beneficial or detrimental effects. Carefully chosen annealing conditions can give rise to the formation of oxygen precipitates. These have the effect of trapping unwanted transition metal impurities in a process known as gettering, improving the purity of surrounding silicon. However, formation of oxygen precipitates at unintended locations can also destroy electrical structures. Additionally, oxygen impurities can improve the mechanical strength of silicon wafers by immobilising any dislocations which may be introduced during device processing. It was experimentally shown in the 1990s that the high oxygen concentration is also beneficial for the radiation hardness of silicon particle detectors used in harsh radiation environments (such as CERN's LHC/HL-LHC projects). Therefore, radiation detectors made of Czochralski- and magnetic-Czochralski silicon are considered to be promising candidates for many future high-energy physics experiments. It has also been shown that the presence of oxygen in silicon increases impurity trapping during post-implantation annealing processes. However, oxygen impurities can react with boron in an illuminated environment, such as that experienced by solar cells. This results in the formation of an electrically active boron–oxygen complex that detracts from cell performance. Module output drops by approximately 3% during the first few hours of light exposure. Mathematical form Impurity concentration in the final solid is given by C_f = C₀ (V_f/V₀)^(k−1), where C₀ and C_f are (respectively) the initial and final concentration, V₀ and V_f the initial and final volume, and k the segregation coefficient associated with impurities at the melting phase transition. This follows from the fact that impurities are removed from the melt when an infinitesimal volume freezes. See also Float-zone silicon References External links Czochralski doping process Industrial processes Semiconductor growth Crystals Science and technology in Poland Polish inventions Methods of crystal growth
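To make the normal-freezing relation above concrete, here is a small Python sketch (not from the article; the segregation coefficient and starting concentration are illustrative values only):

def remaining_melt_concentration(c0: float, v_fraction: float, k: float) -> float:
    """Impurity concentration when a fraction v_fraction of the melt remains,
    from C = C0 * (V/V0)**(k - 1)."""
    return c0 * v_fraction ** (k - 1)

c0 = 1e16   # starting impurity concentration, atoms/cm^3 (illustrative)
k  = 0.35   # segregation coefficient; k < 1 means the solid rejects impurity into the melt
for v in (1.0, 0.5, 0.1):
    print(f"melt fraction {v:.1f}: {remaining_melt_concentration(c0, v, k):.3e} atoms/cm^3")

With k < 1 the impurity concentrates in the shrinking melt, which is why the tail end of a Czochralski ingot is the least pure.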
Czochralski method
[ "Chemistry", "Materials_science" ]
1,275
[ "Crystallography", "Crystals", "Methods of crystal growth" ]
175,075
https://en.wikipedia.org/wiki/Yuga
A yuga, in Hinduism, is generally used to indicate an age of time. In the Rigveda, a yuga refers to generations, a period of time (whether long or short), or a yoke (joining of two things). In the Mahabharata, the words yuga and kalpa (a day of Brahma) are used interchangeably to describe the cycle of creation and destruction. In post-Vedic texts, the words "yuga" and "age" commonly denote a catur-yuga (pronounced chatur yuga), a cycle of four world ages—for example, in the Surya Siddhanta and Bhagavad Gita (part of the Mahabharata)—unless expressly limited by the name of one of its minor ages: Krita (Satya) Yuga, Treta Yuga, Dvapara Yuga, or Kali Yuga. Etymology Yuga (Sanskrit: युग) means "a yoke" (joining of two things), "generations", or "a period of time" such as an age. Its archaic spelling is yug, with other forms of yugam and yuge, derived from yuj (Sanskrit: युज्, 'to join or yoke'), believed derived from Proto-Indo-European *yeug- ('to join or unite'). Meanings The term "yuga" has multiple meanings, including representing the number 4 and various periods of time. In early Indian astronomy, it referred to a five-year cycle starting with the conjunction of the sun and moon in the autumnal equinox. More commonly, "yuga" is used in the context of a catur-yuga, a cycle composed of four yugas. According to the Manusmriti, such a cycle starts with a Satya Yuga (4,000 years), followed by a Treta Yuga (3,000 years), a Dvapara Yuga (2,000 years), and ends with a Kali Yuga (1,000 years). According to the Vishnu Purana, each Mahayuga comprises a Satya Yuga (1,728,000 human years), a Treta Yuga (1,296,000 years), a Dvapara Yuga (864,000 years), and a Kali Yuga (432,000 years). Virtues According to the Manusmriti, the virtue (dharma) of human beings varies across the four yugas (ages). The text states: In the Krita Yuga, the virtue is austerity (tapas); in the Treta Yuga, it is knowledge (jnana); in the Dvapara Yuga, it is sacrifice (yajna); and in the Kali Yuga, it is charity (dāna). See also Hindu units of time Kalpa (day of Brahma) Manvantara (age of Manu) Pralaya (period of dissolution) Yuga Cycle (four yuga ages): Satya (Krita), Treta, Dvapara, and Kali List of numbers in Hindu scriptures Explanatory notes References External links Vedic Time System: Yuga Four Yugas Hindu astronomy Hindu philosophical concepts Time in Hinduism Units of time
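The two sets of durations quoted above share a 4:3:2:1 ratio; a tiny Python sketch (not from the article) makes the arithmetic explicit:

# The Vishnu Purana figures quoted above follow a 4:3:2:1 ratio on a base of
# 432,000 human years (the length of the Kali Yuga).
base = 432_000
yugas = {"Satya": 4 * base, "Treta": 3 * base, "Dvapara": 2 * base, "Kali": base}
print(yugas)                                     # 1,728,000 / 1,296,000 / 864,000 / 432,000
print("catur-yuga total:", sum(yugas.values()))  # 4,320,000 human years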
Yuga
[ "Physics", "Mathematics" ]
666
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
175,119
https://en.wikipedia.org/wiki/Tire%20iron
A tire iron (also tire lever or tire spoon) is a specialized metal or plastic tool used in working with tires. Tire irons have not been in common use for automobile tires since the shift to the use of tubeless tires in the late 1950s. Bicycle tire irons are still in use for those tires which have a separate inner tube, and can have a hooked C-shape cut into one end of the iron so that it may be hooked on a bicycle spoke to hold it in place. Description and use Tire irons, which usually come in pairs or threes, are used to pry the edge of a tire away from the rim of the wheel it has been mounted on. After one iron has pried a portion of the tire from its wheel, it is held in position while a second iron is applied further along the tire to pry more of the tire away from the wheel. This allows enough of the tire to be separated so that the first iron can be removed, and used again on the far side of the other iron. Alternating in this way, a person can work all the way around the tire to fully remove it from the wheel, in order to reach the tube that sits inside. In the first half of the 20th century, the tire iron became a colloquial byword for force, as in "I couldn't get rid of him with a pair of tire irons," and frequently appeared in cartoons in similar situations. The usage is now considered passé. Bicycle tire irons Tire irons for bicycles are usually referred to as "tire levers", as they are often made of plastic, not metal. Tire levers for bicycle tires have one end that is tapered and slightly curved. The other end is usually hooked so that it can be hooked around a spoke to keep the tire bead free of the rim at one point, allowing a second lever to be manipulated forward, progressively loosening a larger segment of the tire bead from the rim. A common feature of tire levers is the lack of sharp edges. The slightest pinch of an inner tube by a lever can weaken or puncture the tube. It is good practice to examine a set of tire levers for any sharp edges and to file them smooth and round. Another problem, though less critical, is that a steel lever would scratch aluminum rims. Classically, tire levers were made of metal. However, plastic ones are now manufactured which are even less sharp and less likely to puncture the tube. There are also some single-lever varieties, which can be inserted under the bead at one point then quickly pushed around the rim to pop the bead off. Tire levers are not necessary or desirable in all cases. In some cases, the tire can be reinserted on the rim, and sometimes removed from the rim, without the use of tire levers. This reduces the chance of puncture caused by pinching the tube between the rim and the tire bead. Sometimes they are used to fit the tire back on, but this can be done without the levers. See also Bead breaker Crowbar Lug wrench References External links Bicycle tools Tires Mechanical hand tools
Tire iron
[ "Physics" ]
644
[ "Mechanics", "Mechanical hand tools" ]
175,142
https://en.wikipedia.org/wiki/Cremation
Cremation is a method of final disposition of a dead body through burning. Cremation may serve as a funeral or post-funeral rite and as an alternative to burial. In some countries, including India, Nepal, and Syria, cremation on an open-air pyre is an ancient tradition. Starting in the 19th century, cremation was introduced or reintroduced into other parts of the world. In modern times, cremation is commonly carried out with a closed furnace (cremator), at a crematorium. Cremation leaves behind an average of about 2.4 kg of remains known as ashes or cremains. This is not all ash but includes unburnt fragments of bone mineral, which are commonly ground into powder. They are inorganic and inert, and thus do not constitute a health risk and may be buried, interred in a memorial site, retained by relatives or scattered in various ways. History Ancient Cremation dates from at least 17,000 years ago in the archaeological record, with the Mungo Lady, the remains of a partly cremated body found at Lake Mungo, Australia. Alternative death rituals which emphasize one method of disposal – burial, cremation, or exposure – have gone through periods of preference throughout history. In the Middle East and Europe, both burial and cremation are evident in the archaeological record in the Neolithic era. Cultural groups had their own preferences and prohibitions. The ancient Egyptians developed an intricate transmigration-of-soul theology, which prohibited cremation. This was also widely adopted by Semitic peoples. The Babylonians, according to Herodotus, embalmed their dead. Phoenicians practiced both cremation and burial. From the Cycladic civilization in 3000 BCE until the Sub-Mycenaean era in 1200–1100 BCE, Greeks practiced burial. Cremation appeared around the 12th century BCE, probably influenced by Anatolia. Until the Christian era, when inhumation again became the only burial practice, both combustion and inhumation had been practiced, depending on the era and location. In Rome's earliest history, both inhumation and cremation were in common use among all classes. Around the mid-Republic, inhumation was almost exclusively replaced by cremation, with some notable exceptions, and remained the most common funerary practice until the middle of the Empire, when it was almost entirely replaced by inhumation. In Europe, there are traces of cremation dating to the Early Bronze Age (c. 2000 BCE) in the Pannonian Plain and along the middle Danube. The custom became dominant throughout Bronze Age Europe with the Urnfield culture (from c. 1300 BCE). In the Iron Age, inhumation again becomes more common, but cremation persisted in the Villanovan culture and elsewhere. Homer's account of Patroclus' burial describes cremation with subsequent burial in a tumulus, similar to Urnfield burials, and qualifies as the earliest description of cremation rites. This may be an anachronism, as during Mycenaean times burial was generally preferred, and Homer may have been reflecting the more common use of cremation at the time the Iliad was written, centuries later. Criticism of burial rites is a common form of aspersion by competing religions and cultures, including the association of cremation with fire sacrifice or human sacrifice. Hinduism and Jainism are notable for not only allowing but prescribing cremation. Cremation in India is first attested in the Cemetery H culture (from c. 1900 BCE), considered the last phase of Indus Valley Civilisation and the beginning of the Vedic civilization.
The Rigveda contains a reference to the emerging practice, in RV 10.15.14, where the forefathers "both cremated (agnidagdhá-) and uncremated (ánagnidagdha-)" are invoked. Cremation remained common, but not universal, in both ancient Greece and ancient Rome. According to Cicero, burial was considered the more archaic rite in Rome. The rise of Christianity saw an end to cremation in Europe, though it may have already been in decline. In early Roman Britain, cremation was usual but diminished by the 4th century. It then reappeared in the 5th and 6th centuries during the migration era, when sacrificed animals were sometimes included on the pyre, and the dead were dressed in costume and with ornaments for the burning. That custom was also very widespread among the Germanic peoples of the northern continental lands from which the Anglo-Saxon migrants are supposed to have been derived, during the same period. These ashes were usually thereafter deposited in a vessel of clay or bronze in an "urn cemetery". The custom again died out with the Christian conversion of the Anglo-Saxons or Early English during the 7th century, when Christian burial became general. Middle Ages In parts of Europe, cremation was forbidden by law, and even punishable by death if combined with Heathen rites. Cremation was sometimes used by Catholic authorities as part of punishment for accused heretics, which included burning at the stake. For example, the body of John Wycliff was exhumed years after his death and burned to ashes, with the ashes thrown in a river, explicitly as a posthumous punishment for his denial of the Roman Catholic doctrine of transubstantiation. The first to advocate for the use of cremation was the physician Sir Thomas Browne in Urne Buriall (1658), which interpreted cremation as a means of oblivion and plainly states that "there is no antidote against the Opium of time...". Honoretta Brooks Pratt became the first recorded cremated European individual in modern times when she died on 26 September 1769 and was illegally cremated at the burial ground on Hanover Square in London. Reintroduction In Europe, a movement to reintroduce cremation as a viable method for body disposal began in the 1870s. This was made possible by the invention of new furnace technology and contact with eastern cultures that practiced it. At the time, many proponents believed in the miasma theory, and that cremation would reduce the "bad air" that caused diseases. These movements were associated with secularism and gained a following in cultural and intellectual circles. In Italy, the movement was associated with anti-clericalism and Freemasonry, whereas these were not major themes of the movement in Britain. In 1869, the idea was presented to the Medical International Congress of Florence by Professors Coletti and Castiglioni "in the name of public health and civilization". In 1873, Professor Paolo Gorini of Lodi and Professor Ludovico Brunetti of Padua published reports of practical work they had conducted. A model of Brunetti's cremating apparatus, together with the resulting ashes, was exhibited at the Vienna Exposition in 1873 and attracted great attention. Meanwhile, Sir Charles William Siemens had developed his regenerative furnace in the 1850s. His furnace operated at a high temperature by using regenerative preheating of fuel and air for combustion. In regenerative preheating, the exhaust gases from the furnace are pumped into a chamber containing bricks, where heat is transferred from the gases to the bricks.
The flow of the furnace is then reversed so that fuel and air pass through the chamber and are heated by the bricks. Through this method, an open-hearth furnace can reach temperatures high enough to melt steel, and this process made cremation an efficient and practical proposal. Charles's nephew, Carl Friedrich von Siemens, perfected the use of this furnace for the incineration of organic material at his factory in Dresden. The radical politician, Sir Charles Wentworth Dilke, took the corpse of his dead wife there to be cremated in 1874. The efficient and cheap process brought about the quick and complete incineration of the body and was a fundamental technical breakthrough that finally made industrial cremation a practical possibility. The first crematorium in the Western world opened in Milan in 1876. Milan's "Crematorium Temple" was built in the Monumental Cemetery. The building still stands but ceased to be operational in 1992. Sir Henry Thompson, 1st Baronet, a surgeon and Physician to Queen Victoria, had seen Gorini's cremator at the Vienna Exhibition and had returned home to become the first and chief promoter of cremation in England. His main reason for supporting cremation was that "it was becoming a necessary sanitary precaution against the propagation of disease among a population daily growing larger in relation to the area it occupied". In addition, he believed, cremation would prevent premature burial, reduce the expense of funerals, spare mourners the necessity of standing exposed to the weather during interment, and urns would be safe from vandalism. He joined with other proponents to form the Cremation Society of Great Britain in 1874. They founded the United Kingdom's first crematorium in Woking, with Gorini travelling to England to assist with the installation of a cremator. They first tested it on 17 March 1879 with the body of a horse. After protests and an intervention by the Home Secretary, Sir Richard Cross, their plans were put on hold. In 1884, the Welsh Neo-Druidic priest William Price was arrested and put on trial for attempting to cremate his son's body. Price successfully argued in court that while the law did not state that cremation was legal, it also did not state that it was illegal. The case set a precedent that allowed the Cremation Society to proceed. In 1885, the first official cremation in the United Kingdom took place in Woking. The deceased was Jeanette Pickersgill, a well-known figure in literary and scientific circles. By the end of the year, the Cremation Society of Great Britain had overseen 2 more cremations, a total of 3 out of 597,357 deaths in the UK that year. In 1888, 28 cremations took place at the venue. In 1891, Woking Crematorium added a chapel, pioneering the concept of a crematorium being a venue for funerals as well as cremation. Other early crematoria in Europe were built in 1878 in the town of Gotha in Germany and later in Heidelberg in 1891. The first modern crematory in the U.S. was built in 1876 by Francis Julius LeMoyne after hearing about its use in Europe. Like many early proponents, he was motivated by a belief it would be beneficial for public health. Before LeMoyne's crematory closed in 1901, it had performed 42 cremations. Other countries that opened their first crematorium included Sweden (1887 in Stockholm), Switzerland (1889 in Zurich) and France (1889 at Père Lachaise, Paris). Western spread Some of the various Protestant churches came to accept cremation.
In Anglican and Nordic Protestant countries, cremation gained acceptance (though it did not yet become the norm), first by the upper classes and cultural circles, and then by the rest of the population. In 1905, Westminster Abbey interred ashes for the first time; by 1911 the Abbey was expressing a preference for interring ashes. The 1908 Catholic Encyclopedia was critical of these developments, referring to them as a "sinister movement" and associating them with Freemasonry, although it said that "there is nothing directly opposed to any dogma of the Church in the practice of cremation." In the U.S. only about one crematory per year was built in the late 19th century. As embalming became more widely accepted and used, crematories lost their sanitary edge. In response, crematory operators sought to make cremation beautiful, building crematories with stained-glass windows and marble floors with frescoed walls. Australia also started to establish modern cremation movements and societies. Australians had their first purpose-built modern crematorium and chapel in the West Terrace Cemetery in the South Australian capital of Adelaide in 1901. This small building, resembling the buildings at Woking, remained largely unchanged from its 19th-century style and was in full operation until the late 1950s. The oldest operating crematorium in Australia is at Rookwood Cemetery, in Sydney. It opened in 1925. In the Netherlands, the foundation of the Association for Optional Cremation in 1874 ushered in a long debate about the merits and demerits of cremation. Laws against cremation were challenged and invalidated in 1915 (two years after the construction of the first crematorium in the Netherlands), though cremation did not become legally recognised until 1955. World War II During World War II (1939–45), Nazi Germany used specially built furnaces in at least six extermination camps throughout occupied Poland, including at Auschwitz-Birkenau, Chełmno, Belzec, Majdanek, Sobibor and Treblinka, where the bodies of those murdered by gassing were disposed of using incineration. The efficiency of industrialised killing of Operation Reinhard during the most deadly phase of the Holocaust produced too many corpses, therefore the crematoria manufactured to SS specifications were put into use in all of them to handle the disposals around the clock, day and night. The Vrba–Wetzler report offers a description of these installations. The Holocaust furnaces were supplied by a number of manufacturers, with the best known and most common being Topf and Sons as well as Kori Company of Berlin, whose ovens were elongated to accommodate two bodies, slid inside from the back side. The ashes were taken out from the front side. Modern era In the 20th century, cremation gained varying degrees of acceptance in most Christian denominations. William Temple, the most senior bishop in the Church of England, was cremated after his death in office in 1944. The Roman Catholic Church accepted the practice more slowly. In 1963, at the Second Vatican Council, Pope Paul VI lifted the ban on cremation, and in 1966 allowed Catholic priests to officiate at cremation ceremonies. This is done on the condition that the ashes must be buried or interred, not scattered. Many countries where burial is traditional saw cremation rise to become a significant, if not the most common, way of disposing of a dead body. In the 1960s and 1970s, there was an unprecedented phase of crematorium construction in the United Kingdom and the Netherlands.
Since the 1960s, cremation has become more common than burial in several countries where the latter is traditional, including the United Kingdom (1968), Czechoslovakia (1980), Canada (early 2000s), the United States (2016) and Finland (2017). Factors cited include cheaper costs (especially a factor after the 2008 recession), growth in secular attitudes and declining opposition in some Christian denominations. Modern process The cremation occurs in a cremator, which is located at a crematorium or crematory. In many countries, the crematorium is a venue for funerals as well as cremation. A cremator is an industrial furnace able to generate temperatures high enough to ensure the disintegration of the corpse. Modern cremator fuels include oil, natural gas, propane and, in Hong Kong, coal gas. Modern cremators automatically monitor their interior to tell when the cremation process is complete and have a spyhole so that an operator can see inside. The time required for cremation varies from body to body, the average being 90 minutes for an adult body. The chamber where the body is placed is called a cremation chamber or retort and is lined with heat-resistant refractory bricks. The refractory lining is built up in several layers. The outermost layer is usually simply an insulation material, e.g., mineral wool. Inside it is typically a layer of insulation brick, mostly calcium silicate. Heavy-duty cremators are usually designed with two layers of fire bricks inside the insulation layer. The layer of fire bricks in contact with the combustion process protects the outer layer and must be replaced from time to time. The body is generally required to be inside a coffin or a combustible container. This allows the body to be quickly and safely slid into the cremator and reduces health risks to the operators. The coffin or container is inserted (charged) into the cremator as quickly as possible to avoid heat loss. Some crematoria allow relatives to view the charging. This is sometimes done for religious reasons, such as in traditional Hindu and Jain funerals, and is also customary in Japan. Body container In the United States, federal law does not dictate any container requirements for cremation. Certain states require an opaque or non-transparent container for all cremations. This can be a simple corrugated cardboard box or a wooden casket (coffin). Another option is a cardboard box that fits inside a wooden shell designed to look like a traditional casket. After the funeral service, the box is removed from the shell before cremation, permitting the shell to be re-used. In the United Kingdom, the body is not removed from the coffin and is not placed into a container as described above; the body is cremated with the coffin, which is why all British coffins that are to be used for cremation must be combustible. The Code of Cremation Practice forbids the opening of the coffin once it has arrived at the crematorium, and rules stipulate that it must be cremated within 72 hours of the funeral service. Therefore, in the United Kingdom, bodies are cremated in the same coffin that they are placed in at the undertaker's, although the regulations allow the use of an approved "cover" during the funeral service. For this reason, it is recommended that jewellery be removed before the coffin is sealed. 
When cremation is finished, the remains are passed through a magnetic field to remove any metal, which will be interred elsewhere in the crematorium grounds or, increasingly, recycled. The ashes are then placed in a cremulator, which grinds the remains down to a finer texture, before being given to relatives or loved ones or scattered in the crematorium grounds where facilities exist. In Germany, the process is mostly similar to that of the United Kingdom. The body is cremated in the coffin. A piece of fire clay with a number on it is used for identifying the remains of the dead body after burning. The remains are then placed in a container called an ash capsule, which generally is put into a cinerary urn. In Australia, reusable or cardboard coffins are rare, with only a few manufacturers now supplying them. For low cost, a plain particle-board coffin (known in the trade as a "chippie", "shipper" or "pyro") can be used. Handles (if fitted) are plastic and approved for use in a cremator. Cremations can be "delivery only", with no preceding chapel service at the crematorium (although a church service may have been held), or preceded by a service in one of the crematorium chapels. Delivery-only allows crematoria to schedule cremations to make the best use of the cremators, perhaps by holding the body overnight in a refrigerator, allowing a lower fee to be charged. Burning and ash collection The box containing the body is placed in the retort and incinerated at high temperature. During the cremation process, the greater portion of the body (especially the organs and other soft tissues) is vaporized and oxidized by the intense heat; the gases released are discharged through the exhaust system. Jewelry, such as necklaces, wrist-watches and rings, is ordinarily removed before cremation and returned to the family. Certain implanted devices are required to be removed: pacemakers and other medical devices can cause large, dangerous explosions. Contrary to popular belief, the cremated remains are not ashes in the usual sense. After the incineration is completed, the dry bone fragments are swept out of the retort and pulverised by a machine called a cremulator (essentially a high-capacity, high-speed pulveriser) to process them into "ashes" or "cremated remains", although pulverisation may also be performed by hand. This leaves the bone with a fine, sand-like texture and color, able to be scattered without the need for mixing with any foreign matter, though the size of the grain varies depending on the cremulator used. The mean weight of cremated remains differs by sex, adult males' remains weighing somewhat more on average than adult females'. There are various types of cremulator, including rotating devices, grinders and older models using heavy metal balls; the grinding process typically takes about 20 seconds. In most Asian countries, the bones are not pulverised unless requested beforehand. When not pulverised, the bones are collected by the family and stored as one might do with ashes. The appearance of cremated remains after grinding is one of the reasons they are called ashes, although a non-technical term sometimes used is "cremains", a portmanteau of "cremated" and "remains". (The Cremation Association of North America prefers that the word "cremains" not be used to refer to human cremated remains, reasoning that "cremains" suggests less connection with the deceased, whereas a loved one's "cremated remains" has a more identifiable human connection.) 
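To give a sense of scale, here is a rough worked example. The roughly 3.5% fraction is the figure quoted in the "Ash weight and composition" section below; the 70 kg body mass is an assumed illustrative value, not a figure from this article:

```latex
% Illustrative only: m_body = 70 kg is an assumed adult body mass;
% the ~3.5% fraction is the rough figure quoted in the next section.
m_{\text{remains}} \approx 0.035 \times m_{\text{body}}
                   = 0.035 \times 70\ \text{kg}
                   \approx 2.5\ \text{kg}
```

For a child, the quoted figure of roughly 2.5% of body mass would give correspondingly less.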
After final grinding, the ashes are placed in a container, which can be anything from a simple cardboard box to a decorative urn. The default container used by most crematoria, when nothing more expensive has been selected, is usually a hinged, snap-locking plastic box. Ash weight and composition Cremated remains are mostly dry calcium phosphates with some minor minerals, such as salts of sodium and potassium. Sulfur and most carbon are driven off as oxidized gases during the process, although about 1–4% of carbon remains as carbonate. The ash remaining represents very roughly 3.5% of the body's original mass (2.5% in children). Because the weight of dry bone fragments is so closely connected to skeletal mass, it varies greatly from person to person. Many changes in body composition (such as fat and muscle loss or gain) do not affect the weight of the cremated remains, so that weight can be predicted more closely from the person's height and sex (which predict skeletal weight) than from the person's simple body weight. Adults' ashes vary over a range of weights, with women's ashes generally at the lower end of the range and men's at the upper end. Bones are not all that remain after cremation. There may be melted metal lumps from missed jewellery, casket furniture, dental fillings and surgical implants such as hip replacements. Breast implants do not have to be removed before cremation. Some medical devices, such as pacemakers, may need to be removed before cremation to avoid the risk of explosion. Large items such as titanium hip replacements (which tarnish but do not melt) or casket hinges are usually removed before processing, as they may damage the processor. (If they are missed at first, they must ultimately be removed before processing is complete, as items such as titanium joint replacements are far too durable to be ground.) Implants may be returned to the family but are more commonly sold as ferrous/non-ferrous scrap metal. After the remains are processed, smaller bits of metal such as tooth fillings and rings (commonly known as "gleanings") are sieved out and may later be interred in common, consecrated ground in a remote area of the cemetery. They may also be sold as precious metal scrap. Retention or disposal of remains Cremated remains are returned to the next of kin in different manners according to custom and country. In the United States, the cremated remains are almost always sealed in a thick watertight polyethylene bag within a hard snap-top rectangular plastic container, which is labeled with a printed paper label. This basic container may be placed within a further cardboard box or velvet sack, or within an urn if the family has already purchased one. An official certificate of cremation prepared under the authority of the crematorium accompanies the remains, together with, if required by law, the permit for disposition of human remains, which must stay with the cremated remains. Cremated remains can be kept in an urn, stored in a special memorial building (columbarium), buried in the ground at many locations, or scattered on a special field or mountain or at sea. In addition, there are several services that will scatter the cremated remains in a variety of ways and locations. Examples include release via helium balloon, through fireworks, shot from shotgun shells, or scattering by boat, aeroplane or drone. 
One service sends a lipstick-tube-sized sample of the cremated remains into low Earth orbit, where it remains for years (but not permanently) before re-entering the atmosphere. Some companies offer a service to turn part of the cremated remains into synthetic diamonds, which can then be made into jewelry. This "cremation jewelry" is also known as funeral jewelry, remembrance jewelry or memorial jewelry. A portion of the cremated remains may be retained in a specially designed locket, known as cremation jewelry, or even blown into special glass keepsakes and glass orbs. Cremated remains may also be incorporated, with urn and cement, into part of an artificial reef, or mixed into paint and made into a portrait of the deceased. Some individuals use a very small amount of the remains in tattoo ink for remembrance portraits. Cremated remains can be scattered in national parks in the United States with a special permit. They can also be scattered on private property with the permission of the owner. The cremated remains may also be entombed. Most cemeteries will grant permission for the burial of cremated remains in occupied cemetery plots that have already been purchased or are in use by the family, without any additional charge or oversight. Ashes are alkaline, and in some areas, such as Snowdon in Wales, environmental authorities have warned that the frequent scattering of ashes can change the nature of the soil and may affect the ecology. The final disposition depends on the personal preferences of the deceased as well as their cultural and religious beliefs. Some religions will permit the cremated remains to be sprinkled or retained at home. Some religions, such as Roman Catholicism, prefer to either bury or entomb the remains. Hinduism obliges the closest male relative (son, grandson, etc.) of the deceased to immerse the cremated remains in the holy river Ganges, preferably at one of the holy cities of Triveni Sangam (Allahabad), Varanasi or Haridwar in India. The Sikhs immerse the remains in the Sutlej, usually at Kiratpur Sahib. In southern India, the ashes are immersed in the river Kaveri at Paschima Vahini in Srirangapattana, at a stretch where the river flows from east to west, depicting the life of a human being from sunrise to sunset. In Japan and Taiwan, the remaining bone fragments are given to the family and are used in a burial ritual before final interment. Reasons Aside from religious reasons (discussed below), some people prefer cremation to traditional burial for personal reasons. For some, the thought of a long, slow decomposition process is unappealing, and they prefer cremation because it disposes of the body relatively quickly. Others view cremation as a way of simplifying their funeral process, seeing a traditional ground burial as an unneeded complication, and thus choose cremation to make their services as simple as possible. Cremation is also simpler to plan than a burial funeral: a burial funeral requires more transportation services for the body, as well as embalming or other preservation, and the purchase of a casket, a headstone and a grave plot, plus a grave opening-and-closing fee and mortician fees. A cremation funeral requires only the transportation of the body to a crematorium, the cremation itself and a cremation urn; a rough cost sketch follows below. 
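That planning comparison can be made concrete with a minimal sketch. The line items are the ones named above; every dollar amount is a placeholder assumption chosen only to make the example run, not a price from this article:

```python
# Hypothetical prices, for illustration only; the line items are those
# named in the text, the dollar amounts are assumptions.
burial_items = {
    "body transportation": 600,
    "embalming and preservation": 700,
    "casket": 2500,
    "headstone": 1500,
    "grave plot": 1200,
    "grave opening and closing fee": 1000,
    "mortician fees": 2000,
}
cremation_items = {
    "transport to crematorium": 300,
    "cremation": 350,
    "cremation urn": 150,
}

for name, items in (("burial", burial_items), ("cremation", cremation_items)):
    # Count the items to plan and total the assumed prices.
    print(f"{name}: {len(items)} items to plan, ${sum(items.values())} total")
```

Whatever the actual prices, the shorter list is the point: fewer line items means less to arrange.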
The cost factor tends to make cremation attractive. Generally speaking, cremation is cheaper than a traditional burial service, especially if direct cremation (also known as bare cremation) is chosen, in which the body is cremated as soon as legally possible without any sort of service. Even so, cremation is still relatively expensive for some, especially as a large amount of fuel is required to perform it. Methods of reducing fuel consumption and cost include using different fuels (e.g., natural gas or propane rather than wood) and using an enclosed incinerator (retort) rather than an open fire. For surviving kin, cremation is also attractive because the remains are easily portable: survivors relocating to another city or country can transport the remains of their loved ones, with the ultimate goal of being interred or scattered together. Environmental impact Despite being an obvious source of carbon emissions, cremation has environmental advantages over burial, depending on local practice. Studies by Elisabeth Keijzer for the Netherlands Organisation for Applied Scientific Research found that cremation has less environmental impact than a traditional burial (the study did not address natural burials), while the newer method of alkaline hydrolysis (sometimes called green cremation or resomation) had less impact than both. The studies were based on Dutch practice; American crematoria are more likely to emit mercury but less likely to burn hardwood coffins. Keijzer's studies also found that a cremation or burial accounts for only about a quarter of a funeral's environmental impact; the carbon emissions of people travelling to the funeral are far greater (see the illustrative sketch below). Each cremation requires a substantial quantity of fuel and releases carbon dioxide into the atmosphere; the roughly 1 million bodies cremated annually in the United States produce more CO2 pollution than 22,000 average American homes generate in a year. The environmental impact may be reduced by using cremators for longer periods and by relaxing the requirement for a cremation to take place on the same day the coffin is received, which reduces the use of fossil fuel and hence carbon emissions. Such measures are making cremation more environmentally friendly. Some funeral and crematorium owners offer a carbon-neutral funeral service incorporating efficient-burning coffins made from lightweight recycled composite board. Burial is a known source of certain environmental contaminants, the major ones being formaldehyde and the coffin itself. Cremation can also release contaminants, such as mercury from dental fillings. In some countries, such as the United Kingdom, the law now requires cremators to be fitted with abatement equipment (filters) that removes serious pollutants such as mercury. Another environmental concern is that traditional burial takes up a great deal of space. In a traditional burial, the body is buried in a casket made from a variety of materials. In the United States, the casket is often placed inside a concrete vault or liner before burial in the ground. While individually this may not take much room, combined with other burials it can over time cause serious space concerns. Many cemeteries, particularly in Japan and Europe as well as in larger cities, have run out of permanent space. 
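Returning to Keijzer's finding above that the disposal step is only a fraction of a funeral's total footprint, a minimal sketch makes the arithmetic concrete. Every constant here is an assumption for illustration; the article's own per-cremation fuel and emission figures are not reproduced:

```python
# All constants are assumptions for illustration, not figures from the text.
CO2_PER_CREMATION_KG = 240     # assumed emission of the cremation itself
ATTENDEES = 50                 # assumed number of mourners
ROUND_TRIP_KM = 80             # assumed travel distance per attendee
CO2_PER_CAR_KM_KG = 0.15       # assumed passenger-car emission factor

# Total travel emissions: attendees x distance x emission factor.
travel_kg = ATTENDEES * ROUND_TRIP_KM * CO2_PER_CAR_KM_KG  # 600 kg
print(f"cremation itself: {CO2_PER_CREMATION_KG} kg CO2")
print(f"attendee travel:  {travel_kg:.0f} kg CO2")
# Under these assumptions, travel emits about 2.5 times as much CO2 as
# the cremation itself, consistent with the claim that the disposal step
# accounts for only a fraction of a funeral's environmental impact.
```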
Space concerns are acute in Tokyo, for example, where traditional burial plots are extremely scarce and expensive, and in London, where a space crisis led Harriet Harman to propose reopening old graves for "double-decker" burials. Some cities in Germany do not have plots for sale, only for lease; when the lease expires, the remains are disinterred and a specialist bundles the bones, inscribes the forehead of the skull with the information that was on the headstone, and places the remains in a special crypt. In Singapore, cremation is preferred by most Singaporeans because burial there is limited to 15 years. Religious views Christianity In Christian countries and cultures, cremation has historically been discouraged and viewed as a desecration of God's image and as interference with the resurrection of the dead taught in Scripture. It is now acceptable to some denominations, since a literal interpretation of Scripture is less common in modern reformist traditions. Catholicism Early Christians preferred to bury the dead rather than to cremate the remains, as was common in Roman culture. The early church carried on Judaism's respect for the human body as being created in God's image and followed its practice of speedy interment, in hope of the future resurrection of all the dead. The Roman catacombs and the medieval veneration of relics of Roman Catholic saints attest to this preference. For them, the body was not a mere receptacle for a spirit that was the real person, but an integral part of the human person. They looked on the body as sanctified by the sacraments and itself the temple of the Holy Spirit, thus requiring disposal in a way that honours and reveres it, and they saw many early practices involved with the disposal of dead bodies as pagan in origin or an insult to the body. The idea that cremation might interfere with God's ability to resurrect the body was refuted in the 2nd-century Octavius of Minucius Felix, which says: "Every body, whether it is dried up into dust, or is dissolved into moisture, or is compressed into ashes, or is attenuated into smoke, is withdrawn from us, but it is reserved for God in the custody of the elements. Nor, as you believe, do we fear any loss from sepulture, but we adopt the ancient and better custom of burying in the earth." The related practice of boiling bodies to remove flesh from the bones was punished with excommunication in a 1300 decree of Pope Boniface VIII. While there was a clear and prevailing preference for burial, there was no general Church law forbidding cremation until 1886. In medieval Europe, cremation was practiced mainly in situations where there were multitudes of corpses simultaneously present, such as after a battle, pestilence or famine, and where there was an imminent fear of disease spreading from the corpses, since digging individual graves would take too long and body decomposition would begin before all the corpses had been interred. Beginning in the Middle Ages, and even more so in the 18th century and later, non-Christian rationalists and classicists began to advocate cremation again, sometimes as a statement denying the resurrection or the afterlife, although the pro-cremation movement often took care to address these concerns. Sentiment within the Catholic Church against cremation hardened in the face of the association of cremation with "professed enemies of God." 
When Masonic groups advocated cremation as a means of rejecting Christian belief in the resurrection, the Holy See forbade Catholics to practise cremation in 1886. The 1917 Code of Canon Law incorporated this ban. In 1963, recognizing that cremation was generally being sought for practical purposes and not as a denial of bodily resurrection, the Holy See permitted the choice of cremation in some circumstances. The current 1983 Code of Canon Law states: "The Church earnestly recommends the pious custom of Christian burial be retained; but it does not entirely forbid cremation, except if this is chosen for reasons which are contrary to Christian teaching." There are no universal rules governing Catholic funeral rites in connection with cremation, but episcopal conferences have laid down rules for various countries. Of these, perhaps the most elaborate are those established, with the necessary confirmation of the Holy See, by the United States Conference of Catholic Bishops and published as Appendix II of the United States edition of the Order of Christian Funerals. Although the Holy See has in some cases authorized bishops to grant permission for funeral rites to be carried out in the presence of cremated remains, it is preferred that the rites be carried out in the presence of a still-intact body. Practices that show insufficient respect for the ashes of the dead, such as turning them into jewelry or scattering them, are forbidden for Catholics, but burial on land or at sea, or enclosure in a niche or columbarium, is now acceptable. Anglicanism and Lutheranism In 1917, Volume 6 of the American Lutheran Survey stated that "The Lutheran clergy as a rule refuse" and that "Episcopal pastors often take a stand against it." Indeed, in the 1870s, the Anglican Bishop of London stated that the practice of cremation would "undermine the faith of mankind in the doctrine of the resurrection of the body, hasten rejection of a Scriptural worldview and so bring about a most disastrous social revolution." George Henry Gerberding took a similar stand in The Lutheran Pastor. Some Protestant churches nevertheless welcomed the use of cremation at a much earlier date than the Catholic Church, though pro-cremation sentiment was not unanimous among Protestants, as some retained a literal interpretation of Scripture. The first crematoria in the Protestant countries were built in the 1870s, and in 1908 the Dean and Chapter of Westminster Abbey, one of the most famous Anglican churches, required that remains be cremated for burial in the abbey's precincts. Today, "scattering" or "strewing" is an acceptable practice in some Protestant denominations, and some churches have their own "garden of remembrance" on their grounds in which remains can be scattered. Some denominations, such as the Lutheran churches in Scandinavia, favour the urns being buried in family graves; a family grave can thus contain the urns of many generations, as well as the urns of spouses and loved ones. Methodism An 1898 Methodist tract titled Immortality and Resurrection noted that "burial is the result of a belief in the resurrection of the body, while cremation anticipates its annihilation." The Methodist Review noted in 1874 that "Three thoughts alone would lead us to suppose that the early Christians would have special care for their dead, namely, the essential Jewish origin of the Church; the mode of burial of their founder; and the doctrine of the resurrection of the body, so powerfully urged by the apostles, and so mighty in its influence on the primitive Christians. 
From these considerations, the Roman custom of cremation would be most repulsive to the Christian mind." Since at least 1992, the United Methodist Church has had no specific official statement that either endorses or condemns cremation, leaving the choice to individuals and families. Resources within the official ritual refer to the possible use of an urn and the interment of ashes. Eastern Orthodox and other opposition Some branches of Christianity oppose cremation entirely, including non-mainstream Protestant groups and the Orthodox churches. The Eastern Orthodox and Oriental Orthodox Churches forbid cremation. Exceptions are made for circumstances where it cannot be avoided (when civil authority demands it, in the aftermath of war, or during epidemics) or if it is sought for good cause, such as the discovery of a body already in a state of decomposition. But when a cremation is specifically and willfully chosen for no good cause by the one who is deceased, he or she is not permitted a funeral in the church and may also be permanently excluded from burial in a Christian cemetery and from liturgical prayers for the departed. In Orthodoxy, cremation is perceived as a rejection of the temple of God and of the dogma of the general resurrection. Most independent Bible churches, free churches, Holiness churches and those of Anabaptist faiths will not practice cremation. As one example, the Church of God (Restoration) forbids the practice of cremation, believing, as the early Church did, that it is a pagan practice. Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints (LDS Church) has, in past decades, discouraged cremation without expressly forbidding it. In the 1950s, for example, Apostle Bruce R. McConkie wrote that "only under the most extraordinary and unusual circumstances" would cremation be consistent with LDS teachings. More recent LDS publications have provided instructions on how to dress the deceased who have received their temple endowments (and thus wear temple garments) prior to cremation, for those wishing to do so or in countries where the law requires cremation. Except where required by law, the family of the deceased may decide whether the body should be cremated, though the Church "does not normally encourage cremation." Hinduism Indian religions such as Hinduism, Buddhism, Jainism and Sikhism practice cremation. The founder of Buddhism, the Buddha, was himself cremated. A dead adult Hindu is mourned with a cremation, while a dead child is typically buried. The rite of passage is performed in harmony with the Hindu religious view that the microcosm of all living beings is a reflection of the macrocosm of the universe. The soul (Atman, Brahman) is the immortal essence that is released at the Antyesti ritual, but both the body and the universe are vehicles and transitory in various schools of Hinduism. They consist of five elements: air, water, fire, earth and space. The last rite of passage returns the body to the five elements and its origins. The roots of this belief are found in the Vedas, for example in the hymns of the Rigveda, section 10.16. The final rite in the case of the untimely death of a child is usually not cremation but burial. This is rooted in the Rigveda's section 10.18, where the hymns mourn the death of the child, praying to the deity Mrityu to "neither harm our girls nor our boys", and plead with the earth to cover and protect the deceased child as with soft wool. 
Ashes of the cremated bodies are usually spread in rivers, which are considered holy in Hindu practice. The Ganges is considered the holiest river, and Varanasi, situated on its banks, is regarded as the most sacred site for cremation. Balinese Balinese Hindu dead are generally buried inside the container for a period of time, which may exceed a month, so that the cremation ceremony (Ngaben) can occur on an auspicious day in the Balinese-Javanese calendar system ("Saka"). Additionally, if the departed was a court servant, a member of the court or a minor noble, the cremation can be postponed for up to several years to coincide with the cremation of their prince. Balinese funerals are very expensive, and the body may be interred until the family can afford the ceremony or until a group funeral is planned by the village or family, when costs will be lower. The purpose of burying the corpse is for the decay process to consume its fluids, which allows for an easier, more rapid and more complete cremation. Islam Most Muslims believe Islam strictly forbids cremation. Its teaching is that cremation is not in line with the respect and dignity due to the deceased, and that Islam has specific rites for the treatment of the body after death. Judaism The first reference to cremation in the Hebrew Bible is found in 1 Samuel 31, in which the dead bodies of Saul and his sons are burned and their bones then buried. Judaism has traditionally disapproved of cremation, seeing it as a rejection of the respect due to humans, who are created in the image of God. Judaism has also disapproved of the preservation of the dead by means of embalming and mummifying, as this involves mutilation and abuse of the corpse. Mummification was a practice of the ancient Egyptians, among whom the Israelites are said in the Torah to have lived as slaves. Through history and up to the philosophical movements of the current era, the Modern Orthodox, Orthodox, Haredi and Hasidic movements in Judaism have maintained the historical practice and a strict Biblical line against cremation, and disapprove of it, as Halakha (Jewish law) forbids it. This halakhic concern is grounded in the literal interpretation of Scripture, viewing the body as created in the image of God and upholding bodily resurrection as a core belief of traditional Judaism. This interpretation was occasionally opposed by some Jewish groups, such as the Sadducees, who denied resurrection. The Tanakh emphasizes burial as the normal practice, for instance in Devarim (Deuteronomy) 21:23, which specifically commands the burial of executed criminals; from this verse are derived both a positive command to bury a dead body and a negative command forbidding neglecting to do so. Some in the generally liberal Conservative Jewish movement also oppose cremation, some very strongly, seeing it as a rejection of God's design. During the 19th and early 20th centuries, as the Jewish cemeteries in many European towns became crowded and ran out of space, cremation in a few cases for the first time became an approved means of corpse disposal among the emerging liberal and Reform Jewish movements, in line with their general rejection of literal scripture interpretation and of traditional Torah ritual laws. Current liberal movements such as Reform Judaism still permit cremation, although burial remains the preferred option. 
The Central Conference of American Rabbis has issued a responsum stating that families are permitted to choose cremation, and that while Reform rabbis are allowed to discourage the practice, they are instructed not to refuse to officiate at cremations. In Israel, religious ritual events, including free burial and funeral services for all who die in the country, whether religiously observant or secular, are almost universally facilitated through the Rabbinate of Israel, an Orthodox organization following historical and traditional Jewish law. There were no formal crematories in Israel until 2004, when B&L Cremation Systems Inc. became the first crematory manufacturer to sell a retort there. In August 2007, an Orthodox youth group in Israel was accused of burning down the country's sole crematorium, seeing it as an affront to God. The crematorium was rebuilt by its owner and the retort replaced. Baháʼí Faith The Baháʼí Faith forbids cremation. A letter written on behalf of Shoghi Effendi to a National Spiritual Assembly states, "He feels that, in view of what 'Abdu'l-Bahá has said against cremation, the believers should be strongly urged, as an act of faith, to make provisions against their remains being cremated. Bahá'u'lláh has laid down as a law, in the Aqdas, the manner of Baháʼí burial, and it is so beautiful, befitting and dignified, that no believer should deprive himself of it." Wicca Both burial and cremation are practiced by Wiccans, and there is no set directive on how the body should be disposed of after death. Wiccans believe that the body is merely a shell for the spirit, so cremation is not viewed as irreverent or disrespectful. One tradition practiced by Wiccans is to mix the ashes from cremation with soil, which is then used to plant a tree. Zoroastrianism Traditionally, Zoroastrianism disavows both cremation and burial, to preclude pollution of fire or earth. The traditional method of corpse disposal is ritual exposure in a "Tower of Silence", but both burial and cremation are increasingly popular alternatives, and some contemporary adherents of the faith have opted for cremation. The Parsi-Zoroastrian singer Freddie Mercury of the group Queen was cremated after his death. Chinese Neo-Confucianism under Zhu Xi strongly discouraged the cremation of one's parents' corpses as unfilial. Han Chinese traditionally practiced burial and viewed cremation as taboo and a barbarian practice. Traditionally, only Buddhist monks in China practiced cremation, because ordinary Han Chinese detested it. Now, however, the officially atheist Communist Party enforces a strict cremation policy; exceptions are made for the Hui, who do not cremate their dead owing to Islamic beliefs. The minority Jurchen and their Manchu descendants originally practiced cremation as part of their culture. They adopted the practice of burial from the Han, but many Manchus continued to cremate their dead. Pets In Japan, more than 465 companion animal temples are in operation. These venues hold funerals and rituals for dead pets. In Australia, pet owners can purchase services to have their companion animal cremated and placed in a pet cemetery or taken home. The cost of pet cremation depends on the location, where the cremation is done and the time of cremation. The American Humane Society charges $110 for the cremation of a pet weighing under 22.5 kg (50 lb) and $145 for a pet weighing over 23 kg (51 lb). 
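As a small illustration of the tiered pricing just quoted, here is a minimal sketch. The two prices are the American Humane Society figures above; the exact boundary is an assumption, since the quoted tiers leave the 22.5–23 kg range (an artifact of the 50 lb / 51 lb conversion) unspecified:

```python
def pet_cremation_cost_usd(weight_kg: float) -> int:
    """Return the quoted price tier for a pet of the given weight.

    $110 and $145 are the American Humane Society figures quoted above;
    placing the cut-off exactly at 22.5 kg is an assumption, since the
    source leaves the 22.5-23 kg range unspecified.
    """
    return 110 if weight_kg < 22.5 else 145

print(pet_cremation_cost_usd(10.0))  # -> 110
print(pet_cremation_cost_usd(30.0))  # -> 145
```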
The cremated remains are available for the owner to pick up in seven to ten business days. Urns for companion animals range from $50 to $150. Though pet cremation has accelerated in recent years, Americans still bury their pets by a 2:1 ratio. Recent controversies Tri-State Crematory incident In early 2002, 334 corpses that were supposed to have been cremated in the previous few years at the Tri-State Crematory were found intact and decaying on the crematorium's grounds in the U.S. state of Georgia, having been dumped there by the crematorium's proprietor. Many of the corpses were decayed beyond identification. Some families had received "ashes" that were made of wood and concrete dust. Operator Ray Brent Marsh had 787 criminal charges filed against him. On 19 November 2004, Marsh pleaded guilty to all charges. He was sentenced to two 12-year prison sentences, one each from Georgia and Tennessee, to be served concurrently, and to probation for 75 years following his incarceration. Civil suits were filed against the Marsh family and a number of funeral homes that had shipped bodies to Tri-State; these suits were ultimately settled. The property of the Marsh family has been sold, but collection of the full $80-million judgment remains doubtful. Rates The cremation rate varies considerably across countries, with Japan reporting a 99.97% cremation rate while Romania reported a rate of 0.5% in 2018. The cremation rate in the United Kingdom has been increasing steadily, with the national average rising from 34.70% in 1960 to 78.10% in 2019. According to the National Funeral Directors Association, the cremation rate in the United States was 50.2% in 2016 and was expected (as of 2017) to increase to 63.8% by 2025 and 78.8% by 2035. See also Antyesti Burial at sea Burial in space Cremation in Japan Death Promession Resomation Sati Self-immolation Tissue digestion References External links The International Cremation Federation (ICF) Death customs Fire Incineration
Cremation
[ "Chemistry", "Engineering" ]
10,646
[ "Combustion engineering", "Incineration", "Combustion", "Cremation", "Fire" ]
175,146
https://en.wikipedia.org/wiki/Rudolf%20Clausius
Rudolf Julius Emanuel Clausius (; 2 January 1822 – 24 August 1888) was a German physicist and mathematician and is considered one of the central founding fathers of the science of thermodynamics. By his restatement of Sadi Carnot's principle known as the Carnot cycle, he gave the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the basic ideas of the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which he applied to heat. Life Clausius was born in Köslin (now Koszalin, Poland) in the Province of Pomerania in Prussia. His father was a Protestant pastor and school inspector, and Rudolf studied in his father's school. In 1838, he went to the Gymnasium in Stettin. Clausius graduated from the University of Berlin in 1844, where he had studied mathematics and physics since 1840 with, among others, Gustav Magnus, Peter Gustav Lejeune Dirichlet and Jakob Steiner. He also studied history with Leopold von Ranke. In 1848, he received his doctorate from the University of Halle for work on optical effects in Earth's atmosphere. In 1850 he became professor of physics at the Royal Artillery and Engineering School in Berlin and Privatdozent at the Berlin University. In 1855 he became professor at the ETH Zürich, the Swiss Federal Institute of Technology in Zürich, where he stayed until 1867. In that year he moved to Würzburg, and two years later, in 1869, to Bonn. In 1870 Clausius organized an ambulance corps in the Franco-Prussian War. He was wounded in battle, leaving him with a lasting disability. He was awarded the Iron Cross for his services. His wife, Adelheid Rimpau, died in 1875, leaving him to raise their six children. In 1886, he married Sophie Sack, and then had another child. Two years later, on 24 August 1888, he died in Bonn, Germany. Work Clausius's PhD thesis, concerning the refraction of light, proposed that we see a blue sky during the day, and various shades of red at sunrise and sunset (among other phenomena), because of the reflection and refraction of light. Later, Lord Rayleigh would show that these effects are in fact due to the scattering of light. His most famous paper, Ueber die bewegende Kraft der Wärme ("On the Moving Force of Heat and the Laws of Heat which may be Deduced Therefrom"), was published in 1850 and dealt with the mechanical theory of heat. In this paper, he showed that there was a contradiction between Carnot's principle and the concept of conservation of energy. Clausius restated the two laws of thermodynamics to overcome this contradiction. This paper made him famous among scientists. (The third law was developed by Walther Nernst during the years 1906–1912.) Clausius's most famous statement of the second law of thermodynamics was published in German in 1854, and in English in 1856. During 1857, Clausius contributed to the field of kinetic theory by refining August Krönig's very simple gas-kinetic model to include translational, rotational and vibrational molecular motions. In this same work he introduced the concept of the 'mean free path' of a particle. Clausius also deduced the Clausius–Clapeyron relation from thermodynamics. This relation, a way of characterizing the phase transition between two states of matter such as solid and liquid, had originally been developed in 1834 by Émile Clapeyron. Entropy In 1865, Clausius gave the first mathematical version of the concept of entropy, and also gave it its name. 
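Neither formula appears in this article, but the two results named above have standard modern forms. A sketch in today's notation (the symbols follow the usual textbook conventions, not Clausius's own):

```latex
% Clausius-Clapeyron relation: slope of the coexistence curve between
% two phases, with latent heat L, absolute temperature T, and specific
% volume change \Delta v across the transition.
\frac{dP}{dT} = \frac{L}{T\,\Delta v}

% Clausius's 1865 definition of entropy: \delta Q_\mathrm{rev} is the
% heat exchanged reversibly at absolute temperature T.
\Delta S = \int_{1}^{2} \frac{\delta Q_\mathrm{rev}}{T}
```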
Clausius chose the word because its meaning (from Greek ἐν en "in" and τροπή tropē "transformation") is "content transformative" or "transformation content" ("Verwandlungsinhalt"). He used the now abandoned unit 'Clausius' (symbol: Cl) for entropy. 1 Clausius (Cl) = 1 calorie/degree Celsius (cal/°C) = 4.1868 joules per kelvin (J/K) The landmark 1865 paper in which he introduced the concept of entropy ends with his celebrated summary of the first and second laws of thermodynamics: "The energy of the universe is constant. The entropy of the universe tends to a maximum." Leon Cooper added that in this way he succeeded in coining a word that meant the same thing to everybody: nothing. Tributes Honorary Membership of the Institution of Engineers and Shipbuilders in Scotland in 1859. Iron Cross of 1870. Fellow of the Royal Society of London in 1868; received its Copley Medal in 1879. Member of the Royal Swedish Academy of Sciences in 1878. Huygens Medal in 1870. Foreign Member of the Accademia Nazionale dei Lincei in Rome in 1880. Member of the German Academy of Sciences Leopoldina in 1880. Poncelet Prize in 1883. Honorary doctorate from the University of Würzburg in 1882. Foreign Member of the Royal Netherlands Academy of Arts and Sciences in 1886. Pour le Mérite for Arts and Sciences in 1888. The lunar crater Clausius is named in his honor. A memorial was erected in his home town of Koszalin in 2009. Publications English translations of nine papers. See also Hans Peter Jørgen Julius Thomsen, one of the founders of thermochemistry. References External links Revival of Kinetic Theory by Clausius 1822 births 1888 deaths People from Koszalin Academic staff of ETH Zurich Thermodynamicists German military personnel of the Franco-Prussian War 19th-century German physicists German fluid dynamicists People from the Province of Pomerania Recipients of the Iron Cross (1870) Recipients of the Copley Medal Recipients of the Pour le Mérite (civil class) Humboldt University of Berlin alumni Academic staff of the University of Bonn Academic staff of the University of Würzburg Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Members of the Royal Netherlands Academy of Arts and Sciences Members of the Royal Swedish Academy of Sciences German theoretical physicists Prussian Army personnel
Rudolf Clausius
[ "Physics", "Chemistry" ]
1,272
[ "Thermodynamics", "Thermodynamicists" ]
175,184
https://en.wikipedia.org/wiki/European%20Southern%20Observatory
The European Organisation for Astronomical Research in the Southern Hemisphere, commonly referred to as the European Southern Observatory (ESO), is an intergovernmental research organisation for ground-based astronomy, made up of 16 member states. Created in 1962, ESO has provided astronomers with state-of-the-art research facilities and access to the southern sky. The organisation employs over 750 staff members and receives annual member state contributions of approximately €162 million. Its observatories are located in northern Chile. ESO has built and operated some of the largest and most technologically advanced telescopes in the world. These include the 3.6 m New Technology Telescope, an early pioneer in the use of active optics, and the Very Large Telescope (VLT), which consists of four individual 8.2 m telescopes and four smaller auxiliary telescopes that can all work together or separately. The Atacama Large Millimeter Array observes the universe in the millimetre and submillimetre wavelength ranges and is the world's largest ground-based astronomy project to date. It was completed in March 2013 as an international collaboration between Europe (represented by ESO), North America, East Asia and Chile. Currently under construction is the Extremely Large Telescope. It will use a 39.3-metre-diameter segmented mirror and become the world's largest optical reflecting telescope when it enters operation towards the end of this decade. Its light-gathering power will allow detailed studies of planets around other stars, the first objects in the universe, supermassive black holes, and the nature and distribution of the dark matter and dark energy that dominate the universe. ESO's observing facilities have made astronomical discoveries and produced several astronomical catalogues. Its findings include the discovery of the most distant gamma-ray burst and evidence for a black hole at the centre of the Milky Way. In 2004, the VLT allowed astronomers to obtain the first picture of an extrasolar planet (2M1207b), orbiting a brown dwarf 173 light-years away. The High Accuracy Radial Velocity Planet Searcher (HARPS) instrument installed on the older ESO 3.6 m telescope led to the discovery of extrasolar planets, including Gliese 581c, one of the smallest planets seen outside the Solar System. History The idea that European astronomers should establish a common large observatory was broached by Walter Baade and Jan Oort at the Leiden Observatory in the Netherlands in spring 1953. It was pursued by Oort, who gathered a group of astronomers in Leiden to consider it on 21 June of that year. Immediately thereafter, the subject was further discussed at the Groningen conference in the Netherlands. On 26 January 1954, an ESO declaration was signed by astronomers from six European countries expressing the wish that a joint European observatory be established in the southern hemisphere. At the time, all reflector telescopes with an aperture of 2 metres or more were located in the northern hemisphere. The decision to build the observatory in the southern hemisphere resulted from the necessity of observing the southern sky; some research subjects (such as the central parts of the Milky Way and the Magellanic Clouds) were accessible only from the southern hemisphere. 
It was initially planned to set up the telescopes in South Africa, where several European observatories were located (such as the Boyden Observatory), but tests from 1955 to 1962 demonstrated that a site in the Andes was preferable. When Jürgen Stock enthusiastically reported his observations from Chile, Otto Heckmann decided to put the South African project on hold, and ESO, at that time about to sign contracts with South Africa, decided to establish its observatory in Chile. The ESO Convention was signed on 5 October 1962 by Belgium, Germany, France, the Netherlands and Sweden, and Otto Heckmann was nominated as the organisation's first director general on 1 November 1962. On 15 November 1963, Chile was chosen as the site for ESO's observatory. A preliminary proposal for a convention of astronomy organisations in these five countries had been drafted in 1954. Although some amendments were made to the initial document, the convention proceeded slowly until 1960, when it was discussed at that year's committee meeting. The new draft was examined in detail, and a council member of CERN (the European Organization for Nuclear Research) highlighted the need for a convention between governments (in addition to organisations). The convention and government involvement became pressing due to the rapidly rising costs of site-testing expeditions. The final 1962 version was largely adopted from the CERN convention, owing to similarities between the organisations and the dual membership of some members. In 1966, the first ESO telescope at the La Silla site in Chile began operating. Because CERN (like ESO) had sophisticated instrumentation, the astronomy organisation frequently turned to the nuclear-research body for advice, and a collaborative agreement between ESO and CERN was signed in 1970. Several months later, ESO's telescope division moved into a CERN building in Geneva and ESO's Sky Atlas Laboratory was established on CERN property. ESO's European departments moved into the new ESO headquarters in Garching (near Munich), Germany, in 1980. Chilean observation sites Although ESO is headquartered in Germany, its telescopes and observatories are in northern Chile, where the organisation operates advanced ground-based astronomical facilities: La Silla, which hosts the New Technology Telescope (NTT); Paranal, where the Very Large Telescope (VLT) is located; and Llano de Chajnantor, where ALMA, the Atacama Large Millimeter/submillimeter Array, is located. These are among the best locations for astronomical observations in the southern hemisphere. An ESO project is the Extremely Large Telescope (ELT), a 40-metre-class telescope with a five-mirror design, drawing on concepts from the formerly planned Overwhelmingly Large Telescope. The ELT will be the largest visible and near-infrared telescope in the world. ESO began its design in early 2006, aiming to begin construction in 2012; construction work at the ELT site started in June 2014. As decided by the ESO council on 26 April 2010, a fourth site (Cerro Armazones) is to be home to the ELT. Each year about 2,000 requests are made for the use of ESO telescopes, for four to six times more nights than are available. Observations made with these instruments appear in a number of peer-reviewed publications annually; in 2017, more than 1,000 reviewed papers based on ESO data were published. ESO telescopes generate large amounts of data at a high rate, which are stored in a permanent archive facility at ESO headquarters. 
The archive contains more than 1.5 million images (or spectra) with a total volume of about 65 terabytes (65,000,000,000,000 bytes) of data. La Silla La Silla, located in the southern part of the Atacama Desert north of Santiago de Chile at an altitude of about 2,400 metres, is the home of ESO's original observation site. Like other observatories in the area, La Silla is far from sources of light pollution and has one of the darkest night skies on Earth. At La Silla, ESO operates three telescopes: a 3.6-metre telescope, the New Technology Telescope (NTT) and the 2.2-metre Max-Planck-ESO Telescope. The observatory also hosts visitor instruments, which are attached to a telescope for the duration of an observational run and then removed. La Silla also hosts national telescopes, such as the 1.2-metre Swiss and the 1.5-metre Danish telescopes. About 300 reviewed publications a year are attributable to the work of the observatory. Discoveries made with La Silla telescopes include the HARPS-spectrograph detection of the planets orbiting within the Gliese 581 planetary system, which contains the first known rocky planet in a habitable zone outside the Solar System. Several telescopes at La Silla played a role in linking gamma-ray bursts, the most energetic explosions in the universe since the Big Bang, with the explosions of massive stars. The ESO La Silla Observatory also played a role in the study of supernova SN 1987A. ESO 3.6-metre telescope The ESO 3.6-metre telescope began operations in 1977. It has since been upgraded, including the installation of a new secondary mirror. The conventionally designed horseshoe-mount telescope was primarily used for infrared spectroscopy; it now hosts the HARPS spectrograph, used in the search for extrasolar planets and for asteroseismology. The telescope was designed for very high long-term radial-velocity accuracy (on the order of 1 m/s). New Technology Telescope The New Technology Telescope (NTT) is an altazimuth, 3.58-metre Ritchey–Chrétien telescope, inaugurated in 1989 and the first in the world with a computer-controlled main mirror. The flexible mirror's shape is adjusted during observation to preserve optimal image quality, and the secondary mirror's position is also adjustable in three directions. This technology (developed by ESO and known as active optics) is now applied to all major telescopes, including the VLT and the future ELT. The design of the octagonal enclosure housing the NTT is also innovative: the telescope dome is relatively small and ventilated by a system of flaps directing airflow smoothly across the mirror, reducing turbulence and resulting in sharper images. MPG/ESO 2.2-metre telescope The 2.2-metre telescope has been in operation at La Silla since early 1984 and is on indefinite loan to ESO from the Max Planck Society (Max-Planck-Gesellschaft zur Förderung der Wissenschaften, or MPG, in German). Telescope time is shared between MPG and ESO observing programmes, while the operation and maintenance of the telescope are ESO's responsibility. Its instrumentation includes a 67-million-pixel wide-field imager (WFI) with a field of view as large as the full moon, which has taken many images of celestial objects. Other instruments include GROND (Gamma-Ray Burst Optical Near-Infrared Detector), which seeks the afterglow of gamma-ray bursts, the most powerful explosions in the universe, and the high-resolution spectrograph FEROS (Fiber-fed Extended Range Optical Spectrograph), used to make detailed studies of stars. 
Other telescopes La Silla also hosts several national and project telescopes not operated by ESO. Among them are the Swiss Euler Telescope, the Danish National Telescope and the REM, TRAPPIST and TAROT telescopes. The Euler Telescope is a 1.2-metre telescope built and operated by the Geneva Observatory in Switzerland. It is used to conduct high-precision radial-velocity measurements, primarily in the search for large extrasolar planets in the southern celestial hemisphere. Its first discovery was a planet orbiting Gliese 86. Other observing programmes focus on variable stars, asteroseismology, gamma-ray bursts, the monitoring of active galactic nuclei (AGN) and gravitational lenses. The 1.54-metre Danish National Telescope was built by Grubb-Parsons and has been in use at La Silla since 1979. The telescope has an off-axis mount, and the optics are of Ritchey-Chrétien design. Because of the telescope's mount and the limited space inside the dome, it has significant pointing restrictions. The Rapid Eye Mount (REM) telescope is a small rapid-reaction automatic telescope in an altazimuth mount, which began operation in October 2002. Its primary purpose is to follow the afterglow of GRBs detected by the Swift Gamma-Ray Burst Mission satellite. The Belgian TRAPPIST telescope is a joint venture between the University of Liège and the Geneva Observatory. The 0.60-metre telescope specialises in comets and exoplanets, and was one of the few telescopes to observe a stellar occultation by the dwarf planet Eris, revealing that Eris may be smaller than Pluto. The quick-action telescope for transient objects, TAROT, is a very fast-moving optical robotic telescope able to observe a gamma-ray burst from its beginning. Satellites detecting GRBs send signals to TAROT, which can provide a sub-arcsecond position to the astronomical community. Data from the TAROT telescope are also useful in studying the evolution of GRBs, the physics of a fireball and its surrounding material. It is operated from the Haute-Provence Observatory in France. Paranal The Paranal Observatory is located atop Cerro Paranal in the Atacama Desert in northern Chile. Cerro Paranal is a mountain south of Antofagasta, not far from the Pacific coast. The observatory has six major telescopes operating in visible and infrared light: the four telescopes of the Very Large Telescope, the VLT Survey Telescope (VST) and the Visible and Infrared Survey Telescope for Astronomy (VISTA). In addition, there are four auxiliary telescopes forming an array used for interferometric observations. In March 2008, Paranal was the location for several scenes of the 22nd James Bond film, Quantum of Solace. Very Large Telescope The main facility at Paranal is the VLT, which consists of four nearly identical unit telescopes (UTs), each hosting two or three instruments. These large telescopes can also work together, in groups of two or three, as a giant interferometer. The ESO Very Large Telescope Interferometer (VLTI) allows astronomers to see details up to 25 times finer than those seen with the individual telescopes. The light beams are combined in the VLTI using a complex system of mirrors in tunnels, where the light paths must be kept equal to within less than 1/1000 mm over 100 metres. The VLTI can achieve an angular resolution at the milliarcsecond level, equivalent to the ability to distinguish the two headlights of a car at the distance of the Moon (a rough worked estimate follows below). The first of the UTs had its first light in May 1998 and was offered to the astronomical community on 1 April 1999. 
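As a rough plausibility check on the resolution claim above (the numbers here are assumed, typical values, not figures from this article): an interferometer's diffraction-limited resolution scales as the observing wavelength divided by the baseline. Taking a near-infrared wavelength of 2.2 µm and a baseline of roughly 130 m gives

```latex
% Assumed values: \lambda = 2.2 \mu m (near-infrared), B \approx 130 m.
\theta \approx \frac{\lambda}{B}
       = \frac{2.2\times10^{-6}\ \text{m}}{130\ \text{m}}
       \approx 1.7\times10^{-8}\ \text{rad}
       \approx 3.5\ \text{mas}
```

Two headlights about 1.5 m apart at the Moon's distance (roughly 3.8 × 10⁸ m) subtend about 4 × 10⁻⁹ rad, or 0.8 mas, so the comparison in the text is of the right order of magnitude.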
The other telescopes followed suit in 1999 and 2000, making the VLT fully operational. Four 1.8-metre auxiliary telescopes (ATs), installed between 2004 and 2007, have been added to the VLTI so that it remains available when the UTs are being used for other projects. Data from the VLT have led to the publication of an average of more than one peer-reviewed scientific paper per day; in 2017, over 600 reviewed scientific papers were published based on VLT data. The VLT's scientific discoveries include imaging an extrasolar planet, tracking individual stars moving around the supermassive black hole at the centre of the Milky Way and observing the afterglow of the furthest known gamma-ray burst. At the Paranal inauguration in March 1999, names of celestial objects in the Mapuche language were chosen to replace the technical designations of the four VLT unit telescopes (UT1–UT4). An essay contest on the meaning of these names had previously been arranged for schoolchildren in the region; it attracted many entries dealing with the cultural heritage of ESO's host country. A 17-year-old from Chuquicamata, near Calama, submitted the winning essay and was awarded an amateur telescope during the inauguration. The four unit telescopes, UT1, UT2, UT3 and UT4, have since been known as Antu (sun), Kueyen (moon), Melipal (Southern Cross) and Yepun (Evening Star), the last having originally been mistranslated as "Sirius" instead of "Venus". Survey telescopes The Visible and Infrared Survey Telescope for Astronomy (VISTA) is housed on the peak adjacent to the one hosting the VLT and shares its observational conditions. VISTA's main mirror is 4.1 metres across and is highly curved for a mirror of its size and quality; its deviations from a perfect surface are less than a few thousandths of the thickness of a human hair, and its construction and polishing presented a challenge. VISTA was conceived and developed by a consortium of 18 universities in the United Kingdom led by Queen Mary, University of London, and it became an in-kind contribution to ESO as part of the UK's ratification agreement. The telescope's design and construction were managed by the Science and Technology Facilities Council's UK Astronomy Technology Centre (STFC, UK ATC). Provisional acceptance of VISTA was formally granted by ESO at a December 2009 ceremony at ESO headquarters in Garching, attended by representatives of Queen Mary, University of London and STFC. The telescope has since been operated by ESO and has been capturing high-quality images since it began operation. The VLT Survey Telescope (VST) is a state-of-the-art 2.6-metre telescope equipped with OmegaCAM, a 268-megapixel CCD camera with a field of view four times the area of the full moon. It complements VISTA by surveying the sky in visible light. The VST, which became operational in 2011, is the result of a joint venture between ESO and the Astronomical Observatory of Capodimonte (Naples), a research centre of the Italian National Institute for Astrophysics (INAF). The scientific goals of the two surveys range from the nature of dark energy to the assessment of near-Earth objects. Teams of European astronomers conduct the surveys; some cover most of the southern sky, while others focus on smaller areas. VISTA and the VST are expected to produce large amounts of data; a single picture taken by VISTA has 67 megapixels, and images from OmegaCAM (on the VST) have 268 megapixels. The two survey telescopes collect more data every night than all the other instruments on the VLT combined. 
The VST and VISTA produce more than 100 terabytes of data per year. Llano de Chajnantor The Llano de Chajnantor is a plateau in the Atacama Desert, about 50 kilometres east of San Pedro de Atacama. The site is higher than both the Mauna Kea Observatory and the Very Large Telescope on Cerro Paranal. It is dry and inhospitable to humans, but a good site for submillimetre astronomy; because water vapour molecules in Earth's atmosphere absorb and attenuate submillimetre radiation, a dry site is required for this type of radio astronomy. The telescopes are: Atacama Cosmology Telescope (ACT; not operated by ESO) Atacama Pathfinder Experiment (operated on behalf of the Max Planck Institute for Radio Astronomy (MPIfR)) Atacama Large Millimeter Array (ALMA) Q/U Imaging Experiment (QUIET; not operated by ESO) POLARBEAR (on the Huan Tran Telescope; not operated by ESO) ALMA is a telescope designed for millimetre and submillimetre astronomy. This type of astronomy is a relatively unexplored frontier, revealing a universe which cannot be seen in the more familiar visible or infrared light and which is ideal for studying the "cold universe"; light at these wavelengths shines from vast cold clouds in interstellar space at temperatures only a few tens of degrees above absolute zero. Astronomers use this light to study the chemical and physical conditions in these molecular clouds, the dense regions of gas and cosmic dust where new stars are being born. Seen in visible light, these regions of the universe are often dark and obscured by dust; however, they shine brightly in the millimetre and submillimetre portions of the electromagnetic spectrum. This wavelength range is also ideal for studying some of the earliest (and most distant) galaxies in the universe, whose light has been redshifted into longer wavelengths by the expansion of the universe. Atacama Pathfinder Experiment ESO hosts the Atacama Pathfinder Experiment (APEX) and operates it on behalf of the Max Planck Institute for Radio Astronomy (MPIfR). APEX is a 12-metre diameter telescope, operating at millimetre and submillimetre wavelengths, between infrared light and radio waves. Atacama Large Millimeter/submillimeter Array ALMA is an astronomical interferometer initially composed of 66 high-precision antennas and operating at wavelengths of 0.3 to 3.6 mm. Its main array will have 50 antennas acting as a single interferometer. An additional compact array of four 12-metre and twelve 7-metre antennas, known as the Morita Array, is also available. The antennas can be arranged across the desert plateau over distances from 150 metres to 16 kilometres, which will give ALMA a variable "zoom". The array will be able to probe the universe at millimetre and submillimetre wavelengths with unprecedented sensitivity and resolution, with vision up to ten times sharper than the Hubble Space Telescope's. These images will complement those made with the VLT Interferometer. ALMA is a collaboration between East Asia (Japan and Taiwan), Europe (ESO), North America (US and Canada) and Chile. The scientific goals of ALMA include studying the origin and formation of stars, galaxies, and planets through observations of molecular gas and dust, studying distant galaxies towards the edge of the observable universe and studying relic radiation from the Big Bang. A call for ALMA science proposals was issued on 31 March 2011, and early observations began on 3 October. Outreach Outreach activities are carried out by the ESO Education and Public Outreach Department (ePOD).
ePOD also manages the ESO Supernova Planetarium & Visitor Centre, an astronomy centre located at the site of the ESO Headquarters in Garching bei München, which was inaugurated on 26 April 2018. Video gallery See also References Bibliography External links ESO Astronomical observatories in Chile Astronomy institutes and departments Atacama Desert International scientific organizations based in Europe International organisations based in Germany Organisations based in Munich Organizations established in 1962 Science and technology in Europe Articles containing video clips 1962 establishments in Chile
European Southern Observatory
[ "Astronomy" ]
4,442
[ "Astronomy organizations", "Astronomy institutes and departments" ]
175,205
https://en.wikipedia.org/wiki/Integrated%20services
In computer networking, integrated services or IntServ is an architecture that specifies the elements to guarantee quality of service (QoS) on networks. IntServ can for example be used to allow video and sound to reach the receiver without interruption. IntServ specifies a fine-grained QoS system, which is often contrasted with DiffServ's coarse-grained control system. Under IntServ, every router in the system implements IntServ, and every application that requires some kind of QoS guarantee has to make an individual reservation. Flow specs describe what the reservation is for, while RSVP is the underlying mechanism to signal it across the network. Flow specs There are two parts to a flow spec: What does the traffic look like? Done in the Traffic SPECification part, also known as TSPEC. What guarantees does it need? Done in the service Request SPECification part, also known as RSPEC. TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills up with tokens, arriving at a constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how 'bursty' the traffic is allowed to be. TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750 Hz and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the 'burst' associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. However, this means the bucket depth needs to be increased to compensate for the traffic being burstier. RSPECs specify what requirements there are for the flow: it can be normal internet 'best effort', in which case no reservation is needed. This setting is likely to be used for webpages, FTP, and similar applications. The 'Controlled Load' setting mirrors the performance of a lightly loaded network: there may be occasional glitches when two people access the same resource by chance, but generally both delay and drop rate are fairly constant at the desired rate. This setting is likely to be used by soft QoS applications. The 'Guaranteed' setting gives an absolutely bounded service, where the delay is promised never to go above a desired amount and packets are never dropped, provided the traffic stays within spec. RSVP The Resource Reservation Protocol (RSVP) is described in RFC 2205. All machines on the network capable of sending QoS data send a PATH message every 30 seconds, which spreads out through the networks. Those who want to listen to them send a corresponding RESV (short for "Reserve") message which then traces the path backwards to the sender. The RESV message contains the flow specs. The routers between the sender and listener have to decide if they can support the reservation being requested, and, if they cannot, they send a reject message to let the listener know about it. Otherwise, once they accept the reservation, they have to carry the traffic. The routers then store the nature of the flow, and also police it.
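To make the per-flow state and token-bucket policing concrete, here is a minimal Python sketch; it is illustrative only, and the class, field and key names are invented rather than taken from any RSVP implementation:

```python
import time

class TokenBucket:
    """Polices one flow against its TSPEC (token rate and bucket depth)."""

    def __init__(self, rate, depth):
        self.rate = rate               # tokens added per second (TSPEC token rate)
        self.depth = depth             # maximum tokens held (TSPEC bucket depth)
        self.tokens = depth            # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packets=1):
        """Return True if the packets conform to the TSPEC, consuming tokens."""
        now = time.monotonic()
        # Tokens arrive at a constant rate, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packets:
            self.tokens -= packets
            return True
        return False                   # non-conformant: drop or demote to best effort

# A router's per-flow reservation table, keyed here by (source, destination).
# The video example above: token rate 750 Hz, bucket depth 10.
reservations = {("10.0.0.1", "10.0.0.2"): TokenBucket(rate=750, depth=10)}
```

The soft-state behaviour described next would add a timestamp to each table entry, refreshed by every RESV message and purged once it grows stale.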
All of this reservation state is soft state, so if nothing is heard for a certain length of time, the state will time out and the reservation will be cancelled. This solves the problem that arises if either the sender or the receiver crashes or is shut down incorrectly without first cancelling the reservation. The individual routers may, at their option, police the traffic to check that it conforms to the flow specs. Problems In order for IntServ to work, all routers along the traffic path must support it. Furthermore, a large amount of state must be stored in each router. As a result, IntServ works on a small scale, but as the system scales up to larger networks or the Internet, it becomes resource-intensive to keep track of all of the reservations. One way to solve the scalability problem is to use a multi-level approach, where per-microflow resource reservation (such as resource reservation for individual users) is done in the edge network, while in the core network resources are reserved for aggregate flows only. The routers that lie between these different levels must adjust the amount of aggregate bandwidth reserved from the core network so that the reservation requests for individual flows from the edge network can be better satisfied. References "Deploying IP and MPLS QoS for Multiservice Networks: Theory and Practice" by John Evans, Clarence Filsfils (Morgan Kaufmann, 2007) External links RFC 1633 - Integrated Services in the Internet Architecture: an Overview RFC 2211 - Specification of the Controlled-Load Network Element Service RFC 2212 - Specification of Guaranteed Quality of Service RFC 2215 - General Characterization Parameters for Integrated Service Network Elements RFC 2205 - Resource ReSerVation Protocol (RSVP) Cisco.com, Cisco Whitepaper about IntServ and DiffServ Internet Standards Internet architecture Quality of service
Integrated services
[ "Technology" ]
1,092
[ "Internet architecture", "IT infrastructure" ]
175,217
https://en.wikipedia.org/wiki/Rhizome
In botany and dendrology, a rhizome is a modified subterranean plant stem that sends out roots and shoots from its nodes. Rhizomes are also called creeping rootstalks or just rootstalks. Rhizomes develop from axillary buds and grow horizontally. The rhizome also retains the ability to allow new shoots to grow upwards. A rhizome is the main stem of the plant that typically runs underground, horizontal to the soil surface. Rhizomes have nodes, internodes and axillary buds. Roots do not have nodes and internodes, and have a root cap terminating their ends. In general, rhizomes have short internodes, send out roots from the bottom of the nodes, and generate new upward-growing shoots from the top of the nodes. A stolon is similar to a rhizome, but a stolon sprouts from an existing stem, has long internodes, and generates new shoots at the ends; stolons are often also called runners, as in the strawberry plant. A stem tuber is a thickened part of a rhizome or stolon that has been enlarged for use as a storage organ. In general, a tuber is high in starch, e.g. the potato, which is a modified stolon. The term "tuber" is often used imprecisely and is sometimes applied to plants with rhizomes. The plant uses the rhizome to store starches, proteins, and other nutrients. These nutrients become useful for the plant when new shoots must be formed or when the plant dies back for the winter. If a rhizome is separated, each piece may be able to give rise to a new plant. This process, known as vegetative reproduction, is used by farmers and gardeners to propagate certain plants. It also allows for the lateral spread of grasses like bamboo and bunch grasses. Examples of plants that are propagated this way include hops, asparagus, ginger, irises, lily of the valley, cannas, and sympodial orchids. Stored rhizomes are subject to bacterial and fungal infections, making them unsuitable for replanting and greatly diminishing stocks. However, rhizomes can also be produced artificially from tissue cultures. The ability to easily grow rhizomes from tissue cultures leads to better stocks for replanting and greater yields. The plant hormones ethylene and jasmonic acid have been found to help induce and regulate the growth of rhizomes, specifically in rhubarb. Ethylene applied externally was found to affect internal ethylene levels, allowing easy manipulation of ethylene concentrations. Knowledge of how to use these hormones to induce rhizome growth could help farmers and biologists to produce plants grown from rhizomes, and to cultivate and grow better plants more easily. Some plants have rhizomes that grow above ground or that lie at the soil surface, including some Iris species as well as ferns, whose spreading stems are rhizomes. Plants with underground rhizomes include gingers, bamboo, snake plant, the Venus flytrap, Chinese lantern, western poison-oak, hops, and Alstroemeria, and some grasses, such as Johnson grass, Bermuda grass, and purple nut sedge. Rhizomes generally form a single layer, but in giant horsetails they can be multi-tiered. Many rhizomes have culinary value, and some, such as zhe'ergen, are commonly consumed raw. Some rhizomes that are used directly in cooking include ginger, turmeric, galangal, fingerroot, and lotus. See also Aspen Bulb Corm Mycorrhiza Tuber Explanatory notes References External links Plant anatomy Plant physiology Plant reproduction Plant roots Plant stem morphology
Rhizome
[ "Biology" ]
812
[ "Plant physiology", "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
175,285
https://en.wikipedia.org/wiki/Relational%20algebra
In database theory, relational algebra is a theory that uses algebraic structures for modeling data and defining queries on it with well-founded semantics. The theory was introduced by Edgar F. Codd. The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Relational databases store tabular data represented as relations. Queries over relational databases often likewise return tabular data represented as relations. The main purpose of relational algebra is to define operators that transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results). Unary operators accept a single relation as input. Examples include operators to filter certain attributes (columns) or tuples (rows) from an input relation. Binary operators accept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth. Introduction Relational algebra received little attention outside of pure mathematics until the publication of E.F. Codd's relational model of data in 1970. Codd proposed such an algebra as a basis for database query languages. Relational algebra operates on homogeneous sets of tuples {(a1,1, a1,2, ..., a1,n), ..., (am,1, am,2, ..., am,n)}, where we commonly interpret m to be the number of rows of tuples in a table and n to be the number of columns. All entries in each column have the same type. A relation also has a unique tuple called the header, which gives each column a unique name (or attribute) inside the relation. Attributes are used in projections and selections. Set operators The relational algebra uses set union, set difference, and Cartesian product from set theory, and adds additional constraints to these operators to create new ones. For set union and set difference, the two relations involved must be union-compatible—that is, the two relations must have the same set of attributes. Because set intersection is defined in terms of set union and set difference, the two relations involved in set intersection must also be union-compatible. For the Cartesian product to be defined, the two relations involved must have disjoint headers—that is, they must not have a common attribute name. In addition, the Cartesian product is defined differently from the one in set theory in the sense that tuples are considered to be "shallow" for the purposes of the operation. That is, the Cartesian product of a set of n-tuples with a set of m-tuples yields a set of "flattened" (n + m)-tuples (whereas basic set theory would have prescribed a set of 2-tuples, each containing an n-tuple and an m-tuple). More formally, R × S is defined as follows: R × S = {(r1, r2, ..., rn, s1, s2, ..., sm) | (r1, r2, ..., rn) ∈ R, (s1, s2, ..., sm) ∈ S}. The cardinality of the Cartesian product is the product of the cardinalities of its factors, that is, |R × S| = |R| × |S|. Projection A projection (π) is a unary operation written as πa1, ..., an(R), where a1, ..., an is a set of attribute names. The result of such a projection is defined as the set that is obtained when all tuples in R are restricted to the set {a1, ..., an}.
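As an illustration of the projection operator, here is a small Python sketch; the relation representation and the toy data are invented for the example, with each relation modelled as a set of hashable rows:

```python
# A relation is modelled as a set of rows; each row is a frozenset of
# (attribute, value) pairs so that rows are hashable and duplicates collapse.
def project(R, attrs):
    """pi_attrs(R): restrict every tuple in R to the attribute names in attrs."""
    return {frozenset((a, v) for a, v in row if a in attrs) for row in R}

def row(**fields):
    """Helper to build a row from keyword arguments."""
    return frozenset(fields.items())

employee = {row(Name="Harry", DeptName="Finance"),
            row(Name="Sally", DeptName="Sales"),
            row(Name="George", DeptName="Finance")}

# pi_{DeptName}(Employee): only two rows remain, because the two Finance
# tuples become identical after the restriction and the set merges them.
print(project(employee, {"DeptName"}))
```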
Note: when implemented in the SQL standard, the "default projection" returns a multiset instead of a set, and the projection that eliminates duplicate data is obtained by the addition of the DISTINCT keyword. Selection A generalized selection (σ) is a unary operation written as σφ(R), where φ is a propositional formula that consists of atoms as allowed in the normal selection and the logical operators ∧ (and), ∨ (or) and ¬ (negation). This selection selects all those tuples in R for which φ holds. To obtain a listing of all friends or business associates in an address book, the selection might be written as σisFriend = true ∨ isBusinessContact = true(addressBook). The result would be a relation containing every attribute of every unique record where isFriend is true or where isBusinessContact is true. Rename A rename (ρ) is a unary operation written as ρa/b(R), where the result is identical to R except that the b attribute in all tuples is renamed to an a attribute. This is commonly used to rename the attribute of a relation for the purpose of a join. To rename the "isFriend" attribute to "isBusinessContact" in a relation, ρisBusinessContact/isFriend(addressBook) might be used. There is also the ρx(a1, ..., an)(R) notation, where R is renamed to x and the attributes are renamed to a1, ..., an. Joins and join-like operators Natural join Natural join (⨝) is a binary operator that is written as (R ⨝ S) where R and S are relations. The result of the natural join is the set of all combinations of tuples in R and S that are equal on their common attribute names. For an example consider the tables Employee and Dept and their natural join. Note that neither the employee named Mary nor the Production department appears in the result. Mary does not appear in the result because Mary's department, "Human Resources", is not listed in the Dept relation, and the Production department does not appear in the result because there are no tuples in the Employee relation that have "Production" as their DeptName attribute. This can also be used to define composition of relations. For example, the composition of Employee and Dept is their join as shown above, projected on all but the common attribute DeptName. In category theory, the join is precisely the fiber product. The natural join is arguably one of the most important operators since it is the relational counterpart of the logical AND operator. Note that if the same variable appears in each of two predicates that are connected by AND, then that variable stands for the same thing and both appearances must always be substituted by the same value (this is a consequence of the idempotence of the logical AND). In particular, natural join allows the combination of relations that are associated by a foreign key. For example, in the above example a foreign key probably holds from Employee.DeptName to Dept.DeptName, and then the natural join of Employee and Dept combines all employees with their departments. This works because the foreign key holds between attributes with the same name. If this is not the case, such as in the foreign key from Dept.Manager to Employee.Name, then these columns must be renamed before taking the natural join. Such a join is sometimes also referred to as an equijoin. More formally the semantics of the natural join are defined as follows: R ⨝ S = { r ∪ s | r ∈ R ∧ s ∈ S ∧ Fun(r ∪ s) }, where Fun(t) is a predicate that is true for a relation t (in the mathematical sense) iff t is a function (that is, t does not map any attribute to multiple values). It is usually required that R and S must have at least one common attribute, but if this constraint is omitted, and R and S have no common attributes, then the natural join becomes exactly the Cartesian product.
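Before looking at the simulation with Codd's primitives, a direct Python sketch of this definition may help; it reuses the same toy representation as above (rows as frozensets of (attribute, value) pairs; all names are invented):

```python
def natural_join(R, S):
    """R ⨝ S: merge every pair of rows from R and S that agree on all
    shared attribute names; with no shared names this degenerates to the
    Cartesian product, matching the formal definition."""
    out = set()
    for r in R:
        for s in S:
            dr, ds = dict(r), dict(s)
            # Fun(r ∪ s): the merged row must not map an attribute to two values.
            if all(dr[a] == ds[a] for a in dr.keys() & ds.keys()):
                out.add(frozenset({**dr, **ds}.items()))
    return out

employee = {frozenset({"Name": "Harry", "DeptName": "Finance"}.items()),
            frozenset({"Name": "Mary", "DeptName": "Human Resources"}.items())}
dept = {frozenset({"DeptName": "Finance", "Manager": "George"}.items())}

# Mary is dropped, as in the text: no Dept row matches "Human Resources".
print(natural_join(employee, dept))
```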
The natural join can be simulated with Codd's primitives as follows. Assume that c1, ..., cm are the attribute names common to R and S, r1, ..., rn are the attribute names unique to R and s1, ..., sk are the attribute names unique to S. Furthermore, assume that the attribute names x1, ..., xm are neither in R nor in S. In a first step the common attribute names in S can be renamed: T := ρx1/c1(ρx2/c2(... ρxm/cm(S) ...)). Then we take the Cartesian product and select the tuples that are to be joined: P := σc1 = x1(σc2 = x2(... σcm = xm(R × T) ...)). Finally we take a projection to get rid of the renamed attributes: U := πc1, ..., cm, r1, ..., rn, s1, ..., sk(P). θ-join and equijoin Consider tables Car and Boat which list models of cars and boats and their respective prices. Suppose a customer wants to buy a car and a boat, but she does not want to spend more money for the boat than for the car. The θ-join (⋈θ) on the predicate CarPrice ≥ BoatPrice produces the flattened pairs of rows which satisfy the predicate. When using a condition where the attributes are equal, for example Price, then the condition may be specified as Price = Price or alternatively as (Price) itself. In order to combine tuples from two relations where the combination condition is not simply the equality of shared attributes, it is convenient to have a more general form of join operator: the θ-join (or theta-join). The θ-join is a binary operator that is written as R ⋈a θ b S or R ⋈a θ v S, where a and b are attribute names, θ is a binary relational operator in the set {<, ≤, =, ≠, >, ≥}, v is a value constant, and R and S are relations. The result of this operation consists of all combinations of tuples in R and S that satisfy θ. The result of the θ-join is defined only if the headers of S and R are disjoint, that is, do not contain a common attribute. The simulation of this operation in the fundamental operations is therefore as follows: R ⋈θ S = σθ(R × S) In case the operator θ is the equality operator (=) then this join is also called an equijoin. Note, however, that a computer language that supports the natural join and selection operators does not need the θ-join as well, as this can be achieved by selection from the result of a natural join (which degenerates to Cartesian product when there are no shared attributes). In SQL implementations, joining on a predicate is usually called an inner join, and the on keyword allows one to specify the predicate used to filter the rows. It is important to note: forming the flattened Cartesian product then filtering the rows is conceptually correct, but an implementation would use more sophisticated data structures to speed up the join query. Semijoin The left semijoin (⋉; its mirror image, the right semijoin ⋊, is defined symmetrically) is a join similar to the natural join and written as R ⋉ S, where R and S are relations. The result is the set of all tuples in R for which there is a tuple in S that is equal on their common attribute names. The difference from a natural join is that the other columns of S do not appear. For example, consider the tables Employee and Dept and their semijoin. More formally the semantics of the semijoin can be defined as follows: R ⋉ S = { r | r ∈ R ∧ ∃s ∈ S : Fun(r ∪ s) }, where Fun(r ∪ s) is as in the definition of natural join. The semijoin can be simulated using the natural join as follows. If a1, ..., an are the attribute names of R, then R ⋉ S = πa1, ..., an(R ⨝ S). Since we can simulate the natural join with the basic operators, it follows that this also holds for the semijoin. In Codd's 1970 paper, semijoin is called restriction. Antijoin The antijoin (▷), written as R ▷ S where R and S are relations, is similar to the semijoin, but the result of an antijoin is only those tuples in R for which there is no tuple in S that is equal on their common attribute names.
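The semijoin and antijoin can both be sketched directly from these definitions in the same toy Python representation used above (helper names invented for illustration):

```python
def agrees(r, s):
    """True iff rows r and s are equal on their common attribute names."""
    dr, ds = dict(r), dict(s)
    return all(dr[a] == ds[a] for a in dr.keys() & ds.keys())

def semijoin(R, S):
    """R ⋉ S: the tuples of R that have at least one matching tuple in S."""
    return {r for r in R if any(agrees(r, s) for s in S)}

def antijoin(R, S):
    """R ▷ S: the tuples of R with no matching tuple in S,
    i.e. the complement R − (R ⋉ S)."""
    return R - semijoin(R, S)

employee = {frozenset({"Name": "Harry", "DeptName": "Finance"}.items()),
            frozenset({"Name": "Mary", "DeptName": "Human Resources"}.items())}
dept = {frozenset({"DeptName": "Finance"}.items())}

print(semijoin(employee, dept))   # only Harry's row survives
print(antijoin(employee, dept))   # only Mary's row survives
```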
For an example consider the tables Employee and Dept and their antijoin. The antijoin is formally defined as follows: R ▷ S = { r | r ∈ R ∧ ¬∃s ∈ S : Fun(r ∪ s) }, that is, the set of tuples r in R such that there is no tuple s in S satisfying Fun(r ∪ s), where Fun is as in the definition of natural join. The antijoin can also be defined as the complement of the semijoin, as follows: R ▷ S = R − (R ⋉ S). Given this, the antijoin is sometimes called the anti-semijoin, and the antijoin operator is sometimes written as a semijoin symbol with a bar above it, instead of ▷. In the case where the relations have the same attributes (union-compatible), antijoin is the same as minus. Division The division (÷) is a binary operation that is written as R ÷ S. Division is not implemented directly in SQL. The result consists of the restrictions of tuples in R to the attribute names unique to R, i.e., in the header of R but not in the header of S, for which it holds that all their combinations with tuples in S are present in R. Example: if Completed lists which students have completed which tasks, and DBProject contains all the tasks of the Database project, then the result of the division Completed ÷ DBProject contains exactly the students who have completed both of the tasks in the Database project. More formally the semantics of the division is defined as follows: R ÷ S = { t[a1, ..., an] | t ∈ R ∧ ∀s ∈ S ( (t[a1, ..., an] ∪ s) ∈ R ) }, where {a1, ..., an} is the set of attribute names unique to R and t[a1, ..., an] is the restriction of t to this set. It is usually required that the attribute names in the header of S are a subset of those of R because otherwise the result of the operation will always be empty. The simulation of the division with the basic operations is as follows. We assume that a1, ..., an are the attribute names unique to R and b1, ..., bm are the attribute names of S. In the first step we project R on its unique attribute names and construct all combinations with tuples in S: T := πa1, ..., an(R) × S In the prior example, T would represent a table such that every Student (because Student is the unique key / attribute of the Completed table) is combined with every given Task. So Eugene, for instance, would have two rows, Eugene → Database1 and Eugene → Database2 in T. EG: First, let's pretend that "Completed" has a third attribute called "grade". It's unwanted baggage here, so we must project it off always. In fact in this step we can drop "Task" from R as well; the multiplication puts it back on. T := πStudent(R) × S // This gives us every possible desired combination, including those that don't actually exist in R, and excluding others (e.g. Fred | compiler1, which is not a desired combination) In the next step we subtract R from T: U := T − R In U we have the possible combinations that "could have" been in R, but weren't. EG: Again with projections — T and R need to have identical attribute names/headers. U := T − πStudent,Task(R) // This gives us a "what's missing" list. So if we now take the projection on the attribute names unique to R then we have the restrictions of the tuples in R for which not all combinations with tuples in S were present in R: V := πa1, ..., an(U) EG: Project U down to just the attribute(s) in question (Student) V := πStudent(U) So what remains to be done is take the projection of R on its unique attribute names and subtract those in V: W := πa1, ..., an(R) − V EG: W := πStudent(R) − V. Common extensions In practice the classical relational algebra described above is extended with various operations such as outer joins, aggregate functions and even transitive closure.
Outer joins Whereas the result of a join (or inner join) consists of tuples formed by combining matching tuples in the two operands, an outer join contains those tuples and additionally some tuples formed by extending an unmatched tuple in one of the operands by "fill" values for each of the attributes of the other operand. Outer joins are not considered part of the classical relational algebra discussed so far. The operators defined in this section assume the existence of a null value, ω, which we do not define, to be used for the fill values; in practice this corresponds to the NULL in SQL. In order to make subsequent selection operations on the resulting table meaningful, a semantic meaning needs to be assigned to nulls; in Codd's approach the propositional logic used by the selection is extended to a three-valued logic, although we elide those details in this article. Three outer join operators are defined: left outer join, right outer join, and full outer join. (The word "outer" is sometimes omitted.) Left outer join The left outer join (⟕) is written as R ⟕ S where R and S are relations. The result of the left outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition (loosely speaking) to tuples in R that have no matching tuples in S. For an example consider the tables Employee and Dept and their left outer join. In the resulting relation, tuples in S which have no common values in common attribute names with tuples in R take a null value, ω. Since there are no tuples in Dept with a DeptName of Finance or Executive, ωs occur in the resulting relation where tuples in Employee have a DeptName of Finance or Executive. Let r1, r2, ..., rn be the attributes of the relation R and let {(ω, ..., ω)} be the singleton relation on the attributes that are unique to the relation S (those that are not attributes of R). Then the left outer join can be described in terms of the natural join (and hence using basic operators) as follows: R ⟕ S = (R ⨝ S) ∪ ((R − πr1, r2, ..., rn(R ⨝ S)) × {(ω, ..., ω)}). Right outer join The right outer join (⟖) behaves almost identically to the left outer join, but the roles of the tables are switched. The right outer join of relations R and S is written as R ⟖ S. The result of the right outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R. For example, consider the tables Employee and Dept and their right outer join. In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω. Since there are no tuples in Employee with a DeptName of Production, ωs occur in the Name and EmpId attributes of the resulting relation where tuples in Dept had DeptName of Production. Let s1, s2, ..., sn be the attributes of the relation S and let {(ω, ..., ω)} be the singleton relation on the attributes that are unique to the relation R (those that are not attributes of S). Then, as with the left outer join, the right outer join can be simulated using the natural join as follows: R ⟖ S = (R ⨝ S) ∪ ({(ω, ..., ω)} × (S − πs1, s2, ..., sn(R ⨝ S))). Full outer join The outer join (⟗) or full outer join in effect combines the results of the left and right outer joins. The full outer join is written as R ⟗ S where R and S are relations.
The result of the full outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R and tuples in R that have no matching tuples in S in their common attribute names. For an example consider the tables Employee and Dept and their full outer join. In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω. Tuples in S which have no common values in common attribute names with tuples in R also take a null value, ω. The full outer join can be simulated using the left and right outer joins (and hence the natural join and set union) as follows: R ⟗ S = (R ⟕ S) ∪ (R ⟖ S) Operations for domain computations There is nothing in relational algebra introduced so far that would allow computations on the data domains (other than evaluation of propositional expressions involving equality). For example, it is not possible using only the algebra introduced so far to write an expression that would multiply the numbers from two columns, e.g. a unit price with a quantity to obtain a total price. Practical query languages have such facilities, e.g. the SQL SELECT allows arithmetic operations to define new columns in the result (SELECT unit_price * quantity AS total_price FROM t), and a similar facility is provided more explicitly by Tutorial D's EXTEND keyword. In database theory, this is called extended projection. Aggregation Furthermore, computing various functions on a column, like the summing up of its elements, is also not possible using the relational algebra introduced so far. There are five aggregate functions that are included with most relational database systems. These operations are Sum, Count, Average, Maximum and Minimum. In relational algebra the aggregation operation over a schema (A1, A2, ..., An) is written as follows: G1, G2, ..., Gm g f1(A1'), f2(A2'), ..., fk(Ak') (r), where each Aj', 1 ≤ j ≤ k, is one of the original attributes Ai, 1 ≤ i ≤ n. The attributes preceding the g are grouping attributes, which function like a "group by" clause in SQL. Then there are an arbitrary number of aggregation functions applied to individual attributes. The operation is applied to an arbitrary relation r. The grouping attributes are optional, and if they are not supplied, the aggregation functions are applied across the entire relation to which the operation is applied. Let's assume that we have a table named Account with three columns, namely Account_Number, Branch_Name and Balance. We wish to find the maximum balance of each branch. This is accomplished by Branch_NameGMax(Balance)(Account). To find the highest balance of all accounts regardless of branch, we could simply write GMax(Balance)(Account). Grouping is often written as Branch_NameɣMax(Balance)(Account) instead. Transitive closure Although relational algebra seems powerful enough for most practical purposes, there are some simple and natural operators on relations that cannot be expressed by relational algebra. One of them is the transitive closure of a binary relation. Given a domain D, let binary relation R be a subset of D × D. The transitive closure R+ of R is the smallest subset of D × D that contains R and satisfies the following condition: for all x, y and z in D, if (x, y) ∈ R+ and (y, z) ∈ R+, then (x, z) ∈ R+. It can be proved that there is no relational algebra expression E(R) taking R as a variable argument that produces R+. SQL, however, has officially supported such fixpoint queries since 1999, and it had vendor-specific extensions in this direction well before that. Implementations The first query language to be based on Codd's algebra was Alpha, developed by Dr. Codd himself.
Subsequently, ISBL was created, and this pioneering work has been acclaimed by many authorities as having shown the way to make Codd's idea into a useful language. Business System 12 was a short-lived industry-strength relational DBMS that followed the ISBL example. In 1998 Chris Date and Hugh Darwen proposed a language called Tutorial D intended for use in teaching relational database theory, and its query language also draws on ISBL's ideas. Rel is an implementation of Tutorial D. Bmg is an implementation of relational algebra in Ruby which closely follows the principles of Tutorial D and The Third Manifesto. Even the query language of SQL is loosely based on a relational algebra, though the operands in SQL (tables) are not exactly relations and several useful theorems about the relational algebra do not hold in the SQL counterpart (arguably to the detriment of optimisers and/or users). The SQL table model is a bag (multiset), rather than a set. For example, the expression (R ∪ S) − T = (R − T) ∪ (S − T) is a theorem for relational algebra on sets, but not for relational algebra on bags. See also Cartesian product Codd's theorem D4 (programming language) (an implementation of D) Data modeling Database Datalog Logic of relatives Object-role modeling Projection (mathematics) Projection (relational algebra) Projection (set theory) Relation Relation (database) Relation algebra Relation composition Relation construction Relational calculus Relational database Relational model SQL Theory of relations Triadic relation Tuple relational calculus Notes References Further reading (For relationship with cylindric algebras). External links RAT Relational Algebra Translator Free software to convert relational algebra to SQL Lecture Videos: Relational Algebra Processing - An introduction to how database systems process relational algebra Lecture Notes: Relational Algebra – A quick tutorial to adapt SQL queries into relational algebra Relational – A graphic implementation of the relational algebra Query Optimization This paper is an introduction into the use of the relational algebra in optimizing queries, and includes numerous citations for more in-depth study. Relational Algebra System for Oracle and Microsoft SQL Server Pireal – An experimental educational tool for working with Relational Algebra DES – An educational tool for working with Relational Algebra and other formal languages RelaX - Relational Algebra Calculator (open-source software available as an online service without registration) RA: A Relational Algebra Interpreter Translating SQL to Relational Algebra Relational model Database management systems
Relational algebra
[ "Mathematics" ]
5,054
[ "Fields of abstract algebra", "Mathematical relations", "Relational algebra" ]
175,286
https://en.wikipedia.org/wiki/Tuple%20relational%20calculus
Tuple calculus is a calculus that was created and introduced by Edgar F. Codd as part of the relational model, in order to provide a declarative database-query language for data manipulation in this data model. It formed the inspiration for the database-query languages QUEL and SQL, of which the latter, although far less faithful to the original relational model and calculus, is now the de facto standard database-query language; a dialect of SQL is used by nearly every relational-database-management system. Michel Lacroix and Alain Pirotte proposed domain calculus, which is closer to first-order logic and together with Codd showed that both of these calculi (as well as relational algebra) are equivalent in expressive power. Subsequently, query languages for the relational model were called relationally complete if they could express at least all of these queries. Definition Relational database Since the calculus is a query language for relational databases we first have to define a relational database. The basic relational building block is the domain (somewhat similar, but not equal to, a data type). A tuple is a finite sequence of attributes, which are ordered pairs of domains and values. A relation is a set of (compatible) tuples. Although these relational concepts are mathematically defined, those definitions map loosely to traditional database concepts. A table is an accepted visual representation of a relation; a tuple is similar to the concept of a row. We first assume the existence of a set C of column names, examples of which are "name", "author", "address", etcetera. We define headers as finite subsets of C. A relational database schema is defined as a tuple S = (D, R, h) where D is the domain of atomic values (see relational model for more on the notions of domain and atomic value), R is a finite set of relation names, and h : R → 2C a function that associates a header with each relation name in R. (Note that this is a simplification from the full relational model where there is more than one domain and a header is not just a set of column names but also maps these column names to a domain.) Given a domain D we define a tuple over D as a partial function t : C ⇸ D that maps some column names to an atomic value in D. An example would be (name : "Harry", age : 25). The set of all tuples over D is denoted as TD. The subset of C for which a tuple t is defined is called the domain of t (not to be confused with the domain in the schema) and denoted as dom(t). Finally we define a relational database given a schema S = (D, R, h) as a function db : R → 2TD that maps the relation names in R to finite subsets of TD, such that for every relation name r in R and tuple t in db(r) it holds that dom(t) = h(r). The latter requirement simply says that all the tuples in a relation should contain the same column names, namely those defined for it in the schema. Atoms For the construction of the formulas we will assume an infinite set V of tuple variables. The formulas are defined given a database schema S = (D, R, h) and a partial function type : V ⇸ 2C, called a type assignment, that assigns headers to some tuple variables. We then define the set of atomic formulas A[S,type] with the following rules: if v and w in V, a in type(v) and b in type(w) then the formula v.a = w.b is in A[S,type], if v in V, a in type(v) and k denotes a value in D then the formula v.a = k is in A[S,type], and if v in V, r in R and type(v) = h(r) then the formula r(v) is in A[S,type].
Examples of atoms are: (t.age = s.age) — t has an age attribute and s has an age attribute with the same value (t.name = "Codd") — tuple t has a name attribute and its value is "Codd" Book(t) — tuple t is present in relation Book. The formal semantics of such atoms is defined given a database db over S and a tuple variable binding val : V → TD that maps tuple variables to tuples over the domain in S: v.a = w.b is true if and only if val(v)(a) = val(w)(b) v.a = k is true if and only if val(v)(a) = k r(v) is true if and only if val(v) is in db(r) Formulas The atoms can be combined into formulas, as is usual in first-order logic, with the logical operators ∧ (and), ∨ (or) and ¬ (not), and we can use the existential quantifier (∃) and the universal quantifier (∀) to bind the variables. We define the set of formulas F[S,type] inductively with the following rules: every atom in A[S,type] is also in F[S,type] if f1 and f2 are in F[S,type] then the formula f1 ∧ f2 is also in F[S,type] if f1 and f2 are in F[S,type] then the formula f1 ∨ f2 is also in F[S,type] if f is in F[S,type] then the formula ¬ f is also in F[S,type] if v in V, H a header and f a formula in F[S,type[v->H]] then the formula ∃ v : H ( f ) is also in F[S,type], where type[v->H] denotes the function that is equal to type except that it maps v to H, if v in V, H a header and f a formula in F[S,type[v->H]] then the formula ∀ v : H ( f ) is also in F[S,type] Examples of formulas: t.name = "C. J. Date" ∨ t.name = "H. Darwen" Book(t) ∨ Magazine(t) ∀ t : {author, title, subject} ( ¬ ( Book(t) ∧ t.author = "C. J. Date" ∧ ¬ ( t.subject = "relational model"))) Note that the last formula states that all books that are written by C. J. Date have as their subject the relational model. As usual we omit brackets if this causes no ambiguity about the semantics of the formula. We will assume that the quantifiers quantify over the universe of all tuples over the domain in the schema. This leads to the following formal semantics for formulas given a database db over S and a tuple variable binding val : V -> TD: f1 ∧ f2 is true if and only if f1 is true and f2 is true, f1 ∨ f2 is true if and only if f1 is true or f2 is true or both are true, ¬ f is true if and only if f is not true, ∃ v : H ( f ) is true if and only if there is a tuple t over D such that dom(t) = H and the formula f is true for val[v->t], and ∀ v : H ( f ) is true if and only if for all tuples t over D such that dom(t) = H the formula f is true for val[v->t]. Queries Finally we define what a query expression looks like given a schema S = (D, R, h): { v : H | f(v) } where v is a tuple variable, H a header and f(v) a formula in F[S,type] where type = { (v, H) } and with v as its only free variable. The result of such a query for a given database db over S is the set of all tuples t over D with dom(t) = H such that f is true for db and val = { (v, t) }. Examples of query expressions are: { t : {name} | ∃ s : {name, wage} ( Employee(s) ∧ s.wage = 50.000 ∧ t.name = s.name ) } { t : {supplier, article} | ∃ s : {s#, sname} ( Supplier(s) ∧ s.sname = t.supplier ∧ ∃ p : {p#, pname} ( Product(p) ∧ p.pname = t.article ∧ ∃ a : {s#, p#} ( Supplies(a) ∧ s.s# = a.s# ∧ a.p# = p.p# ))) } Semantic and syntactic restriction Domain-independent queries Because the semantics of the quantifiers is such that they quantify over all the tuples over the domain in the schema it can be that a query may return a different result for a certain database if another schema is presumed. 
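To see how such a query expression evaluates, here is a small Python sketch of the first example; the database contents and attribute values are invented for illustration, and 50.000 is read as the number 50000:

```python
# A toy database for a schema with one relation name "Employee",
# h("Employee") = {name, wage}; tuples are plain dicts.
db = {"Employee": [{"name": "Ada", "wage": 50000},
                   {"name": "Ben", "wage": 42000}]}

# { t : {name} | ∃ s : {name, wage} ( Employee(s) ∧ s.wage = 50000 ∧ t.name = s.name ) }
# Because the atom Employee(s) must hold, the existential quantifier over s
# only needs to range over the tuples actually stored in db("Employee").
result = [{"name": s["name"]}
          for s in db["Employee"]
          if s["wage"] == 50000]
print(result)   # [{'name': 'Ada'}]
```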
For example, consider the two schemas S1 = ( D1, R, h ) and S2 = ( D2, R, h ) with domains D1 = { 1 }, D2 = { 1, 2 }, relation names R = { "r1" } and headers h = { ("r1", {"a"}) }. Both schemas have a common instance: db = { ( "r1", { ("a", 1) } ) } If we consider the following query expression { t : {a} | t.a = t.a } then its result on db is either { (a : 1) } under S1 or { (a : 1), (a : 2) } under S2. It will also be clear that if we take the domain to be an infinite set, then the result of the query will also be infinite. To solve these problems we will restrict our attention to those queries that are domain independent, i.e., the queries that return the same result for a database under all of its schemas. An interesting property of these queries is that if we assume that the tuple variables range over tuples over the so-called active domain of the database, which is the subset of the domain that occurs in at least one tuple in the database or in the query expression, then the semantics of the query expressions does not change. In fact, in many definitions of the tuple calculus this is how the semantics of the quantifiers is defined, which makes all queries by definition domain independent. Safe queries In order to limit the query expressions such that they express only domain-independent queries a syntactical notion of safe query is usually introduced. To determine whether a query expression is safe we will derive two types of information from a query. The first is whether a variable-column pair t.a is bound to the column of a relation or a constant, and the second is whether two variable-column pairs are directly or indirectly equated (denoted t.v == s.w). For deriving boundedness we introduce the following reasoning rules: in " v.a = w.b " no variable-column pair is bound, in " v.a = k " the variable-column pair v.a is bound, in " r(v) " all pairs v.a are bound for a in type(v), in " f1 ∧ f2 " all pairs are bound that are bound either in f1 or in f2, in " f1 ∨ f2 " all pairs are bound that are bound both in f1 and in f2, in " ¬ f " no pairs are bound, in " ∃ v : H ( f ) " a pair w.a is bound if it is bound in f and w <> v, and in " ∀ v : H ( f ) " a pair w.a is bound if it is bound in f and w <> v. For deriving equatedness we introduce the following reasoning rules (next to the usual reasoning rules for equivalence relations: reflexivity, symmetry and transitivity): in " v.a = w.b " it holds that v.a == w.b, in " v.a = k " no pairs are equated, in " r(v) " no pairs are equated, in " f1 ∧ f2 " it holds that v.a == w.b if it holds either in f1 or in f2, in " f1 ∨ f2 " it holds that v.a == w.b if it holds both in f1 and in f2, in " ¬ f " no pairs are equated, in " ∃ v : H ( f ) " it holds that w.a == x.b if it holds in f and w<>v and x<>v, and in " ∀ v : H ( f ) " it holds that w.a == x.b if it holds in f and w<>v and x<>v. We then say that a query expression { v : H | f(v) } is safe if for every column name a in H we can derive that v.a is equated with a bound pair in f, for every subexpression of f of the form " ∀ w : G ( g ) " we can derive that for every column name a in G we can derive that w.a is equated with a bound pair in g, and for every subexpression of f of the form " ∃ w : G ( g ) " we can derive that for every column name a in G we can derive that w.a is equated with a bound pair in g. The restriction to safe query expressions does not limit the expressiveness since all domain-independent queries that could be expressed can also be expressed by a safe query expression. 
This can be proven by showing that for a schema S = (D, R, h), a given set K of constants in the query expression, a tuple variable v and a header H we can construct a safe formula for every pair v.a with a in H that states that its value is in the active domain. For example, assume that K = {1, 2}, R = {"r"} and h = { ("r", {"a", "b"}) }; then the corresponding safe formula for v.b is: v.b = 1 ∨ v.b = 2 ∨ ∃ w ( r(w) ∧ ( v.b = w.a ∨ v.b = w.b ) ) This formula, then, can be used to rewrite any unsafe query expression to an equivalent safe query expression by adding such a formula for every variable v and column name a in its type where it is used in the expression. Effectively this means that we let all variables range over the active domain, which, as was already explained, does not change the semantics if the expressed query is domain independent. Systems DES – An educational tool for working with Tuple Relational Calculus and other formal languages WinRDBI – An educational tool for working with Tuple Relational Calculus and other formal languages See also Relational algebra Relational calculus Domain relational calculus (DRC) References Edgar F. Codd: A Relational Model of Data for Large Shared Data Banks. Communications of the ACM, 13(6):377–387, 1970. Relational model Logical calculi
Tuple relational calculus
[ "Mathematics" ]
3,342
[ "Mathematical logic", "Logical calculi" ]
175,312
https://en.wikipedia.org/wiki/Unsymmetrical%20dimethylhydrazine
Unsymmetrical dimethylhydrazine (abbreviated as UDMH; also known as 1,1-dimethylhydrazine, heptyl or Geptil) is a chemical compound with the formula H2NN(CH3)2 that is primarily used as a rocket propellant. At room temperature, UDMH is a colorless liquid with a sharp, fishy, ammonia-like smell typical of organic amines. Samples turn yellowish on exposure to air and absorb oxygen and carbon dioxide. It is miscible with water, ethanol, and kerosene. At concentrations between 2.5% and 95% in air, its vapors are flammable. It is not sensitive to shock. Symmetrical dimethylhydrazine (1,2-dimethylhydrazine) also exists, but it is not as useful. UDMH can be oxidized in air to form many different substances, including toxic ones. Synthesis UDMH was first prepared in 1875 by Emil Fischer, who discovered and named the class of hydrazines, by reducing N-nitrosodimethylamine with zinc in boiling acetic acid. Fischer's student Edward Renouf later studied UDMH more extensively as part of his doctoral dissertation. Other historical lab routes include methylation of hydrazine, reduction of nitrodimethylamine and amination of dimethylamine with aminopersulfuric acid. UDMH is produced industrially by two routes. Based on the Olin Raschig process, one method involves the reaction of monochloramine with dimethylamine, giving 1,1-dimethylhydrazinium chloride: (CH3)2NH + NH2Cl → (CH3)2NNH2 ⋅ HCl In the presence of suitable catalysts, acetylhydrazine can be N-dimethylated using formaldehyde and hydrogen to give N,N-dimethyl-N'-acetylhydrazine, which can subsequently be hydrolyzed: CH3C(O)NHNH2 + 2CH2O + 2H2 → CH3C(O)NHN(CH3)2 + 2H2O CH3C(O)NHN(CH3)2 + H2O → CH3COOH + H2NN(CH3)2 Uses UDMH is often used in hypergolic rocket fuels as a bipropellant in combination with the oxidizer nitrogen tetroxide and, less frequently, with IRFNA (inhibited red fuming nitric acid) or liquid oxygen. UDMH is a derivative of hydrazine and is sometimes referred to as a hydrazine. As a fuel, it is described in specification MIL-PRF-25604 in the United States. UDMH is stable and can be kept loaded in rocket fuel systems for long periods, which makes it appealing for use in many liquid rocket engines, despite its cost. In some applications, such as the OMS in the Space Shuttle or maneuvering engines, monomethylhydrazine is used instead due to its slightly higher specific impulse. In some kerosene-fueled rockets, UDMH functions as a starter fuel to start combustion and warm the rocket engine prior to switching to kerosene. UDMH has higher stability than hydrazine, especially at elevated temperatures, and can be used as its replacement or together with it in a mixture. UDMH is used in many European, Russian, Indian, and Chinese rocket designs. The Russian SS-11 Sego (aka 8K84) ICBM, SS-19 Stiletto (aka 15A30) ICBM, Proton, Kosmos-3M, R-29RMU2 Layner, R-36M, Rokot (based on the 15A30) and the Chinese Long March 2 are the most notable users of UDMH (which is referred to as "heptyl", a codename from the Soviet era, by Russian engineers). The Titan, GSLV, and Delta rocket families use a mixture of 50% hydrazine and 50% UDMH, called Aerozine 50, in different stages. There is speculation that it is the fuel used in the ballistic missiles that North Korea developed and tested in 2017. Safety Hydrazine and its methyl derivatives are toxic, but LD50 values have not been reported. It is a precursor to dimethylnitrosamine, which is carcinogenic. According to scientific data, usage of UDMH in rockets at Baikonur Cosmodrome has had adverse effects on the environment.
One such instance was the Nedelin catastrophe in 1960, when UDMH and dinitrogen tetroxide leaked from a rocket after an explosion and killed a number of bystanders through burn injuries and toxic exposure. See also Aerozine 50 C-Stoff Devil's venom UH 25 References Further reading External links Encyclopedia Astronautica CDC – NIOSH Pocket Guide to Chemical Hazards Hydrazines Rocket fuels Methyl compounds
Unsymmetrical dimethylhydrazine
[ "Chemistry" ]
1,043
[ "Functional groups", "Hydrazines" ]
175,417
https://en.wikipedia.org/wiki/Dynamic%20recompilation
In computer science, dynamic recompilation is a feature of some emulators and virtual machines, where the system may recompile some part of a program during execution. By compiling during execution, the system can tailor the generated code to reflect the program's run-time environment, and potentially produce more efficient code by exploiting information that is not available to a traditional static compiler. Uses Most dynamic recompilers are used to convert machine code between architectures at runtime. This is a task often needed in the emulation of legacy gaming platforms. In other cases, a system may employ dynamic recompilation as part of an adaptive optimization strategy to execute a portable program representation such as Java or .NET Common Language Runtime bytecodes. Full-speed debuggers also utilize dynamic recompilation to reduce the space overhead incurred in most deoptimization techniques, and to provide other features such as dynamic thread migration. Tasks The main tasks a dynamic recompiler has to perform are: Reading in machine code from the source platform Emitting machine code for the target platform A dynamic recompiler may also perform some auxiliary tasks: Managing a cache of recompiled code Updating of elapsed cycle counts on platforms with cycle count registers Management of interrupt checking Providing an interface to virtualized support hardware, for example a GPU Optimizing higher-level code structures to run efficiently on the target hardware (see below) Applications Many Java virtual machines feature dynamic recompilation. Apple's Rosetta for Mac OS X on x86 allows PowerPC code to be run on the x86 architecture. Later versions of the Mac 68K emulator used in classic Mac OS to run 680x0 code on PowerPC hardware. Psyco, a specializing compiler for Python. The HP Dynamo project, an example of a transparent binary dynamic optimizer. DynamoRIO, an open-source successor to Dynamo that works with the ARM, x86-64 and IA-64 (Itanium) instruction sets. The Vx32 virtual machine employs dynamic recompilation to create OS-independent x86 architecture sandboxes for safe application plugins. Microsoft Virtual PC for Mac, used to run x86 code on PowerPC. FreeKEYB, an international DOS keyboard and console driver with many usability enhancements, utilized self-modifying code and dynamic dead code elimination to minimize its in-memory image based on its user configuration (selected features, languages, layouts) and actual runtime environment (OS variant and version, loaded drivers, underlying hardware), automatically resolving dependencies, dynamically relocating and recombining code sections on byte-level granularity and optimizing opstrings based on semantic information provided in the source code, relocation information generated by special tools during assembly and profile information obtained at load time. The backwards-compatibility functionality of the Xbox 360 (i.e. running games written for the original Xbox) is widely assumed to use dynamic recompilation. Apple's Rosetta 2 for Apple silicon permits many applications compiled for x86-64-based processors to be translated for execution on Apple silicon. QEMU Emulators PCSX2, a PlayStation 2 emulator, has a recompiler called "microVU", the successor of "SuperVU". GCemu, a GameCube emulator. GEM, a Game Boy emulator for MSX, uses an optimizing dynamic recompiler. DeSmuME, a Nintendo DS emulator, has a dynarec option. Soywiz's Psp, a PlayStation Portable emulator, has a dynarec option.
Mupen64Plus, a multi-platform Nintendo 64 emulator. Yabause, a multi-platform Sega Saturn emulator. PPSSPP, a multi-platform PlayStation Portable emulator, uses a JIT dynamic recompiler by default. PCem, an emulator for old PC platforms that can be used on Windows and Linux. It uses a recompiler to translate legacy CPU instructions into modern CPU instructions and to gain some speed in emulation overall. 86Box, a fork of PCem with the goal of more accurate emulation. It uses a recompiler for the same purpose. See also Binary recompiler Binary translation Comparison of platform virtualization software Just-in-time compilation Instrumentation (computer programming) References External links Dynamic recompiler tutorial. Archived at the Wayback Machine. Blog posts about writing a MIPS to PPC dynamic recompiler. Virtualization Compiler construction Emulation software
Dynamic recompilation
[ "Technology", "Engineering" ]
948
[ "Emulation software", "Computer networks engineering", "Virtualization", "History of computing" ]
175,426
https://en.wikipedia.org/wiki/AES3
AES3 is a standard for the exchange of digital audio signals between professional audio devices. An AES3 signal can carry two channels of pulse-code-modulated digital audio over several transmission media including balanced lines, unbalanced lines, and optical fiber. AES3 was jointly developed by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU) and so is also known as AES/EBU. The standard was first published in 1985 and was revised in 1992 and 2003. AES3 has been incorporated into the International Electrotechnical Commission's standard IEC 60958, and is available in a consumer-grade variant known as S/PDIF. History and development The development of standards for digital audio interconnect for both professional and domestic audio equipment began in the late 1970s in a joint effort between the Audio Engineering Society and the European Broadcasting Union, and culminated in the publishing of AES3 in 1985. The AES3 standard has been revised in 1992 and 2003 and is published in AES and EBU versions. Early on, the standard was frequently known as AES/EBU. Variants using different physical connections are specified in IEC 60958. These are essentially consumer versions of AES3 for use within the domestic high fidelity environment, using connectors more commonly found in the consumer market. These variants are commonly known as S/PDIF. Related standards and documents IEC 60958 IEC 60958 (formerly IEC 958) is the International Electrotechnical Commission's standard on digital audio interfaces. It reproduces the AES3 professional digital audio interconnect standard and the consumer version of the same, S/PDIF. The standard consists of several parts: IEC 60958-1: General IEC 60958-2: Software Information Delivery Mode IEC 60958-3: Consumer applications IEC 60958-4: Professional applications IEC 60958-5: Consumer application enhancement AES-2id AES-2id is an AES information document published by the Audio Engineering Society for digital audio engineering: guidelines for the use of the AES3 interface. This document provides guidelines for the use of AES3, "AES Recommended Practice for Digital Audio Engineering – Serial transmission format for two-channel linearly represented digital audio data". It also covers related standards used in conjunction with AES3, such as AES11. AES-2id is available as a PDF from the standards section of the Audio Engineering Society web site. Hardware connections The AES3 standard parallels part 4 of the international standard IEC 60958. Of the physical interconnection types defined by IEC 60958, two are in common use. IEC 60958 type I Type I connections use balanced, three-conductor, 110-ohm twisted pair cabling with XLR connectors. Type I connections are most often used in professional installations and are considered the standard connector for AES3. The hardware interface is usually implemented using RS-422 line drivers and receivers. IEC 60958 type II IEC 60958 Type II defines an unbalanced electrical or optical interface for consumer electronics applications. The precursor of the IEC 60958 Type II specification was the Sony/Philips Digital Interface, or S/PDIF. Both were based on the original AES/EBU work. S/PDIF and AES3 are interchangeable at the protocol level, but at the physical level, they specify different electrical signalling levels and impedances, which may be significant in some applications.
BNC connector AES/EBU signals can also be run using unbalanced BNC connectors with 75-ohm coaxial cable. The unbalanced version supports a much longer maximum transmission distance than the 150-meter maximum of the balanced version. The AES-3id standard defines a 75-ohm BNC electrical variant of AES3. This uses the same cabling, patching and infrastructure as analogue or digital video, and is thus common in the broadcast industry. Protocol The low-level protocol for data transmission in AES3 and S/PDIF is largely identical, and the following discussion applies for S/PDIF, except as noted. AES3 was designed primarily to support stereo PCM encoded audio in either DAT format at 48 kHz or CD format at 44.1 kHz. No attempt was made to use a carrier able to support both rates; instead, AES3 allows the data to be run at any rate, and encodes the clock and the data together using biphase mark code (BMC); a short code sketch of BMC encoding follows below. Each bit occupies one time slot. Each audio sample (of up to 24 bits) is combined with four flag bits and a synchronisation preamble which is four time slots long to make a subframe of 32 time slots. The 32 time slots of each subframe are assigned as follows: time slots 0–3 carry the synchronisation preamble; slots 4–7 carry auxiliary data (or the low-order bits of a 24-bit sample); slots 8–27 carry the audio sample word; and slots 28–31 carry the four flag bits – validity (V), user data (U), channel status (C), and subframe parity (P). Two subframes (A and B, normally used for left and right audio channels) make a frame. Frames contain 64 bit periods and are produced once per audio sample period. At the highest level, each 192 consecutive frames are grouped into an audio block. While samples repeat each frame time, metadata is only transmitted once per audio block. At 48 kHz sample rate, there are 250 audio blocks per second, and 3,072,000 time slots per second supported by a 6.144 MHz biphase clock. Synchronisation preamble The synchronisation preamble is a specially coded preamble that identifies the subframe and its position within the audio block. Preambles are not normal BMC-encoded data bits, although they do still have zero DC bias. Three preambles are possible: X (or M): 11100010 if previous time slot was 0, 00011101 if it was 1. (Equivalently, 10010011 NRZI encoded.) Marks a word for channel A (left), other than at the start of an audio block. Y (or W): 11100100 if previous time slot was 0, 00011011 if it was 1. (Equivalently, 10010110 NRZI encoded.) Marks a word for channel B (right). Z (or B): 11101000 if previous time slot was 0, 00010111 if it was 1. (Equivalently, 10011100 NRZI encoded.) Marks a word for channel A (left) at the start of an audio block. The three preambles are called X, Y, Z in the AES3 standard; and M, W, B in IEC 958 (an AES extension). The 8-bit preambles are transmitted in the time allocated to the first four time slots of each subframe (time slots 0 to 3). Any of the three marks the beginning of a subframe. X or Z marks the beginning of a frame, and Z marks the beginning of an audio block. (Waveform diagrams of preambles X, Y and Z, and of BMC-encoded all-0 and all-1 bits, illustrate these patterns in the standard.) In two-channel AES3, the preambles form a pattern of ZYXYXYXY..., but it is straightforward to extend this structure to additional channels (more subframes per frame), each with a Y preamble, as is done in the MADI protocol. Channel status word There is one channel status bit in each subframe, a total of 192 bits or 24 bytes for each channel in each block.
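As an illustration of the biphase mark coding described above, here is a small Python sketch that encodes data bits into BMC half-cell levels; representing levels as 0/1 integers and the function name are illustrative choices, not part of the standard:

# Biphase mark code: every data bit occupies one time slot of two half-cells;
# the level toggles at each slot boundary, and toggles again mid-slot for a 1.

def bmc_encode(bits, level=0):
    """Return a list of half-cell levels (0/1) for the given data bits."""
    out = []
    for bit in bits:
        level ^= 1       # transition at every slot boundary
        out.append(level)
        if bit:
            level ^= 1   # extra mid-slot transition encodes a 1
        out.append(level)
    return out

print(bmc_encode([0, 0, 0, 0]))  # [1, 1, 0, 0, 1, 1, 0, 0]: half-rate square wave
print(bmc_encode([1, 1, 1, 1]))  # [1, 0, 1, 0, 1, 0, 1, 0]: full-rate square wave

Because the level always changes at slot boundaries, the code is DC-free and carries its own clock, which is why AES3 receivers can recover timing from the data stream itself.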
Between the AES3 and S/PDIF standards, the contents of the 192-bit channel status word differ significantly, although they agree that the first channel status bit distinguishes between the two. In the case of AES3, the standard describes, in detail, the function of each bit. Byte 0: Basic control data: sample rate, compression, emphasis bit 0: A value of 1 indicates this is AES3 channel status data. 0 indicates this is S/PDIF data. bit 1: A value of 0 indicates this is linear audio PCM data. A value of 1 indicates other (usually non-audio) data. bits 2–4: Indicates the type of signal preemphasis applied to the data. Generally set to 100 (none). bit 5: A value of 0 indicates that the source is locked to some (unspecified) external time sync. A value of 1 indicates an unlocked source. bits 6–7: Sample rate. These bits are redundant when real-time audio is transmitted (the receiver can observe the sample rate directly), but are useful if AES3 data is recorded or otherwise stored. Options are unspecified, 48 kHz (the default), 44.1 kHz, and 32 kHz. Additional sample rate options may be indicated in the extended sample rate field (see below). Byte 1: indicates if the audio stream is stereo, mono or some other combination. bits 0–3: Indicates the relationship of the two channels; they might be unrelated audio data, a stereo pair, duplicated mono data, music and voice commentary, a stereo sum/difference code. bits 4–7: Used to indicate the format of the user channel word Byte 2: Audio word length bits 0–2: Aux bits usage. This indicates how the aux bits (time slots 4–7) are used. Generally set to 000 (unused) or 001 (used for 24-bit audio data). bits 3–5: Word length. Specifies the sample size, relative to the 20- or 24-bit maximum. Can specify 0, 1, 2 or 4 missing bits. Unused bits are filled with 0, but audio processing functions such as mixing will generally fill them in with valid data without changing the effective word length. bits 6–7: Unused Byte 3: Used only for multichannel applications Byte 4: Additional sample rate information bits 0–1: Indicates the grade of the sample rate reference, per AES11 bit 2: Reserved bits 3–6: Extended sample rate. This indicates other sample rates, not representable in byte 0 bits 6–7. Values are assigned for 24, 96, and 192 kHz, as well as 22.05, 88.2, and 176.4 kHz. bit 7: Sampling frequency scaling flag. If set, indicates that the sample rate is multiplied by 1/1.001 to match NTSC video frame rates. Byte 5: Reserved Bytes 6–9: Four ASCII characters for indicating channel origin. Widely used in large studios. Bytes 10–13: Four ASCII characters indicating channel destination, to control automatic switchers. Less often used. Bytes 14–17: 32-bit sample address, incrementing block-to-block by 192 (because there are 192 frames per block). At 48 kHz, this wraps approximately every day. Bytes 18–21: 32-bit sample address offset to indicate samples since midnight. Byte 22: Channel status word reliability indication bits 0–3: Reserved bit 4: If set, bytes 0–5 (signal format) are unreliable. bit 5: If set, bytes 6–13 (channel labels) are unreliable. bit 6: If set, bytes 14–17 (sample address) are unreliable. bit 7: If set, bytes 18–21 (timestamp) are unreliable. Byte 23: CRC. This byte is used to detect corruption of the channel status word, as might be caused by switching mid-block.
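The byte 0 assignments above lend themselves to a direct decoder. The following Python sketch is illustrative only; in particular, mapping "bit 0" to the least significant bit of the byte is an assumption made here for readability, since the standard numbers bits by transmission order:

# Decode byte 0 of an AES3 channel status word per the field layout above.

SAMPLE_RATES = {0b00: 'unspecified', 0b01: '48 kHz', 0b10: '44.1 kHz', 0b11: '32 kHz'}

def decode_byte0(b):
    return {
        'professional': bool(b & 0x01),   # 1 = AES3, 0 = S/PDIF
        'non_audio':    bool(b & 0x02),   # 1 = other than linear PCM
        'emphasis':     (b >> 2) & 0x07,  # 0b100 = no preemphasis
        'unlocked':     bool(b & 0x20),   # 1 = source not locked to sync
        'sample_rate':  SAMPLE_RATES[(b >> 6) & 0x03],
    }

# Example: professional, linear PCM, no emphasis, locked, 48 kHz.
print(decode_byte0(0b01010001))

Note that the sample-rate code values here simply follow the order in which the options are listed above; a real implementation should be checked against the standard.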
Embedded timecode SMPTE timecode data can be embedded within AES3 signals. It can be used for synchronization and for logging and identifying audio content. It is embedded as a 32-bit binary word in bytes 18 to 21 of the channel status data. The AES11 standard provides information on the synchronization of digital audio structures. The AES52 standard describes how to insert unique identifiers into an AES3 bit stream. SMPTE 2110 SMPTE 2110-31 defines how to encapsulate an AES3 data stream in Real-time Transport Protocol packets for transmission over an IP network using the SMPTE 2110 IP-based multicast framework. SMPTE 302M SMPTE 302M-2007 defines how to encapsulate an AES3 data stream in an MPEG transport stream for television applications. Other formats AES3 digital audio format can also be carried over an Asynchronous Transfer Mode network. The standard for packing AES3 frames into ATM cells is AES47. See also ADAT Lightpipe – multichannel optical digital audio interface Notes References Further reading External links Download page for AES3 standard European Broadcasting Union, Specification of the Digital Audio Interface (The AES/EBU interface) Tech 3250-E third edition (2004) IEC - Historical Collection, IHS Audio communications protocols Digital audio Sound Broadcast engineering Wikipedia articles with ASCII art IEC 60958 Audio Engineering Society standards European Broadcasting Union
AES3
[ "Engineering" ]
2,859
[ "Audio Engineering Society standards", "Broadcast engineering", "Audio engineering", "Electronic engineering" ]
175,437
https://en.wikipedia.org/wiki/Gossypol
Gossypol is a natural phenol derived from the cotton plant (genus Gossypium). Gossypol is a phenolic aldehyde that permeates cells and acts as an inhibitor for several dehydrogenase enzymes. It is a yellow pigment. The structure exhibits atropisomerism, with the two enantiomers having different biochemical properties. Among other applications, it has been tested as a male oral contraceptive in China. In addition to its putative contraceptive properties, gossypol has also long been known to possess antimalarial properties. History Utilization of cotton-seed oil in the 19th century was complicated by the fact that it stained everything. In 1882–1883 James Longmore from Liverpool took several patents on separating the colorant by partial saponification of the oil, and in 1886 he presented his findings to the local section of the Society of Chemical Industry. He is often considered the discoverer of gossypol, even though he only isolated it in crude form. The name was coined in 1899 by Leon Marchlewski, who first purified the compound and studied some of its chemical properties. W. A. Withers and F. E. Carruth first attributed the toxic properties of the cotton seed (known since the 19th century) to gossypol in 1915, and its chemical formula was established in 1927 by Earl Perry Clark (1892–1943). Biosynthesis Gossypol is a terpenoid aldehyde which is formed metabolically from acetate via the isoprenoid pathway. The sesquiterpene dimer undergoes a radical coupling reaction to form gossypol. The biosynthesis begins when geranyl pyrophosphate (GPP) and isopentenyl pyrophosphate (IPP) are combined to make the sesquiterpene precursor farnesyl diphosphate (FPP). The cadinyl cation (1) is converted to 2 by (+)-δ-cadinene synthase. The (+)-δ-cadinene (2) is involved in making the basic aromatic sesquiterpene unit, hemigossypol, by oxidation, which generates 3 (8-hydroxy-δ-cadinene) with the help of (+)-δ-cadinene 8-hydroxylase. Compound 3 goes through various oxidative processes to make 4 (deoxyhemigossypol), which is oxidized by one electron into hemigossypol (5, 6, 7) and then undergoes a phenolic oxidative coupling, ortho to the phenol groups, to form gossypol (8). The coupling is catalyzed by a hydrogen peroxide-dependent peroxidase enzyme, which results in the final product. Research Contraception A 1929 investigation in Jiangxi showed correlation between low fertility in males and use of crude cottonseed oil for cooking. The compound causing the contraceptive effect was determined to be gossypol. In the 1970s, the Chinese government began researching the use of gossypol as a contraceptive. Their studies involved over 10,000 subjects, and continued for over a decade. They concluded that gossypol provided reliable contraception, could be taken orally as a tablet, and did not upset men's balance of hormones. However, gossypol also had serious flaws. The studies also discovered an abnormally high rate (0.75%) of hypokalemia (low blood potassium levels) among subjects. Hypokalemia causes symptoms of fatigue, muscle weakness, and at its most extreme, paralysis. In addition, about 7% of subjects reported effects on their digestive systems, about 12% had increased fatigue, some subjects experienced impotence or reduced libido, and 9.9% became irreversibly infertile, apparently associated with longer treatment and greater total dose of gossypol. Most subjects recovered after stopping treatment and taking potassium supplements.
The same study showed that taking potassium supplements during gossypol treatment did not prevent hypokalemia in primates. The potassium deficiency may also be a result of the Chinese diet or genetic predisposition. In the mid-1990s, the Brazilian pharmaceutical company Hebron announced plans to market a low-dose gossypol pill called Nofertil, but the pill never came to market. Its release was indefinitely postponed due to unacceptably high rates of permanent infertility. 5% to 25% of the men remained azoospermic up to a year after stopping treatment. Researchers have suggested gossypol might make a good noninvasive alternative to surgical vasectomy. In 1986, in conjunction with the Chinese Ministry of Public Health and the Rockefeller Foundation, the World Health Organization formalized a decision to discontinue research into gossypol as a male contraceptive drug. In addition to the other side effects, the WHO researchers were concerned about gossypol's toxicity: the median lethal dose (LD50) in primates is less than 10 times the contraceptive dose, creating a small therapeutic window. This report effectively ended further studies of gossypol as a temporary contraceptive, but research into using it as an alternative to vasectomy continues in Austria, Brazil, Chile, China, the Dominican Republic, and Nigeria. Toxicity Food and animal agricultural industries must manage cotton-derivative product levels to avoid toxicity. For example, only ruminant microflora can digest gossypol, and then only to a certain level, and cottonseed oil must be refined. Genetically engineered cotton plants that contain little gossypol in the seed may still contain the compound in the stems and leaves. References External links Contraception for males Plant toxins Suspected embryotoxins Dimers (chemistry) Aromatic aldehydes Naphthols Catechols Isopropyl compounds Cotton 3-Hydroxypropenals Polyols Aldehyde dehydrogenase inhibitors
Gossypol
[ "Chemistry", "Materials_science" ]
1,267
[ "Polymer chemistry", "Chemical ecology", "Dimers (chemistry)", "Plant toxins" ]
175,440
https://en.wikipedia.org/wiki/Medical%20cannabis
Medical cannabis, medicinal cannabis or medical marijuana (MMJ) refers to cannabis products and cannabinoid molecules that are prescribed by physicians for their patients. The use of cannabis as medicine has a long history, but has not been as rigorously tested as other medicinal plants due to legal and governmental restrictions, resulting in limited clinical research to define the safety and efficacy of using cannabis to treat diseases. Preliminary evidence has indicated that cannabis might reduce nausea and vomiting during chemotherapy and reduce chronic pain and muscle spasms. Regarding non-inhaled cannabis or cannabinoids, a 2021 review found that it provided little relief against chronic pain and sleep disturbance, and caused several transient adverse effects, such as cognitive impairment, nausea, and drowsiness. Short-term use increases the risk of minor and major adverse effects. Common side effects include dizziness, feeling tired, vomiting, and hallucinations. Long-term effects of cannabis are not clear. Concerns include memory and cognition problems, risk of addiction, schizophrenia in young people, and the risk of children taking it by accident. Many cultures have used cannabis for therapeutic purposes for thousands of years. Some American medical organizations have requested removal of cannabis from the list of Schedule I controlled substances, emphasizing that rescheduling would enable more extensive research and regulatory oversight to ensure safe access. Others oppose its legalization, such as the American Academy of Pediatrics. Medical cannabis can be administered through various methods, including capsules, lozenges, tinctures, dermal patches, oral or dermal sprays, cannabis edibles, and vaporizing or smoking dried buds. Synthetic cannabinoids are available for prescription use in some countries, such as synthetic delta-9-THC and nabilone. Countries that allow the medical use of whole-plant cannabis include Argentina, Australia, Canada, Chile, Colombia, Germany, Greece, Israel, Italy, the Netherlands, Peru, Poland, Portugal, Spain, and Uruguay. In the United States, 38 states and the District of Columbia have legalized cannabis for medical purposes, beginning with the passage of California's Proposition 215 in 1996. Although cannabis remains prohibited for any use at the federal level, the Rohrabacher–Farr amendment was enacted in December 2014, limiting the ability of federal law to be enforced in states where medical cannabis has been legalized. This amendment reflects an increasing bipartisan acknowledgment of the potential therapeutic uses of cannabis and the significance of state-level policymaking in this area. Classification In the U.S., the National Institute on Drug Abuse defines medical cannabis as "using the whole, unprocessed marijuana plant or its basic extracts to treat symptoms of illness and other conditions". A cannabis plant includes more than 400 different chemicals, of which about 70 are cannabinoids. In comparison, typical government-approved medications contain only one or two chemicals. The number of active chemicals in cannabis is one reason why treatment with cannabis is difficult to classify and study. A 2014 review stated that the variations in ratio of CBD-to-THC in botanical and pharmaceutical preparations determines the therapeutic vs psychoactive effects (CBD attenuates THC's psychoactive effects) of cannabis products. 
Medical uses Overall, research into the health effects of medical cannabis has been of low quality and it is not clear whether it is a useful treatment for any condition, or whether harms outweigh any benefit. There is no consistent evidence that it helps with chronic pain and muscle spasms. Low-quality evidence suggests its use for reducing nausea during chemotherapy, improving appetite in HIV/AIDS, improving sleep, and improving tics in Tourette syndrome. When usual treatments are ineffective, cannabinoids have also been recommended for anorexia, arthritis, glaucoma, and migraine. It is unclear whether American states might be able to mitigate the adverse effects of the opioid epidemic by prescribing medical cannabis as an alternative pain management drug. Cannabis should not be used in pregnancy. Insomnia Research analyzing data from the National Health and Nutrition Examination Survey (NHANES) did not find significant differences in sleep duration between cannabis users and non-users. This suggests that while some individuals may perceive benefits from cannabis use in terms of sleep, it may not significantly change overall sleep patterns across the general population. A review of literature up to 2018 indicates that cannabidiol (CBD) may have therapeutic potential for the treatment of insomnia. CBD, a non-psychoactive component of cannabis, is of particular interest due to its potential to influence sleep without the psychoactive effects associated with tetrahydrocannabinol (THC). Nausea and vomiting Medical cannabis is somewhat effective in chemotherapy-induced nausea and vomiting (CINV) and may be a reasonable option in those who do not improve following preferred treatments. Comparative studies have found cannabinoids to be more effective than some conventional antiemetics such as prochlorperazine, promethazine, and metoclopramide in controlling CINV, but these are used less frequently because of side effects including dizziness, dysphoria, and hallucinations. Long-term cannabis use may cause nausea and vomiting, a condition known as cannabinoid hyperemesis syndrome (CHS). A 2016 Cochrane review said that cannabinoids were "probably effective" in treating chemotherapy-induced nausea in children, but with a high side-effect profile (mainly drowsiness, dizziness, altered moods, and increased appetite). Less common side effects were "ocular problems, orthostatic hypotension, muscle twitching, pruritus, vagueness, hallucinations, lightheadedness and dry mouth". HIV/AIDS Evidence is lacking for both efficacy and safety of cannabis and cannabinoids in treating patients with HIV/AIDS or for anorexia associated with AIDS. As of 2013, current studies suffer from the effects of bias, small sample size, and lack of long-term data. Pain A 2021 review found little effect of using non-inhaled cannabis to relieve chronic pain. According to a 2019 systematic review, there have been inconsistent results of using cannabis for neuropathic pain, spasms associated with multiple sclerosis and pain from rheumatic disorders, but it was not effective in treating chronic cancer pain. The authors state that additional randomized controlled trials of different cannabis products are necessary to make conclusive recommendations. When cannabis is inhaled to relieve pain, blood levels of cannabinoids rise faster than when oral products are used, peaking within three minutes and attaining an analgesic effect in seven minutes.
A 2011 review considered cannabis to be generally safe, and it appears safer than opioids in palliative care. A 2022 review concluded the pain relief experienced after using medical cannabis is due to the placebo effect, especially given widespread media attention that sets the expectation for pain relief. Neurological conditions Cannabis' efficacy is not clear in treating neurological problems, including multiple sclerosis (MS) and movement problems. Evidence also suggests that oral cannabis extract is effective for reducing patient-centered measures of spasticity. A trial of cannabis is deemed to be a reasonable option if other treatments have not been effective. Its use for MS is approved in ten countries. A 2012 review found no problems with tolerance, abuse, or addiction. In the United States, cannabidiol, one of the cannabinoids found in the marijuana plant, has been approved for treating two severe forms of epilepsy, Lennox-Gastaut syndrome and Dravet syndrome. Mental health A 2019 systematic review found that there is a lack of evidence that cannabinoids are effective in treating depressive or anxiety disorders, attention-deficit hyperactivity disorder (ADHD), Tourette syndrome, post-traumatic stress disorder, or psychosis. Research indicates that cannabis, particularly CBD, may have anxiolytic (anxiety-reducing) effects. A study found that CBD significantly reduced anxiety during a simulated public speaking test for individuals with social anxiety disorder. However, the relationship between cannabis use and anxiety symptoms is complex, and while some users report relief, the overall evidence from observational studies and clinical trials remains inconclusive. Cannabis is often used by people to cope with anxiety, yet the efficacy and safety of cannabis for treating anxiety disorders is yet to be researched. Cannabis use, especially at high doses, is associated with a higher risk of psychosis, particularly in individuals with a genetic predisposition to psychotic disorders like schizophrenia. Some studies have shown that cannabis can trigger a temporary psychotic episode, which may increase the risk of developing a psychotic disorder later. The impact of cannabis on depression is less clear. Some studies suggest a potential increase in depression risk among adolescents who use cannabis, though findings are inconsistent across studies. Adverse effects Medical use There is insufficient data to draw strong conclusions about the safety of medical cannabis. Typically, adverse effects of medical cannabis use are not serious; they include tiredness, dizziness, increased appetite, and cardiovascular and psychoactive effects. Other effects can include impaired short-term memory; impaired motor coordination; altered judgment; and paranoia or psychosis at high doses. Tolerance to these effects develops over a period of days or weeks. The amount of cannabis normally used for medicinal purposes is not believed to cause any permanent cognitive impairment in adults, though long-term treatment in adolescents should be weighed carefully as they are more susceptible to these impairments. Withdrawal symptoms are rarely a problem with controlled medical administration of cannabinoids. The ability to drive vehicles or to operate machinery may be impaired until a tolerance is developed. Although supporters of medical cannabis say that it is safe, further research is required to assess the long-term safety of its use. 
Cognitive effects Recreational use of cannabis is associated with cognitive deficits, especially for those who begin to use cannabis in adolescence. There is a lack of research into long-term cognitive effects of medical use of cannabis, but one 12-month observational study reported that "MC patients demonstrated significant improvements on measures of executive function and clinical state over the course of 12 months". Impact on psychosis Exposure to THC can cause acute transient psychotic symptoms in healthy individuals and people with schizophrenia. A 2007 meta-analysis concluded that cannabis use reduced the average age of onset of psychosis by 2.7 years relative to non-cannabis use. A 2005 meta-analysis concluded that adolescent use of cannabis increases the risk of psychosis, and that the risk is dose-related. A 2004 literature review on the subject concluded that cannabis use is associated with a two-fold increase in the risk of psychosis, but that cannabis use is "neither necessary nor sufficient" to cause psychosis. A French review from 2009 concluded that cannabis use, particularly that before age 15, was a factor in the development of schizophrenic disorders. Pharmacology The genus Cannabis contains two species which produce useful amounts of psychoactive cannabinoids: Cannabis indica and Cannabis sativa, which are listed as Schedule I medicinal plants in the US; a third species, Cannabis ruderalis, has few psychogenic properties. Cannabis contains more than 460 compounds; at least 80 of these are cannabinoids – chemical compounds that interact with cannabinoid receptors in the brain. As of 2012, more than 20 cannabinoids were being studied by the U.S. FDA. The most psychoactive cannabinoid found in the cannabis plant is tetrahydrocannabinol (or delta-9-tetrahydrocannabinol, commonly known as THC). Other cannabinoids include delta-8-tetrahydrocannabinol, cannabidiol (CBD), cannabinol (CBN), cannabicyclol (CBL), cannabichromene (CBC) and cannabigerol (CBG); they have weaker psychotropic effects than THC, but may play a role in the overall effect of cannabis. The most studied are THC, CBD and CBN. CB1 and CB2 are the primary cannabinoid receptors responsible for several of the effects of cannabinoids, although other receptors may play a role as well. Both belong to a group of receptors called G protein-coupled receptors (GPCRs). CB1 receptors are found in very high levels in the brain and are thought to be responsible for psychoactive effects. CB2 receptors are found peripherally throughout the body and are thought to modulate pain and inflammation. Absorption Cannabinoid absorption is dependent on its route of administration. Inhaled and vaporized THC have similar absorption profiles to smoked THC, with a bioavailability ranging from 10 to 35%. Oral administration has the lowest bioavailability of approximately 6%, variable absorption depending on the vehicle used, and the longest time to peak plasma levels (2 to 6 hours) compared to smoked or vaporized THC. Similar to THC, CBD has poor oral bioavailability, approximately 6%. The low bioavailability is largely attributed to significant first-pass metabolism in the liver and erratic absorption from the gastrointestinal tract. However, oral administration of CBD has a faster time to peak concentrations (2 hours) than THC. Due to the poor bioavailability of oral preparations, alternative routes of administration have been studied, including sublingual and rectal.
These alternative formulations maximize bioavailability and reduce first-pass metabolism. Sublingual administration in rabbits yielded bioavailability of 16% and time to peak concentration of 4 hours. Rectal administration in monkeys doubled bioavailability to 13.5% and achieved peak blood concentrations within 1 to 8 hours after administration. Distribution Like cannabinoid absorption, distribution is also dependent on route of administration. Smoking and inhalation of vaporized cannabis have better absorption than do other routes of administration, and therefore also have more predictable distribution. THC is highly protein bound once absorbed, with only 3% found unbound in the plasma. It distributes rapidly to highly vascularized organs such as the heart, lungs, liver, spleen, and kidneys, as well as to various glands. Low levels can be detected in the brain, testes, and unborn fetuses, all of which are protected from systemic circulation via barriers. THC further distributes into fatty tissues a few days after administration due to its high lipophilicity, and is found deposited in the spleen and fat after redistribution. Metabolism Delta-9-THC is the primary molecule responsible for the effects of cannabis. Delta-9-THC is metabolized in the liver and turns into 11-OH-THC. 11-OH-THC is the first metabolic product in this pathway. Both Delta-9-THC and 11-OH-THC are psychoactive. The metabolism of THC into 11-OH-THC plays a part in the heightened psychoactive effects of edible cannabis. Next, 11-OH-THC is metabolized in the liver into 11-COOH-THC, which is the second metabolic product of THC. 11-COOH-THC is not psychoactive. Ingestion of edible cannabis products leads to a slower onset of effect than inhalation, because the THC travels through the blood to the liver first before it travels to the rest of the body. Inhaled cannabis can result in THC going directly to the brain, where it then travels from the brain back to the liver in recirculation for metabolism. Eventually, both routes of metabolism result in the metabolism of psychoactive THC to inactive 11-COOH-THC. Excretion Due to substantial metabolism of THC and CBD, their metabolites are excreted mostly via feces, rather than by urine. After delta-9-THC is hydroxylated into 11-OH-THC via CYP2C9, CYP2C19, and CYP3A4, it undergoes phase II metabolism into more than 30 metabolites, a majority of which are products of glucuronidation. Approximately 65% of THC is excreted in feces and 25% in the urine, while the remaining 10% is excreted by other means. The terminal half-life of THC is 25 to 36 hours, whereas for CBD it is 18 to 32 hours. CBD is hydroxylated by P450 liver enzymes into 7-OH-CBD. Its metabolites are products of primarily CYP2C19 and CYP3A4 activity, with potential activity of CYP1A1, CYP1A2, CYP2C9, and CYP2D6. Similar to delta-9-THC, a majority of CBD is excreted in feces and some in the urine. Administration Smoking has been the means of administration of cannabis for many users, but it is not suitable for the use of cannabis as a medicine. It was the most common method of medical cannabis consumption in the US. It is difficult to predict the pharmacological response to cannabis because concentration of cannabinoids varies widely, as there are different ways of preparing it for consumption (smoked, applied as oils, eaten, infused into other foods, or drunk) and a lack of production controls.
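As a worked illustration of the terminal half-life figures given above, this short Python sketch computes the fraction of a dose remaining under simple first-order elimination; the one-compartment model and the chosen time point are simplifying assumptions for illustration only:

# Fraction remaining under first-order elimination: C(t) = C0 * 0.5 ** (t / t_half).

def fraction_remaining(hours, t_half):
    return 0.5 ** (hours / t_half)

# Terminal half-lives quoted above: THC about 25-36 h, CBD about 18-32 h.
for name, t_half in [('THC, 25 h', 25.0), ('THC, 36 h', 36.0), ('CBD, 18 h', 18.0)]:
    print(f"{name}: {fraction_remaining(72, t_half):.0%} remaining after 72 h")

With a 36-hour half-life, roughly a quarter of the original amount is still present three days after administration, consistent with the slow release from fatty tissue described earlier.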
The potential for adverse effects from smoke inhalation makes smoking a less viable option than oral preparations. Cannabis vaporizers have gained popularity because of a perception among users that fewer harmful chemicals are ingested when components are inhaled via aerosol rather than smoke. Cannabinoid medicines are available in pill form (dronabinol and nabilone) and liquid extracts formulated into an oromucosal spray (nabiximols). Oral preparations are "problematic due to the uptake of cannabinoids into fatty tissue, from which they are released slowly, and the significant first-pass liver metabolism, which breaks down Δ9THC and contributes further to the variability of plasma concentrations". The US Food and Drug Administration (FDA) has not approved smoked cannabis for any condition or disease, as it deems that evidence is lacking concerning safety and efficacy. The FDA issued a 2006 advisory against smoked medical cannabis stating: "marijuana has a high potential for abuse, has no currently accepted medical use in treatment in the United States, and has a lack of accepted safety for use under medical supervision." History Ancient Cannabis, called má 麻 (meaning "hemp; cannabis; numbness") or dàmá 大麻 (with "big; great") in Chinese, was used in Taiwan for fiber starting about 10,000 years ago. The botanist Hui-lin Li wrote that in China, "The use of Cannabis in medicine was probably a very early development. Since ancient humans used hemp seed as food, it was quite natural for them to also discover the medicinal properties of the plant." Emperor Shen-Nung, who was also a pharmacologist, wrote a book on treatment methods in 2737 BCE that included the medical benefits of cannabis. He recommended the substance for many ailments, including constipation, gout, rheumatism, and absent-mindedness. Cannabis is one of the 50 "fundamental" herbs in traditional Chinese medicine. The Ebers Papyrus (c. 1550 BCE) from Ancient Egypt describes medical cannabis. The ancient Egyptians used hemp (cannabis) in suppositories for relieving the pain of hemorrhoids. Surviving texts from ancient India confirm that cannabis' psychoactive properties were recognized, and doctors used it for treating a variety of illnesses and ailments, including insomnia, headaches, gastrointestinal disorders, and pain, including during childbirth. The Ancient Greeks used cannabis to dress wounds and sores on their horses, and in humans, dried leaves of cannabis were used to treat nose bleeds, and cannabis seeds were used to expel tapeworms. In the medieval Islamic world, Arabic physicians made use of the diuretic, antiemetic, antiepileptic, anti-inflammatory, analgesic and antipyretic properties of Cannabis sativa, and used it extensively as medication from the 8th to 18th centuries. Landrace strains Cannabis seeds may have been used for food, rituals or religious practices in ancient Europe and China. Harvesting the plant led to the spread of cannabis throughout Eurasia about 10,000 to 5,000 years ago, with further distribution to the Middle East and Africa about 2,000 to 500 years ago. Landrace strains of cannabis developed over centuries; they are cultivars of the plant that originated in one specific region. Widely cultivated strains of cannabis, such as "Afghani" or "Hindu Kush", are indigenous to the Pakistan and Afghanistan regions, while "Durban Poison" is native to Africa. There are approximately 16 landrace strains of cannabis identified from Pakistan, Jamaica, Africa, Mexico, Central America and Asia.
Modern An Irish physician, William Brooke O'Shaughnessy, is credited with introducing cannabis to Western medicine. O'Shaughnessy discovered cannabis in the 1830s while living abroad in India, where he conducted numerous experiments investigating the drug's medical utility (noting in particular its analgesic and anticonvulsant effects). He returned to England with a supply of cannabis in 1842, after which its use spread through Europe and the United States. In 1845 French physician Jacques-Joseph Moreau published a book about the use of cannabis in psychiatry. In 1850 cannabis was entered into the United States Pharmacopeia. An anecdotal report of Cannabis indica as a treatment for tetanus appeared in Scientific American in 1880. The use of cannabis in medicine began to decline by the end of the 19th century, due to difficulty in controlling dosages and the rise in popularity of synthetic and opium-derived drugs. Also, the advent of the hypodermic syringe allowed these drugs to be injected for immediate effect, in contrast to cannabis which is not water-soluble and therefore cannot be injected. In the United States, the medical use of cannabis further declined with the passage of the Marihuana Tax Act of 1937, which imposed new regulations and fees on physicians prescribing cannabis. Cannabis was removed from the U.S. Pharmacopeia in 1941, and officially banned for any use with the passage of the Controlled Substances Act of 1970. Cannabis began to attract renewed interest as medicine in the 1970s and 1980s, in particular due to its use by cancer and AIDS patients who reported relief from the effects of chemotherapy and wasting syndrome. In 1996, California became the first U.S. state to legalize medical cannabis in defiance of federal law. In 2001, Canada became the first country to adopt a system regulating the medical use of cannabis. Society and culture Legal status Countries that have legalized the medical use of cannabis include Argentina, Australia, Brazil, Canada, Chile, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Finland, Germany, Greece, Israel, Italy, Jamaica, Lebanon, Luxembourg, Malta, Morocco, the Netherlands, New Zealand, North Macedonia, Panama, Peru, Poland, Portugal, Rwanda, Sri Lanka, Switzerland, Thailand, the United Kingdom, and Uruguay. Other countries have more restrictive laws that allow only the use of isolated cannabinoid drugs such as Sativex or Epidiolex. Countries with the most relaxed policies include Canada, the Netherlands, Thailand, and Uruguay, where cannabis can be purchased without need for a prescription. In Mexico, THC content of medical cannabis is limited to one percent. In the United States, the legality of medical cannabis varies by state. However, in many of these countries, access may not always be possible under the same conditions. International law Cannabis and its derivatives are subject to regulation under three United Nations drug control treaties: the 1961 Single Convention on Narcotic Drugs, the 1971 Convention on Psychotropic Substances, and the 1988 Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Cannabis and cannabis resin are classified as a Schedule I drug under the Single Convention treaty, meaning that medical use is considered "indispensable for the relief of pain and suffering" but that it is considered to be an addictive medication with risks of abuse.
Countries have an obligation to provide access and sufficient availability of drugs listed in Schedule I for the purposes of medical uses. Prior to December 2020 cannabis and cannabis resin were also included in Schedule IV, a more restrictive level of control, which is for only the most dangerous drugs such as heroin and fentanyl. They were removed after an independent scientific assessment by the World Health Organization in 2018–2019. Member nations of the UN Commission on Narcotic Drugs voted 27–25 to remove it from Schedule IV on 2 December 2020, following a World Health Organization recommendation for removal in January 2019. United States In the United States, the use of cannabis for medical purposes is legal in 38 states, four out of five permanently inhabited U.S. territories, and the District of Columbia. An additional 10 states have more restrictive laws allowing the use of low-THC products. Cannabis remains illegal at the federal level under the Controlled Substances Act, which classifies it as a Schedule I drug with a high potential for abuse and no accepted medical use. In December 2014, however, the Rohrabacher–Farr amendment was signed into law, prohibiting the Justice Department from prosecuting individuals acting in accordance with state medical cannabis laws. In the US, the FDA approved two oral cannabinoids for use as medicine in 1985: dronabinol (pure delta-9-THC; brand name Marinol) and nabilone (a synthetic neocannabinoid; brand name Cesamet). In the US, they are both listed as Schedule II, indicating high potential for side effects and addiction. Economics Distribution The method of obtaining medical cannabis varies by region and by legislation. In the US, most consumers grow their own or buy it from cannabis dispensaries in states where it is legal. Marijuana vending machines for selling or dispensing cannabis are in use in the United States and are planned to be used in Canada. In 2014, the startup Meadow began offering on-demand delivery of medical marijuana in the San Francisco Bay Area, through their mobile app. Almost 70% of medical cannabis is exported from the United Kingdom, according to a 2017 United Nations report, with much of the remaining amount coming from Canada and the Netherlands. Insurance In the United States, health insurance companies may not pay for a medical marijuana prescription as the Food and Drug Administration must approve any substance for medicinal purposes. Before this can happen, the FDA must first permit the study of the medical benefits and drawbacks of the substance, which it has not done since it was placed on Schedule I of the Controlled Substances Act in 1970. Therefore, all expenses incurred fulfilling a medical marijuana prescription will likely be paid out of pocket. However, the New Mexico Court of Appeals has ruled that workers' compensation insurance must pay for prescribed marijuana as part of the state's Medical Cannabis Program. Positions of medical organizations Medical organizations that have issued statements in support of allowing access to medical cannabis include the American Nurses Association, American Public Health Association, American Medical Student Association, National Multiple Sclerosis Society, Epilepsy Foundation, and Leukemia & Lymphoma Society. Organizations that oppose the legalization of medical cannabis include the American Academy of Pediatrics (AAP) and American Psychiatric Association. However, the AAP also supports rescheduling for the purpose of facilitating research.
The American Medical Association and American College of Physicians do not take a position on the legalization of medical cannabis, but have called for the Schedule I classification to be reviewed. The American Academy of Family Physicians and American Society of Addiction Medicine also do not take a position, but do support rescheduling to better facilitate research. The American Heart Association says that "many of the concerning health implications of cannabis include cardiovascular diseases" but that it supports rescheduling to allow "more nuanced ... marijuana legislation and regulation" and to "reflect the existing science behind cannabis". The American Cancer Society and American Psychological Association have noted the obstacles that exist for conducting research on cannabis, and have called on the federal government to better enable scientific study of the drug. Cancer Research UK say that while cannabis is being studied for therapeutic potential, "claims that there is solid "proof" that cannabis or cannabinoids can cure cancer is highly misleading to patients and their families, and builds a false picture of the state of progress in this area". Nonproprietary names Three International Nonproprietary Names (INNs) have been granted for cannabinoids: two plant-derived phytocannabinoids and one neocannabinoid. Dronabinol is the INN for delta-9-THC (there is a common confusion according to which the word "dronabinol" would only refer to synthetic delta-9-THC, which is incorrect). Cannabidiol is also the official INN for the molecule, granted in 2017. Nabilone is the INN for a synthetic cannabinoid analog (not present in Cannabis plants). Nabiximols is the generic name (but not recognized as an INN) of a mixture of cannabidiol and dronabinol. Its most common form is the oromucosal spray derived from two strains of Cannabis sativa and containing THC and CBD, traded under the brand name Sativex®. It is not approved in the United States, but is approved in several European countries, Canada, and New Zealand as of 2013. As an antiemetic, these medications are usually used when conventional treatment for nausea and vomiting associated with cancer chemotherapy fails to work. Nabiximols is used for treatment of spasticity associated with MS when other therapies have not worked, and when an initial trial demonstrates "meaningful improvement". Trials for FDA approval in the US are underway. It is also approved in several European countries for overactive bladder and vomiting. When sold under the trade name Sativex as a mouth spray, the prescribed daily dose in Sweden delivers a maximum of 32.4 mg of THC and 30 mg of CBD; mild to moderate dizziness is common during the first few weeks. Relative to inhaled consumption, peak concentration of oral THC is delayed, and it may be difficult to determine optimal dosage because of variability in patient absorption. In 1964, Albert Lockhart and Manley West began studying the health effects of traditional cannabis use in Jamaican communities. They developed, and in 1987 gained permission to market, the pharmaceutical "Canasol", one of the first cannabis extracts. Research A 2022 review concluded that "oral, synthetic cannabis products with high THC-to-CBD ratios and sublingual, extracted cannabis products with comparable THC-to-CBD ratios may be associated with short-term improvements in chronic pain and increased risk for dizziness and sedation."
See also Charlotte's Web (cannabis) Chinese herbology Tilden's Extract References Further reading External links Links to websites about medical cannabis Information on Cannabis and Cannabinoids from the U.S. National Cancer Institute Information on cannabis (marihuana, marijuana) and the cannabinoids from Health Canada The Center for Medicinal Cannabis Research of the University of California Medical Marijuana – a 2014–2015 three-part CNN documentary produced by Sanjay Gupta Antiemetics Antioxidants Biologically based therapies Herbalism Medical ethics Medicinal plants Pharmaceuticals policy Pharmacognosy
Medical cannabis
[ "Chemistry" ]
6,538
[ "Pharmacology", "Pharmacognosy" ]
175,470
https://en.wikipedia.org/wiki/Magnetic%20monopole
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). A magnetic monopole would have a net north or south "magnetic charge". Modern interest in the concept stems from particle theories, notably the grand unified and superstring theories, which predict their existence. The known elementary particles that have electric charge are electric monopoles. Magnetism in bar magnets and electromagnets is not caused by magnetic monopoles, and indeed, there is no known experimental or observational evidence that magnetic monopoles exist. Some condensed matter systems contain effective (non-isolated) magnetic monopole quasi-particles, or contain phenomena that are mathematically analogous to magnetic monopoles. Historical background Early science and classical physics Many early scientists attributed the magnetism of lodestones to two different "magnetic fluids" ("effluvia"), a north-pole fluid at one end and a south-pole fluid at the other, which attracted and repelled each other in analogy to positive and negative electric charge. However, an improved understanding of electromagnetism in the nineteenth century showed that the magnetism of lodestones was properly explained not by magnetic monopole fluids, but rather by a combination of electric currents, the electron magnetic moment, and the magnetic moments of other particles. Gauss's law for magnetism, one of Maxwell's equations, is the mathematical statement that magnetic monopoles do not exist. Nevertheless, Pierre Curie pointed out in 1894 that magnetic monopoles could conceivably exist, despite not having been seen so far. Quantum mechanics The quantum theory of magnetic charge started with a paper by the physicist Paul Dirac in 1931. In this paper, Dirac showed that if any magnetic monopoles exist in the universe, then all electric charge in the universe must be quantized (Dirac quantization condition). The electric charge is, in fact, quantized, which is consistent with (but does not prove) the existence of monopoles. Since Dirac's paper, several systematic monopole searches have been performed. Experiments in 1975 and 1982 produced candidate events that were initially interpreted as monopoles, but are now regarded as inconclusive. Therefore, whether monopoles exist remains an open question. Further advances in theoretical particle physics, particularly developments in grand unified theories and quantum gravity, have led to more compelling arguments (detailed below) that monopoles do exist. Joseph Polchinski, a string theorist, described the existence of monopoles as "one of the safest bets that one can make about physics not yet seen". These theories are not necessarily inconsistent with the experimental evidence. In some theoretical models, magnetic monopoles are unlikely to be observed, because they are too massive to create in particle accelerators (see below), and also too rare in the Universe to enter a particle detector with much probability. Some condensed matter systems exhibit a structure superficially similar to a magnetic monopole, known as a flux tube. The ends of a flux tube form a magnetic dipole, but since they move independently, they can be treated for many purposes as independent magnetic monopole quasiparticles.
Since 2009, numerous news reports from the popular media have incorrectly described these systems as the long-awaited discovery of the magnetic monopoles, but the two phenomena are only superficially related to one another. These condensed-matter systems remain an area of active research. (See below.) Poles and magnetism in ordinary matter All matter isolated to date, including every atom on the periodic table and every particle in the Standard Model, has zero magnetic monopole charge. Therefore, the ordinary phenomena of magnetism and magnets do not derive from magnetic monopoles. Instead, magnetism in ordinary matter is due to two sources. First, electric currents create magnetic fields according to Ampère's law. Second, many elementary particles have an intrinsic magnetic moment, the most important of which is the electron magnetic dipole moment, which is related to its quantum-mechanical spin. Mathematically, the magnetic field of an object is often described in terms of a multipole expansion. This is an expression of the field as the sum of component fields with specific mathematical forms. The first term in the expansion is called the monopole term, the second is called dipole, then quadrupole, then octupole, and so on. Any of these terms can be present in the multipole expansion of an electric field, for example. However, in the multipole expansion of a magnetic field, the "monopole" term is always exactly zero (for ordinary matter). A magnetic monopole, if it exists, would have the defining property of producing a magnetic field whose monopole term is non-zero. A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. The term dipole means two poles, corresponding to the fact that a dipole magnet typically contains a north pole on one side and a south pole on the other side. This is analogous to an electric dipole, which has positive charge on one side and negative charge on the other. However, an electric dipole and magnetic dipole are fundamentally quite different. In an electric dipole made of ordinary matter, the positive charge is made of protons and the negative charge is made of electrons, but a magnetic dipole does not have different types of matter creating the north pole and south pole. Instead, the two magnetic poles arise simultaneously from the aggregate effect of all the currents and intrinsic moments throughout the magnet. Because of this, the two poles of a magnetic dipole must always have equal and opposite strength, and the two poles cannot be separated from each other. Maxwell's equations Maxwell's equations of electromagnetism relate the electric and magnetic fields to each other and to the distribution of electric charge and current. The standard equations provide for electric charge, but they posit zero magnetic charge and current. Except for this constraint, the equations are symmetric under the interchange of the electric and magnetic fields. Maxwell's equations are symmetric when the charge and electric current density are zero everywhere, as in vacuum. Maxwell's equations can also be written in a fully symmetric form if one allows for "magnetic charge" analogous to electric charge. With the inclusion of a variable for the density of magnetic charge, say ρm, there is also a "magnetic current density" variable in the equations, jm.
If magnetic charge does not exist – or if it exists but is absent in a region of space – then the new terms in Maxwell's equations are all zero, and the extended equations reduce to the conventional equations of electromagnetism, such as ∇ ⋅ B = 0 (where ∇ ⋅ is the divergence operator and B is the magnetic flux density). In Gaussian cgs units The extended Maxwell's equations are as follows, in CGS-Gaussian units: ∇ ⋅ E = 4πρe, ∇ ⋅ B = 4πρm, ∇ × E = −(1/c) ∂B/∂t − (4π/c) jm, ∇ × B = (1/c) ∂E/∂t + (4π/c) je, with the Lorentz force on a test particle F = qe(E + (v/c) × B) + qm(B − (v/c) × E). In these equations ρm is the magnetic charge density, jm is the magnetic current density, and qm is the magnetic charge of a test particle, all defined analogously to the related quantities of electric charge and current; v is the particle's velocity and c is the speed of light. For all other definitions and details, see Maxwell's equations. For the equations in nondimensionalized form, remove the factors of c. In SI units In the International System of Quantities used with the SI, there are two conventions for defining magnetic charge qm, each with different units: weber (Wb) and ampere-meter (A⋅m). The conversion between them is qm(Wb) = μ0 qm(A⋅m), since the units are 1 Wb = 1 H⋅A = (1 H/m) ⋅ (1 A⋅m), where H is the henry – the SI unit of inductance. Maxwell's equations then take the following forms (using the same notation above, in the weber convention): ∇ ⋅ E = ρe/ε0, ∇ ⋅ B = ρm, ∇ × E = −∂B/∂t − jm, ∇ × B = μ0ε0 ∂E/∂t + μ0 je. Potential formulation Maxwell's equations can also be expressed in terms of potentials; when magnetic charge is present, a single vector potential no longer suffices, and the fields are written using two scalar and two vector potentials, one pair sourced by the electric charges and one pair by the magnetic charges. Tensor formulation Maxwell's equations in the language of tensors make Lorentz covariance clear. With the antisymmetric electromagnetic tensor Fμν, its Hodge dual F̃μν = (1/2) εμνρσ Fρσ (taking the Minkowski metric signature (+, −, −, −)), and the electric and magnetic four-currents Jeμ = (c ρe, je) and Jmμ = (c ρm, jm), the generalized equations are, in Gaussian units, ∂μ Fμν = (4π/c) Jeν and ∂μ F̃μν = (4π/c) Jmν, where εμνρσ is the Levi-Civita symbol. Duality transformation The generalized Maxwell's equations possess a certain symmetry, called a duality transformation. One can choose any real angle ξ, and simultaneously change the fields and charges everywhere in the universe as follows (in Gaussian units): E = E′ cos ξ + B′ sin ξ and B = −E′ sin ξ + B′ cos ξ, together with qe = qe′ cos ξ + qm′ sin ξ and qm = −qe′ sin ξ + qm′ cos ξ (and likewise for the charge and current densities), where the primed quantities are the charges and fields before the transformation, and the unprimed quantities are after the transformation. The fields and charges after this transformation still obey the same Maxwell's equations. Because of the duality transformation, one cannot uniquely decide whether a particle has an electric charge, a magnetic charge, or both, just by observing its behavior and comparing that to Maxwell's equations. For example, it is merely a convention, not a requirement of Maxwell's equations, that electrons have electric charge but not magnetic charge; after a transformation, it would be the other way around. The key empirical fact is that all particles ever observed have the same ratio of magnetic charge to electric charge. Duality transformations can change the ratio to any arbitrary numerical value, but cannot change that all particles have the same ratio. Since this is the case, a duality transformation can be made that sets this ratio at zero, so that all particles have no magnetic charge. This choice underlies the "conventional" definitions of electricity and magnetism. Dirac's quantization One of the defining advances in quantum theory was Paul Dirac's work on developing a relativistic quantum electromagnetism. Before his formulation, the presence of electric charge was simply inserted into the equations of quantum mechanics (QM), but in 1931 Dirac showed that the existence of magnetic monopoles would imply discrete electric charge within QM. That is to say, we can maintain the form of Maxwell's equations and still have magnetic charges.
Consider a system consisting of a single stationary electric monopole (an electron, say) and a single stationary magnetic monopole, which would not exert any forces on each other. Classically, the electromagnetic field surrounding them has a momentum density given by the Poynting vector, and it also has a total angular momentum, which is proportional to the product $q_{\mathrm e}q_{\mathrm m}$, and is independent of the distance between them. Quantum mechanics dictates, however, that angular momentum is quantized as a multiple of $\hbar$, so therefore the product $q_{\mathrm e}q_{\mathrm m}$ must also be quantized. This means that if even a single magnetic monopole existed in the universe, and the form of Maxwell's equations is valid, all electric charges would then be quantized. Although it would be possible simply to integrate over all space to find the total angular momentum in the above example, Dirac took a different approach. This led him to new ideas. He considered a point-like magnetic charge whose magnetic field behaves as $\mathbf{B} = q_{\mathrm m}\hat{\mathbf{r}}/r^2$ and is directed in the radial direction, located at the origin. Because the divergence of $\mathbf{B}$ is equal to zero everywhere except for the locus of the magnetic monopole at $r = 0$, one can locally define the vector potential $\mathbf{A}$ such that the curl of the vector potential equals the magnetic field $\mathbf{B}$. However, the vector potential cannot be defined globally precisely because the divergence of the magnetic field is proportional to the Dirac delta function at the origin. We must define one set of functions for the vector potential on the "northern hemisphere" (the half-space above the particle), and another set of functions for the "southern hemisphere". These two vector potentials are matched at the "equator" (the plane through the particle), and they differ by a gauge transformation. The wave function of an electrically charged particle (a "probe charge") that orbits the "equator" generally changes by a phase, much like in the Aharonov–Bohm effect. This phase is proportional to the electric charge of the probe, as well as to the magnetic charge of the source. Dirac was originally considering an electron whose wave function is described by the Dirac equation. Because the electron returns to the same point after the full trip around the equator, the phase of its wave function must be unchanged, which implies that the phase added to the wave function must be a multiple of $2\pi$. This is known as the Dirac quantization condition. In various units, this condition can be expressed as: {| class="wikitable" |- ! Units ! Condition |- | SI units (weber convention) | $q_{\mathrm e}\,q_{\mathrm m} \in 2\pi\hbar\,\mathbb{Z}$ |- | SI units (ampere-meter convention) | $q_{\mathrm e}\,q_{\mathrm m} \in 2\pi\varepsilon_0\hbar c^2\,\mathbb{Z}$ |- | Gaussian-cgs units | $q_{\mathrm e}\,q_{\mathrm m} \in \tfrac{1}{2}\hbar c\,\mathbb{Z}$ |- |} where $\varepsilon_0$ is the vacuum permittivity, $\hbar$ is the reduced Planck constant, $c$ is the speed of light, and $\mathbb{Z}$ is the set of integers. The hypothetical existence of a magnetic monopole would imply that the electric charge must be quantized in certain units; also, the existence of the electric charges implies that the magnetic charges of the hypothetical magnetic monopoles, if they exist, must be quantized in units inversely proportional to the elementary electric charge. At the time it was not clear if such a thing existed, or even had to exist. After all, another theory could come along that would explain charge quantization without need for the monopole. The concept remained something of a curiosity. However, in the time since the publication of this seminal work, no other widely accepted explanation of charge quantization has appeared. 
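Plugging physical constants into the quantization condition above gives the smallest magnetic charge compatible with the elementary charge (one "Dirac charge") in each SI convention. This is a back-of-the-envelope sketch using scipy.constants; the variable names are mine:

```python
from scipy.constants import hbar, c, e, epsilon_0, mu_0, pi

# Smallest magnetic charge allowed by the Dirac condition, taking n = 1 and
# the probe electric charge to be the elementary charge e.

# Weber convention:  q_e * q_m = 2*pi*hbar * n  ->  q_m = 2*pi*hbar / e
g_weber = 2 * pi * hbar / e
print(f"Dirac charge (weber convention):        {g_weber:.3e} Wb")   # ~4.136e-15 Wb

# Ampere-meter convention:  q_e * q_m = 2*pi*eps0*hbar*c^2 * n
g_am = 2 * pi * epsilon_0 * hbar * c**2 / e
print(f"Dirac charge (ampere-meter convention): {g_am:.3e} A*m")     # ~3.291e-9 A*m

# Consistency check: the two conventions should differ by a factor of mu_0,
# matching the unit conversion q_m(Wb) = mu_0 * q_m(A*m) stated earlier.
assert abs(g_weber - mu_0 * g_am) / g_weber < 1e-12
```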
(The concept of local gauge invariance—see Gauge theory—provides a natural explanation of charge quantization, without invoking the need for magnetic monopoles; but only if the U(1) gauge group is compact, in which case we have magnetic monopoles anyway.) If we maximally extend the definition of the vector potential for the southern hemisphere, it is defined everywhere except for a semi-infinite line stretched from the origin in the direction towards the northern pole. This semi-infinite line is called the Dirac string and its effect on the wave function is analogous to the effect of the solenoid in the Aharonov–Bohm effect. The quantization condition comes from the requirement that the phases around the Dirac string are trivial, which means that the Dirac string must be unphysical. The Dirac string is merely an artifact of the coordinate chart used and should not be taken seriously. The Dirac monopole is a singular solution of Maxwell's equation (because it requires removing the monopole's worldline from spacetime); in more sophisticated theories, it is superseded by a smooth solution such as the 't Hooft–Polyakov monopole. Topological interpretation Dirac string A gauge theory like electromagnetism is defined by a gauge field, which associates a group element to each path in space time. For infinitesimal paths, the group element is close to the identity, while for longer paths the group element is the successive product of the infinitesimal group elements along the way. In electrodynamics, the group is U(1), unit complex numbers under multiplication. For infinitesimal paths, the group element is $1 + iqA_\mu\,dx^\mu$, which implies that for finite paths parametrized by $s$, the group element is: $U = \exp\left(iq\int A_\mu \frac{dx^\mu}{ds}\,ds\right)$. The map from paths to group elements is called the Wilson loop or the holonomy, and for a U(1) gauge group it is the phase factor which the wavefunction of a charged particle acquires as it traverses the path. For a loop: $U = e^{iq\oint A_\mu\,dx^\mu} = e^{iq\Phi}$, so that the phase a charged particle gets when going in a loop is its charge times the magnetic flux $\Phi$ through the loop. When a small solenoid has a magnetic flux, there are interference fringes for charged particles which go around the solenoid, or around different sides of the solenoid, which reveal its presence. But if all particle charges are integer multiples of $e$, solenoids with a flux of $2\pi/e$ (in units with $\hbar = 1$) have no interference fringes, because the phase factor for any charged particle is $e^{2\pi i n} = 1$. Such a solenoid, if thin enough, is quantum-mechanically invisible. If such a solenoid were to carry a flux of $2\pi/e$, when the flux leaked out from one of its ends it would be indistinguishable from a monopole. Dirac's monopole solution in fact describes an infinitesimal line solenoid ending at a point, and the location of the solenoid is the singular part of the solution, the Dirac string. Dirac strings link monopoles and antimonopoles of opposite magnetic charge, although in Dirac's version, the string just goes off to infinity. The string is unobservable, so you can put it anywhere, and by using two coordinate patches, the field in each patch can be made nonsingular by sliding the string to where it cannot be seen. Grand unified theories In a U(1) gauge group with quantized charge, the group is a circle of radius $1/e$. Such a U(1) gauge group is called compact. Any U(1) that comes from a grand unified theory (GUT) is compact – because only compact higher gauge groups make sense. The size of the gauge group is a measure of the inverse coupling constant, so that in the limit of a large-volume gauge group, the interaction of any fixed representation goes to zero. 
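The "invisible solenoid" point can be made concrete in a few lines. In units with ħ = 1, a particle of charge q going around a flux Φ picks up the factor exp(iqΦ); for charges that are integer multiples of e and a flux of 2π/e, the factor is always exactly 1. A minimal sketch; the numerical value used for e is illustrative and nothing depends on it:

```python
import cmath
import math

e_charge = 0.30282              # illustrative value of e in natural units
flux = 2 * math.pi / e_charge   # the solenoid flux that Dirac's condition makes invisible

for n in range(1, 5):           # probe particles with charge q = n*e
    q = n * e_charge
    phase_factor = cmath.exp(1j * q * flux)   # Aharonov-Bohm factor exp(i*q*Phi)
    print(n, phase_factor)      # always ~(1+0j): no interference fringes
```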
The U(1) gauge group is a special case because all its irreducible representations are of the same size – the charge is bigger by an integer amount, but the field is still just a complex number – so that in U(1) gauge field theory it is possible to take the decompactified limit with no contradiction. The quantum of charge becomes small, but each charged particle has a huge number of charge quanta so its charge stays finite. In a non-compact U(1) gauge group theory, the charges of particles are generically not integer multiples of a single unit. Since charge quantization is an experimental certainty, it is clear that the U(1) gauge group of electromagnetism is compact. GUTs lead to compact U(1) gauge groups, so they explain charge quantization in a way that seems logically independent from magnetic monopoles. However, the explanation is essentially the same, because in any GUT that breaks down into a U(1) gauge group at long distances, there are magnetic monopoles. The argument is topological: The holonomy of a gauge field maps loops to elements of the gauge group. Infinitesimal loops are mapped to group elements infinitesimally close to the identity. If you imagine a big sphere in space, you can deform an infinitesimal loop that starts and ends at the north pole as follows: stretch out the loop over the western hemisphere until it becomes a great circle (which still starts and ends at the north pole) then let it shrink back to a little loop while going over the eastern hemisphere. This is called lassoing the sphere. Lassoing is a sequence of loops, so the holonomy maps it to a sequence of group elements, a continuous path in the gauge group. Since the loop at the beginning of the lassoing is the same as the loop at the end, the path in the group is closed. If the group path associated to the lassoing procedure winds around the U(1), the sphere contains magnetic charge. During the lassoing, the holonomy changes by the amount of magnetic flux through the sphere. Since the holonomy at the beginning and at the end is the identity, the total magnetic flux is quantized. The magnetic charge is proportional to the number of windings $N$; the magnetic flux through the sphere is equal to $2\pi N/e$. This is the Dirac quantization condition, and it is a topological condition that demands that the long distance U(1) gauge field configurations be consistent. When the U(1) gauge group comes from breaking a compact Lie group, the path that winds around the U(1) group enough times is topologically trivial in the big group. In a non-U(1) compact Lie group, the covering space is a Lie group with the same Lie algebra, but where all closed loops are contractible. Lie groups are homogeneous, so that any cycle in the group can be moved around so that it starts at the identity, then its lift to the covering group ends at some element $z$, which is a lift of the identity. Going around the loop twice gets you to $z^2$, three times to $z^3$, all lifts of the identity. But there are only finitely many lifts of the identity, because the lifts can't accumulate. This number of times one has to traverse the loop to make it contractible is small, for example if the GUT group is SO(3), the covering group is SU(2), and going around any loop twice is enough. This means that there is a continuous gauge-field configuration in the GUT group that allows the U(1) monopole configuration to unwind itself at short distances, at the cost of not staying in the U(1). 
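The lassoing argument turns on the winding number of the resulting closed path in U(1), which is easy to extract from sampled phases. A minimal sketch; the sample path below is hypothetical and stands in for the holonomy sequence:

```python
import numpy as np

def winding_number(path):
    """Winding number of a closed path of unit complex numbers around U(1)."""
    phases = np.angle(path)            # phases in (-pi, pi]
    unwrapped = np.unwrap(phases)      # remove the artificial 2*pi jumps
    return round((unwrapped[-1] - unwrapped[0]) / (2 * np.pi))

t = np.linspace(0.0, 1.0, 2001)
# A hypothetical holonomy path that wraps U(1) three times, plus a wiggle
# that deforms the path without changing its topology:
path = np.exp(1j * (2 * np.pi * 3 * t + 0.4 * np.sin(2 * np.pi * t)))
print(winding_number(path))   # 3: the sphere would contain 3 units of magnetic charge
```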
To do this with as little energy as possible, you should leave only the U(1) gauge group in the neighborhood of one point, which is called the core of the monopole. Outside the core, the monopole has only magnetic field energy. Hence, the Dirac monopole is a topological defect in a compact U(1) gauge theory. When there is no GUT, the defect is a singularity – the core shrinks to a point. But when there is some sort of short-distance regulator on spacetime, the monopoles have a finite mass. Monopoles occur in lattice U(1), and there the core size is the lattice size. In general, they are expected to occur whenever there is a short-distance regulator. String theory In the universe, quantum gravity provides the regulator. When gravity is included, the monopole singularity can be a black hole, and for large magnetic charge and mass, the black hole mass is equal to the black hole charge, so that the mass of the magnetic black hole is not infinite. If the black hole can decay completely by Hawking radiation, the lightest charged particles cannot be too heavy. The lightest monopole should have a mass less than or comparable to its charge in natural units. So in a consistent holographic theory, of which string theory is the only known example, there are always finite-mass monopoles. For ordinary electromagnetism, the upper mass bound is not very useful because it is about the same size as the Planck mass. Mathematical formulation In mathematics, a (classical) gauge field is defined as a connection over a principal G-bundle over spacetime $M$. $G$ is the gauge group, and it acts on each fiber of the bundle separately. A connection on a $G$-bundle tells you how to glue fibers together at nearby points of $M$. It starts with a continuous symmetry group $G$ that acts on the fiber $F$, and then it associates a group element with each infinitesimal path. Group multiplication along any path tells you how to move from one point on the bundle to another, by having the element associated to a path act on the fiber $F$. In mathematics, the definition of bundle is designed to emphasize topology, so the notion of connection is added on as an afterthought. In physics, the connection is the fundamental physical object. One of the fundamental observations in the theory of characteristic classes in algebraic topology is that many homotopical structures of nontrivial principal bundles may be expressed as an integral of some polynomial over any connection over it. Note that a connection over a trivial bundle can never give us a nontrivial principal bundle. If spacetime is $\mathbb{R}^4$, the space of all possible connections of the $G$-bundle is connected. But consider what happens when we remove a timelike worldline from spacetime. The resulting spacetime is homotopically equivalent to the topological sphere $S^2$. A principal $G$-bundle over $S^2$ is defined by covering $S^2$ by two charts, each homeomorphic to the open 2-ball, such that their intersection is homeomorphic to the strip $S^1\times(0,1)$. 2-balls are homotopically trivial and the strip is homotopically equivalent to the circle $S^1$. So a topological classification of the possible connections is reduced to classifying the transition functions. The transition function maps the strip to $G$, and the different ways of mapping a strip into $G$ are given by the first homotopy group of $G$. So in the $G$-bundle formulation, a gauge theory admits Dirac monopoles provided $G$ is not simply connected, whenever there are paths that go around the group that cannot be deformed to a constant path (a path whose image consists of a single point). 
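The two-chart construction above is exactly how the Dirac monopole potential is usually presented. The sketch below (Gaussian-style units; the standard textbook forms of the north and south patch potentials are assumed, since the article does not write them out) recovers the total flux 4πg from the two equatorial line integrals and reads off the Dirac condition from the winding of the transition function:

```python
import math

g = 1.0    # monopole strength (Gaussian units); total outward flux should be 4*pi*g
r = 2.5    # sphere radius; the result is independent of r

# Azimuthal components of the standard two-patch Dirac vector potentials,
# each regular on its own chart:
def A_north_phi(theta):          # singular only at theta = pi (south pole)
    return g * (1 - math.cos(theta)) / (r * math.sin(theta))

def A_south_phi(theta):          # singular only at theta = 0 (north pole)
    return -g * (1 + math.cos(theta)) / (r * math.sin(theta))

# Stokes' theorem per patch: the flux through each hemisphere equals the
# equatorial line integral of that patch's potential (opposite orientations).
circumference = 2 * math.pi * r * math.sin(math.pi / 2)
flux_upper = A_north_phi(math.pi / 2) * circumference
flux_lower = -A_south_phi(math.pi / 2) * circumference

print(flux_upper + flux_lower, "vs", 4 * math.pi * g)   # both 12.566...

# On the overlap, A_north - A_south points in the phi direction with magnitude
# 2g/(r sin(theta)), i.e. the gradient of 2*g*phi: a pure gauge transformation.
# Its transition function exp(2i*e*g*phi) is single-valued only if 2*e*g is an
# integer, which is the Dirac condition read off as the winding of the
# transition map around U(1).
```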
U(1), which has quantized charges, is not simply connected and can have Dirac monopoles while $\mathbb{R}$, its universal covering group, is simply connected, doesn't have quantized charges and does not admit Dirac monopoles. The mathematical definition is equivalent to the physics definition provided that—following Dirac—gauge fields are allowed that are defined only patch-wise, and the gauge fields on different patches are glued after a gauge transformation. The total magnetic flux is none other than the first Chern number of the principal bundle, and depends only upon the choice of the principal bundle, and not the specific connection over it. In other words, it is a topological invariant. This argument for monopoles is a restatement of the lasso argument for a pure U(1) theory. It generalizes to $d+1$ dimensions with $d \geq 3$ in several ways. One way is to extend everything into the extra dimensions, so that U(1) monopoles become sheets of dimension $d-3$. Another way is to examine the type of topological singularity at a point with the homotopy group $\pi_{d-2}(G)$. Grand unified theories In more recent years, a new class of theories has also suggested the existence of magnetic monopoles. During the early 1970s, the successes of quantum field theory and gauge theory in the development of electroweak theory and the mathematics of the strong nuclear force led many theorists to move on to attempt to combine them in a single theory known as a Grand Unified Theory (GUT). Several GUTs were proposed, most of which implied the presence of a real magnetic monopole particle. More accurately, GUTs predicted a range of particles known as dyons, of which the most basic state was a monopole. The charge on magnetic monopoles predicted by GUTs is either 1 or 2 $g_{\mathrm D}$, depending on the theory. The majority of particles appearing in any quantum field theory are unstable, and they decay into other particles in a variety of reactions that must satisfy various conservation laws. Stable particles are stable because there are no lighter particles into which they can decay and still satisfy the conservation laws. For instance, the electron has a lepton number of one and a unit electric charge, and there are no lighter particles that conserve these values. On the other hand, the muon, essentially a heavy electron, can decay into an electron plus two neutrinos, and hence it is not stable. The dyons in these GUTs are also stable, but for an entirely different reason. The dyons are expected to exist as a side effect of the "freezing out" of the conditions of the early universe, or a symmetry breaking. In this scenario, the dyons arise due to the configuration of the vacuum in a particular area of the universe, according to the original Dirac theory. They remain stable not because of a conservation condition, but because there is no simpler topological state into which they can decay. The length scale over which this special vacuum configuration exists is called the correlation length of the system. A correlation length cannot be larger than causality would allow, and so the correlation length for making magnetic monopoles can be no bigger than the horizon size determined by the metric of the expanding universe. According to that logic, there should be at least one magnetic monopole per horizon volume as it was when the symmetry breaking took place. Cosmological models of the events following the Big Bang make predictions about what the horizon volume was, which lead to predictions about present-day monopole density. 
Early models predicted an enormous density of monopoles, in clear contradiction to the experimental evidence. This was called the "monopole problem". Its widely accepted resolution was not a change in the particle-physics prediction of monopoles, but rather in the cosmological models used to infer their present-day density. Specifically, more recent theories of cosmic inflation drastically reduce the predicted number of magnetic monopoles, to a density small enough to make it unsurprising that humans have never seen one. This resolution of the "monopole problem" was regarded as a success of cosmic inflation theory. (However, of course, it is only a noteworthy success if the particle-physics monopole prediction is correct.) For these reasons, monopoles became a major interest in the 1970s and 80s, along with the other "approachable" predictions of GUTs such as proton decay. Many of the other particles predicted by these GUTs were beyond the abilities of current experiments to detect. For instance, a wide class of particles known as the X and Y bosons are predicted to mediate the coupling of the electroweak and strong forces, but these particles are extremely heavy and well beyond the capabilities of any reasonable particle accelerator to create. Searches for magnetic monopoles Experimental searches for magnetic monopoles can be placed in one of two categories: those that try to detect preexisting magnetic monopoles and those that try to create and detect new magnetic monopoles. Passing a magnetic monopole through a coil of wire induces a net current in the coil. This is not the case for a magnetic dipole or higher order magnetic pole, for which the net induced current is zero, and hence the effect can be used as an unambiguous test for the presence of magnetic monopoles. In a wire with finite resistance, the induced current quickly dissipates its energy as heat, but in a superconducting loop the induced current is long-lived. By using a highly sensitive "superconducting quantum interference device" (SQUID) one can, in principle, detect even a single magnetic monopole. According to standard inflationary cosmology, magnetic monopoles produced before inflation would have been diluted to an extremely low density today. Magnetic monopoles may also have been produced thermally after inflation, during the period of reheating. However, the current bounds on the reheating temperature span 18 orders of magnitude and as a consequence the density of magnetic monopoles today is not well constrained by theory. There have been many searches for preexisting magnetic monopoles. Although there has been one tantalizing event recorded, by Blas Cabrera Navarro on the night of February 14, 1982 (thus, sometimes referred to as the "Valentine's Day Monopole"), there has never been reproducible evidence for the existence of magnetic monopoles. The lack of such events places an upper limit on the number of monopoles of about one monopole per $10^{29}$ nucleons. Another experiment in 1975 resulted in the announcement of the detection of a moving magnetic monopole in cosmic rays by the team led by P. Buford Price. Price later retracted his claim, and a possible alternative explanation was offered by Luis Walter Alvarez. Alvarez's paper demonstrated that the path of the cosmic-ray event claimed to be due to a magnetic monopole could be reproduced by the path followed by a platinum nucleus decaying first to osmium, and then to tantalum. High-energy particle colliders have been used to try to create magnetic monopoles. 
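Before turning to collider production, the induction signature described above can be sketched numerically: a monopole crossing a loop changes the applied flux discontinuously by its full 4πg, while a dipole's two poles cancel once both have passed, so only the monopole leaves a persistent supercurrent in a superconducting loop. A minimal sketch in Gaussian-style units; the loop geometry and charge values are illustrative:

```python
import numpy as np

R = 1.0      # loop radius (loop lies in the z = 0 plane)
g = 1.0      # magnetic charge (Gaussian units); a dipole is modeled as +g and -g

def monopole_flux(z, charge=g):
    """Flux through the loop from a monopole at height z, via the solid angle
    the loop subtends at the pole; the sign flips as the pole crosses the plane."""
    omega = 2 * np.pi * (1 - np.abs(z) / np.hypot(z, R))
    return np.where(z < 0, charge * omega, -charge * omega)

# Heights of the object's center, approaching from far below to far above:
z = np.concatenate([np.linspace(-10, -1e-6, 1000), np.linspace(1e-6, 10, 1000)])

phi_mono = monopole_flux(z)
# Dipole: poles +g and -g separated by a small distance along the axis.
phi_dip = monopole_flux(z + 0.005, g) + monopole_flux(z - 0.005, -g)

# Jump in applied flux as the monopole crosses the plane of the loop:
jump = phi_mono[999] - phi_mono[1000]          # just below minus just above
print(jump, "vs 4*pi*g =", 4 * np.pi * g)      # ~12.566: a persistent signal

# For the dipole the two jumps cancel once both poles have passed:
print(phi_dip[0], phi_dip[-1])                 # ~0 and ~0: no net change
```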
Due to the conservation of magnetic charge, magnetic monopoles must be created in pairs, one north and one south. Due to conservation of energy, only magnetic monopoles with masses less than half of the center of mass energy of the colliding particles can be produced. Beyond this, very little is known theoretically about the creation of magnetic monopoles in high-energy particle collisions. This is due to their large magnetic charge, which invalidates all the usual calculational techniques. As a consequence, collider-based searches for magnetic monopoles cannot, as yet, provide lower bounds on the mass of magnetic monopoles. They can however provide upper bounds on the probability (or cross section) of pair production, as a function of energy. The ATLAS experiment at the Large Hadron Collider currently has the most stringent cross section limits for magnetic monopoles of 1 and 2 Dirac charges, produced through Drell–Yan pair production. A team led by Wendy Taylor searches for these particles based on theories that define them as long-lived (they do not quickly decay), as well as being highly ionizing (their interaction with matter is predominantly ionizing). In 2019 the search for magnetic monopoles in the ATLAS detector reported its first results from data collected from the LHC Run 2 collisions at a center of mass energy of 13 TeV, which at 34.4 fb⁻¹ is the largest dataset analyzed to date. The MoEDAL experiment, installed at the Large Hadron Collider, is currently searching for magnetic monopoles and large supersymmetric particles using nuclear track detectors and aluminum bars around LHCb's VELO detector. The particles it is looking for damage the plastic sheets that comprise the nuclear track detectors along their path, with various identifying features. Further, the aluminum bars can trap sufficiently slowly moving magnetic monopoles. The bars can then be analyzed by passing them through a SQUID. "Monopoles" in condensed-matter systems Since around 2003, various condensed-matter physics groups have used the term "magnetic monopole" to describe a different and largely unrelated phenomenon. A true magnetic monopole would be a new elementary particle, and would violate Gauss's law for magnetism $\nabla\cdot\mathbf{B} = 0$. A monopole of this kind, which would help to explain the law of charge quantization as formulated by Paul Dirac in 1931, has never been observed in experiments. The monopoles studied by condensed-matter groups have none of these properties. They are not a new elementary particle, but rather are an emergent phenomenon in systems of everyday particles (protons, neutrons, electrons, photons); in other words, they are quasi-particles. They are not sources for the $\mathbf{B}$-field (i.e., they do not violate $\nabla\cdot\mathbf{B} = 0$); instead, they are sources for other fields, for example the $\mathbf{H}$-field, the "$\mathbf{B}^*$-field" (related to superfluid vorticity), or various other quantum fields. They are not directly relevant to grand unified theories or other aspects of particle physics, and do not help explain charge quantization—except insofar as studies of analogous situations can help confirm that the mathematical analyses involved are sound. There are a number of examples in condensed-matter physics where collective behavior leads to emergent phenomena that resemble magnetic monopoles in certain respects, including most prominently the spin ice materials. While these should not be confused with hypothetical elementary monopoles existing in the vacuum, they nonetheless have similar properties and can be probed using similar techniques. 
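The contrast drawn above (quasiparticle "monopoles" do not violate $\nabla\cdot\mathbf{B} = 0$, while a true monopole would) is easy to check in integral form: the net flux of any dipole field through a closed surface vanishes, while a hypothetical monopole field gives 4πg. A minimal numerical sketch in Gaussian-style units with an arbitrary dipole moment; both field functions are mine:

```python
import numpy as np

def closed_surface_flux(B_field, radius=1.0, n=400):
    """Numerically integrate B . n_hat over a sphere of the given radius."""
    theta = (np.arange(n) + 0.5) * np.pi / n          # midpoint rule in theta
    phi = (np.arange(2 * n) + 0.5) * np.pi / n        # and in phi (0..2*pi)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    n_hat = np.stack([np.sin(th) * np.cos(ph),
                      np.sin(th) * np.sin(ph),
                      np.cos(th)], axis=-1)
    dA = radius**2 * np.sin(th) * (np.pi / n) ** 2    # area elements
    B = B_field(radius * n_hat)
    return np.sum(np.sum(B * n_hat, axis=-1) * dA)

def dipole(points):
    m = np.array([0.3, 0.0, 1.0])                     # arbitrary dipole moment
    r = np.linalg.norm(points, axis=-1, keepdims=True)
    r_hat = points / r
    m_dot = np.sum(m * r_hat, axis=-1, keepdims=True)
    return (3 * m_dot * r_hat - m) / r**3             # standard dipole field

def monopole(points):
    g = 1.0                                           # hypothetical magnetic charge
    r = np.linalg.norm(points, axis=-1, keepdims=True)
    return g * points / r**3                          # radial 1/r^2 field

print(closed_surface_flux(dipole))     # ~0: the monopole term of a dipole vanishes
print(closed_surface_flux(monopole))   # ~12.566 = 4*pi*g: nonzero monopole charge
```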
Some researchers use the term magnetricity to describe the manipulation of magnetic monopole quasiparticles in spin ice, in analogy to the word "electricity". One example of the work on magnetic monopole quasiparticles is a paper published in the journal Science in September 2009, in which researchers described the observation of quasiparticles resembling magnetic monopoles. A single crystal of the spin ice material dysprosium titanate was cooled to a temperature between 0.6 kelvin and 2.0 kelvin. Using observations of neutron scattering, the magnetic moments were shown to align into interwoven tubelike bundles resembling Dirac strings. At the defect formed by the end of each tube, the magnetic field looks like that of a monopole. Using an applied magnetic field to break the symmetry of the system, the researchers were able to control the density and orientation of these strings. A contribution to the heat capacity of the system from an effective gas of these quasiparticles was also described. This research went on to win the 2012 Europhysics Prize for condensed matter physics. In another example, a paper in the February 11, 2011 issue of Nature Physics describes creation and measurement of long-lived magnetic monopole quasiparticle currents in spin ice. By applying a magnetic-field pulse to a crystal of dysprosium titanate at 0.36 K, the authors created a relaxing magnetic current that lasted for several minutes. They measured the current by means of the electromotive force it induced in a solenoid coupled to a sensitive amplifier, and quantitatively described it using a chemical kinetic model of point-like charges obeying the Onsager–Wien mechanism of carrier dissociation and recombination. They thus derived the microscopic parameters of monopole motion in spin ice and identified the distinct roles of free and bound magnetic charges. In superfluids, there is a field $\mathbf{B}^*$, related to superfluid vorticity, which is mathematically analogous to the magnetic $\mathbf{B}$-field. Because of the similarity, the field $\mathbf{B}^*$ is called a "synthetic magnetic field". In January 2014, it was reported that monopole quasiparticles for the $\mathbf{B}^*$ field were created and studied in a spinor Bose–Einstein condensate. This constitutes the first example of a quasi-magnetic monopole observed within a system governed by quantum field theory. Updates to the theoretical and experimental searches in matter can be found in the reports by G. Giacomelli (2000) and by S. Balestra (2011) in the Bibliography section. See also Bogomolny equations Dirac string Dyon Felix Ehrenhaft Flatness problem Gauss's law for magnetism Ginzburg–Landau theory Halbach array Horizon problem Instanton Magnetic monopole problem Meron Soliton 't Hooft–Polyakov monopole Wu–Yang monopole Magnetic current Notes References Bibliography External links Hypothetical elementary particles Magnetism Gauge theories Hypothetical particles Unsolved problems in physics
Magnetic monopole
[ "Physics", "Astronomy" ]
7,807
[ "Hypothetical particles", "Matter", "Magnetic monopoles", "Astronomical hypotheses", "Unsolved problems in physics", "Hypothetical elementary particles", "Physics beyond the Standard Model", "Subatomic particles" ]
175,476
https://en.wikipedia.org/wiki/Pedestrian
A pedestrian is a person traveling on foot, whether walking or running. In modern times, the term usually refers to someone walking on a road or pavement (US: sidewalk), but this was not the case historically. Pedestrians may also be wheelchair users or other disabled people who use mobility aids. Etymology The word pedestrian is built from the morphemes ped- ('foot') and -ian ('characteristic of'). It is derived from the Latin term pedester ('going on foot') and was first used (in the English language) during the 18th century. It was originally used, and can still be used today, as an adjective meaning plain or dull. However, in this article it takes on its noun form and refers to someone who walks. The word pedestrian may have been used in Middle French in the Recueil des Croniques et Anchiennes Istories de la Grant Bretaigne. History Walking has always been the primary means of human locomotion. The first humans to migrate from Africa, about 60,000 years ago, walked. They walked along the coast of India to reach Australia. They walked across Asia to reach the Americas, and from Central Asia into Europe. With the advent of the car at the beginning of the 20th century, the conventional story is that cars took over and "people chose the car", but there were many groups and movements that held on to walking as their preferred means of daily transport, and some who organised to promote walking and to counterbalance the widely-held view that often favoured cars, e.g. as related by Peter Norton. During the 18th and 19th centuries, pedestrianism (walking) was a popular spectator sport, just as equestrianism (riding) still is in places. One of the most famous pedestrians of that period was Captain Robert Barclay Allardice, known as "The Celebrated Pedestrian", of Stonehaven in Scotland. His most impressive feat was to walk 1 mile (1.6 km) every hour for 1000 hours, which he achieved between 1 June and 12 July 1809. This feat captured many people's imagination, and around 10,000 people came to watch over the course of the event. During the rest of the 19th century, many people tried to repeat this feat, including Ada Anderson who developed it further and walked a half-mile (800 m) each quarter-hour over the 1000 hours. Since the 20th century, interest in walking as a sport has dropped. Racewalking is still an Olympic sport, but fails to catch the public attention it once did. However major walking feats are still performed, such as the Land's End to John o' Groats walk in the United Kingdom, and the traversal of North America from coast to coast. The first person to walk around the world was Dave Kunst, who started his walk traveling east from Waseca, Minnesota on 20 June 1970 and completed his journey on 5 October 1974, when he re-entered the town from the west. These feats are often tied to charitable fundraising and are undertaken, among others, by celebrities such as Sir Jimmy Savile and Ian Botham. Footpaths and roads Outdoor pedestrian networks Roads often have a designated footpath for pedestrian traffic, called the sidewalk in North American English, the pavement in British English, and the footpath in Australian and New Zealand English. There are also footpaths not associated with a road; these include urban short cuts and also rural paths used mainly by ramblers, hikers, or hill-walkers. Footpaths in mountainous or forested areas may also be called trails. Pedestrians share some footpaths with horses and bicycles: these paths may be known as bridleways. 
Other byways used by walkers are also accessible to vehicles. There are also many roads with no footpath. Some modern towns (such as the new suburbs of Peterborough in England) are designed with the network of footpaths and cycle paths almost entirely separate from the road network. The term trail is also used by the authorities in some countries to mean any footpath that is not attached to a road or street. If such footpaths are in urban environments and are meant for both pedestrians and pedal cyclists, they can be called shared use paths or multi-use paths in general and official usage. Some shopping streets are for pedestrians only. Some roads have special pedestrian crossings. A bridge solely for pedestrians is a footbridge. In Britain, regardless of whether there is a footpath, pedestrians have the legal right to use most public roads, excluding motorways and some toll tunnels and bridges such as the Blackwall Tunnel and the Dartford Crossing — although sometimes it may endanger the pedestrian and other road users. The UK Highway Code advises that pedestrians should walk in the opposite direction to oncoming traffic on a road with no footpath. Indoor pedestrian networks Indoor pedestrian networks connect the different rooms or spaces of a building. Airports, museums, campuses, hospitals and shopping centres might have tools allowing for the computation of the shortest paths between two destinations. Their increasing availability is due to the complexity of path finding in these facilities. Different mapping tools, such as OpenStreetMap, are extending to indoor spaces. Pedestrianisation Pedestrianisation might be considered as a process of removing vehicular traffic from city streets or restricting vehicular access to streets for use by pedestrians, to improve the environment and safety. Efforts are under way by pedestrian advocacy groups to restore pedestrian access to new developments, especially to counteract newer developments, 20% to 30% of which in the United States do not include footpaths. Some activists advocate large pedestrian zones where only pedestrians, or pedestrians and some non-motorised vehicles, are allowed. Many urbanists have extolled the virtues of pedestrian streets in urban areas. In the US the proportion of households without a car is 8%, but a notable exception is New York City, the only locality in the United States where more than half of all households do not own a car (the figure is even higher in Manhattan, over 75%). The use of cars for short journeys is officially discouraged in many parts of the world, and construction or separation of dedicated walking routes in city centres receives a high priority in many large cities in Western Europe, often in conjunction with public transport enhancements. In Copenhagen, the world's longest pedestrian shopping area, Strøget, has been developed over the last 40 years, principally due to the work of Danish architect Jan Gehl, a principle of urban design known as copenhagenisation. Safety issues Safety is an important issue where cars can cross the pedestrian way. Drivers and pedestrians share some responsibility for improving safety of road users. Road traffic crashes are not inevitable; they are both predictable and preventable. Key risks for pedestrians are well known. 
Among the well-documented factors are driver behaviour (including speeding and drunk driving); missing infrastructure (including pavements, crossings and islands); and vehicle designs which are not forgiving to pedestrians struck by a vehicle. The Traffic Injury Research Foundation describes pedestrians as vulnerable road users because they are not protected in the same way as occupants of motor vehicles. There is an increasing focus on pedestrians versus motor vehicles in many countries. Most pedestrian injuries occur while they are crossing a street. Most crashes involving a pedestrian occur at night. Most pedestrian fatalities result from a frontal impact. In such a situation, an adult pedestrian is struck by a car front (for instance, the bumper touches either the leg or knee-joint area), accelerating the lower part of the body forward while "the upper body is rotated and accelerated relative to the car," at which point the pelvis and thorax are hit. Then the head hits the windscreen at the velocity of the striking car. Finally, the victim falls to the ground. Research has shown that urban crimes, or the mere perception of crimes, severely affect the mental and physical health of pedestrians. Inter-pedestrian behaviour, without the involvement of vehicles, is also a key factor in pedestrian safety. Some special interest groups describe pedestrian fatalities on American roads as carnage. Five states – Arizona, California, Florida, Georgia and Texas – are the site of 46% of all pedestrian deaths in the country. The advent of SUVs is considered a leading cause; speculation of other factors includes population growth, driver distraction with mobile phones, poor street lighting, alcohol and drugs and speeding. Cities have had mixed results in addressing pedestrian safety with Vision Zero plans: Los Angeles has failed while New York City has had success. Nonetheless, in the US, some pedestrians have just 40 seconds to cross a street 10 lanes wide. Pedestrian fatalities are much more common in accident situations in the European Union than in the United States. In the European Union countries, more than 200,000 pedestrians and cyclists are injured annually. Also, each year, more than 270,000 pedestrians lose their lives on the world's roads. At a global level pedestrians constitute 22% of all road deaths, but might be two-thirds in some countries. Pedestrian fatalities, in 2016, were 2.6 per million population in the Netherlands, 4.3 in Sweden, 4.5 in Wales, 5.3 in New Zealand, 6.0 in Germany; 7.1 in the whole United Kingdom, 7.5 in Australia, 8.4 in France, 8.4 in Spain, 9.4 in Italy, 11.1 in Israel, 13 in Japan, 13.8 in Greece, 18.5 in the United States, 22.9 in Poland, and 36.3 in Romania. Safety trends Road design impact on safety It is well documented that a minor increase in speed might greatly increase the likelihood of a crash, and exacerbate resulting casualties. For this reason, the recommended maximum speed is 30 km/h (20 mph) in residential and high pedestrian traffic areas, with enforced traffic rules on speed limits and traffic-calming measures. The design of roads and streets plays a key role in pedestrian safety. Roads are too often designed for motorized vehicles, without taking into account pedestrian and bicycle needs. The non-existence of sidewalks and signals increases risk for pedestrians. This defect might more easily be observed on arterial roadways, intersections and fast-speed lanes without adequate attention to pedestrian facilities. 
For instance, an assessment of roads in countries from many continents shows that 84% of roads are without pedestrian footpaths, while the posted speed limit is greater than 40 km/h. Among the factors which reduce road safety for pedestrians are wider lanes, roadway widening, and roadways designed for higher speeds and with increased numbers of traffic lanes. For this reason, some European cities such as Freiburg (Germany) have lowered the speed limit to 30 km/h on 90% of its streets, to reduce risk for its 15,000 people. With such a policy, 24% of daily trips are performed on foot, against 28% by bicycle, 20% by public transport and 28% by car. (See Zone 30.) A similar set of policies to discourage the use of cars and increase safety for pedestrians has been implemented by the Northern European capitals of Oslo and Helsinki. In 2019, this resulted in both cities counting zero pedestrian deaths for the first time. Seasonality In Europe, pedestrian fatalities have a seasonal factor, with 6% of annual fatalities occurring in April but 13% (more than twice as many) in December. The reasons for this variation are likely complex. Health benefits and environment Regular walking is important both for human health and for the natural environment. Frequent exercise such as walking tends to reduce the chance of obesity and related medical problems. In contrast, using a car for short trips tends to contribute both to obesity and, via vehicle emissions, to climate change: internal combustion engines are least efficient and most polluting during their first minutes of operation (engine cold start). General availability of public transportation encourages walking, as it will not, in most cases, take one directly to one's destination. Unicode In Unicode, the code point for "pedestrian" is U+1F6B6. In XML and HTML, the string &#x1F6B6; produces 🚶. See also Dérive aimless walking usually through city streets Footpath Jaywalking Junior safety patrol List of U.S. cities with most pedestrian commuters Pedestrian village Pedestrian zone Street reclamation Traffic calming Trail ethics Walkability Walking audit Walking References External links Transport terminology Walking
Pedestrian
[ "Physics" ]
2,511
[ "Physical systems", "Transport", "Transport terminology" ]
175,499
https://en.wikipedia.org/wiki/Max%20Tegmark
Max Erik Tegmark (born 5 May 1967) is a Swedish-American physicist, machine learning researcher and author. He is best known for his book Life 3.0 about what the world might look like as artificial intelligence continues to improve. Tegmark is a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute. Biography Early life Tegmark was born in Sweden to Karin Tegmark and Jewish American-born professor of mathematics Harold S. Shapiro. While in high school, he and a friend created and sold a word processor written in pure machine code for the Swedish eight-bit computer ABC 80, and a 3D Tetris-like game called Frac. Tegmark left Sweden in 1990 after receiving his M.S.E in engineering physics from the KTH Royal Institute of Technology and a B.A. in economics the previous year at the Stockholm School of Economics. His first academic venture beyond Scandinavia brought him to California, where he studied physics at the University of California, Berkeley, earning his M.A. in 1992, and Ph.D. in 1994 under the supervision of Joseph Silk. Tegmark was an assistant professor at the University of Pennsylvania, receiving tenure in 2003. In 2004, he joined the Massachusetts Institute of Technology's department of physics. Career His research has focused on cosmology, combining theoretical work with new measurements to place constraints on cosmological models and their free parameters, often in collaboration with experimentalists. He has over 200 publications, of which nine have been cited over 500 times. He has developed data analysis tools based on information theory and applied them to cosmic microwave background experiments such as COBE, QMAP, and WMAP, and to galaxy redshift surveys such as the Las Campanas Redshift Survey, the 2dF Survey and the Sloan Digital Sky Survey. With Daniel Eisenstein and Wayne Hu, he introduced the idea of using baryon acoustic oscillations as a standard ruler. With Angelica de Oliveira-Costa and Andrew Hamilton, he discovered the anomalous multipole alignment in the WMAP data sometimes referred to as the "axis of evil". With Anthony Aguirre, he developed the cosmological interpretation of quantum mechanics. His 2000 paper on quantum decoherence of neurons concluded that decoherence seems too rapid for Roger Penrose's "quantum microtubule" model of consciousness to be viable. Tegmark has also formulated the "mathematical universe hypothesis", whose only postulate is that "all structures that exist mathematically exist also physically". In 2014, Tegmark published the book Our Mathematical Universe, which presents his idea at greater length. Tegmark suggests that the theory is simple in having no free parameters at all, and that in those structures complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically "real" world. The "mathematical universe" hypothesis has been criticized by some other scientists as being both overly speculative and unscientific in nature. For example, mathematical physicist Edward Frenkel characterized it as closer to "science fiction and mysticism" than "the realm of science." Tegmark was elected Fellow of the American Physical Society in 2012 for, according to the citation, "his contributions to cosmology, including precision measurements from cosmic microwave background and galaxy clustering data, tests of inflation and gravitation theories, and the development of a new technology for low-frequency radio interferometry". 
He was awarded the Royal Swedish Academy of Engineering Science's Gold Medal in 2019 for, according to the citation, "his contributions to our understanding of humanity’s place in the cosmos and the opportunities and risks associated with artificial intelligence. He has courageously tackled these existential questions in his research and, in a commendable way, succeeded in communicating the issues to a wider public." Tegmark is interviewed in the 2018 documentary on artificial intelligence Do You Trust This Computer? From 2020 to 2023, Tegmark led a research team-turned-nonprofit at MIT that developed an AI-driven news aggregator known as "Improve the News". "Improve the News" was rebranded to "Verity News" in 2023. Personal life He married astrophysicist Angelica de Oliveira-Costa in 1997, and divorced in 2009. They have two sons. On August 5, 2012, Tegmark married Meia Chita. In the media In 2006, Tegmark was one of fifty scientists interviewed by New Scientist about their predictions for the future. His prediction: "In 50 years, you may be able to buy T-shirts on which are printed equations describing the unified laws of our universes." Tegmark appears in the 2007 documentary Parallel Worlds, Parallel Lives in which he is interviewed by Mark Oliver Everett, son of the founder of the many-worlds interpretation of quantum mechanics, Hugh Everett. Tegmark also appears in "Who's Afraid of a Big Black Hole?", "What Time is It?", "To Infinity and Beyond", "Is Everything We Know About The Universe Wrong?", "What is Reality?" and "Which Universe Are We In?", all part of the BBC's Horizon scientific series of programmes. He appears in several episodes of Sci Fi Science: Physics of the Impossible, an American documentary television series on science which first aired in the United States on December 1, 2009. The series is hosted by theoretical physicist Michio Kaku. Tegmark was interviewed by Morgan Freeman in seasons 2 and 3 of Through the Wormhole in 2011–2012. Tegmark participated in the episode "Zooming Out" of BBC World Service's The Forum, which first aired on BBC Radio 4 on 26 April 2014. In 2014, Tegmark co-authored an op-ed in The Huffington Post with Stephen Hawking, Frank Wilczek and Stuart Russell on the movie Transcendence. In 2014, "The Perpetual Earth Program," a play based on Tegmark's book Our Mathematical Universe, was mounted in New York City as part of the Planet Connections Theatre Festival. In 2014, he featured in The Principle, a documentary examining the Copernican Principle. In 2015, Tegmark participated in an episode of Sam Harris's Waking Up podcast entitled "The Multiverse & You (& You & You & You...)" where they discussed topics such as artificial intelligence and the mathematical universe hypothesis. In 2017, Tegmark gave a talk entitled "Effective altruism, existential risk & existential hope" at the world's largest annual conference of the effective altruism movement. In 2017, Tegmark participated in an episode of Sam Harris's Waking Up podcast entitled "The Future of Intelligence" where they discussed topics such as artificial intelligence and definitions of life. In 2018, Tegmark took part in a conversation with AI researcher Lex Fridman about Artificial General Intelligence as part of an MIT course on AGI. He was the first guest on the Lex Fridman podcast. He was interviewed again on the Lex Fridman podcast in 2021 and in 2023. 
In 2023, Tegmark drew controversy in the media when reports surfaced that he had signed off on behalf of the Future of Life Institute on a $100,000 grant to far-right media outlet Nya Dagbladet. He later said that the Future of Life Institute "ultimately decided to reject it because of what our subsequent due diligence uncovered", that they rejected it long before the media became involved, and that the institute "finds Nazi, neo-Nazi or pro-Nazi groups or ideologies despicable and would never knowingly support them". An official statement from the Future of Life Institute further expands on this: "FLI finds groups or ideologies espousing antisemitism, white supremacy, or racism despicable and would never knowingly support any such group". In 2023, Time named Tegmark one of the 100 most influential people in AI. Works Our Mathematical Universe (2014) Life 3.0: Being Human in the Age of Artificial Intelligence (2017) See also List of astronomers List of physicists References External links 1967 births 21st-century American astronomers Swedish expatriates in the United States Swedish cosmologists Living people Massachusetts Institute of Technology School of Science faculty KTH Royal Institute of Technology alumni Stockholm School of Economics alumni 20th-century Swedish astronomers Swedish people of Jewish descent 20th-century American physicists MIT Center for Theoretical Physics faculty Quantum mind People associated with effective altruism Fellows of the American Physical Society AI safety scientists
Max Tegmark
[ "Physics" ]
1,786
[ "Quantum mind", "Quantum mechanics" ]
175,507
https://en.wikipedia.org/wiki/Emergency%20Alert%20System
The Emergency Alert System (EAS) is a national warning system in the United States designed to allow authorized officials to broadcast emergency alerts and warning messages to the public via cable, satellite and broadcast television and AM, FM and satellite radio. Informally, Emergency Alert System is sometimes conflated with its mobile phone counterpart Wireless Emergency Alerts (WEA), a different but related system. However, both the EAS and WEA, among other systems, are coordinated under the Integrated Public Alert and Warning System (IPAWS). The EAS, and more broadly IPAWS, allows federal, state, and local authorities to efficiently broadcast emergency alert and warning messages across multiple channels. The EAS became operational on January 1, 1997, after being approved by the Federal Communications Commission (FCC) in November 1994, replacing the Emergency Broadcast System (EBS), and largely supplanted Local Access Alert systems, though Local Access Alert systems are still used from time to time. Its main improvement over the EBS, and perhaps its most distinctive feature, is its application of a digitally encoded audio signal known as Specific Area Message Encoding (SAME), which is responsible for the characteristic "screeching" or "chirping" sounds at the start and end of each message. The first signal is the "header" which encodes, among other information, the alert type and locations, or the specific area that should receive the message. The last short burst marks the end-of-message. These signals are read by specialized encoder-decoder equipment. This design allows for automated station-to-station relay of alerts to only the area the alert was intended for. Like the Emergency Broadcast System, the system is primarily designed to allow the president of the United States to address the country via all radio and television stations in the event of a national emergency. Despite this, neither the system nor its predecessors have been used in this manner. The ubiquity of news coverage in these situations, such as during the September 11 attacks, has been credited with making usage of the system unnecessary or redundant. In practice, it is used at a regional scale to distribute information regarding imminent threats to public safety, such as severe weather situations (including flash floods and tornadoes), AMBER Alerts, and other civil emergencies. It is jointly coordinated by the Federal Emergency Management Agency (FEMA), the Federal Communications Commission (FCC) and the National Oceanic and Atmospheric Administration (NOAA). The EAS regulations and standards are governed by the Public Safety and Homeland Security Bureau of the FCC. All broadcast television, broadcast and satellite radio stations, as well as multichannel video programming distributors (MVPDs), are required to participate in the system. Technical concept Messages in the EAS are composed of four parts: a digitally encoded Specific Area Message Encoding (SAME) header, an attention signal, an audio announcement, and a digitally encoded end-of-message marker. The SAME header is the most critical part of the EAS design. 
It contains information about who originated the alert (the president, state or local authorities, the National Weather Service (NOAA/NWS), or the broadcaster), a short, general description of the event (tornado, flood, severe thunderstorm), the areas affected (up to 32 counties or states), the expected duration of the event (in minutes), the date and time it was issued (in UTC), and an identification of the originating station. There are 79 radio stations designated as National Primary Stations in the Primary Entry Point (PEP) System to distribute presidential messages to other broadcast stations and cable systems. The National Emergency Message (formerly known as the Emergency Action Notification) is the notice to broadcasters that the president of the United States or their designee will deliver a message over the EAS via the PEP system. The government has stated that the system would allow a president to speak during a national emergency within 10 minutes. Primary Entry Point stations The National Public Warning System, also known as the Primary Entry Point (PEP) stations, is a network of 77 radio stations that are, in coordination with FEMA, used to originate emergency alert and warning information to the public before, during, and after incidents and disasters. PEP stations are equipped with additional and backup communications equipment and power generators designed to enable them to continue broadcasting information to the public during and after an event. Beginning with WJR Detroit and WLW Cincinnati in 2016, FEMA began the process of constructing transportable studio shelters at the transmitters of 33 PEP stations, which feature broadcasting equipment, emergency provisions, a rest area, and an air filtration system. NPWS project manager Manny Centeno explained that these shelters were designed to "[expand] the survivability of these stations to include an all hazards platform, which means chemical, biological, radiological air protection and protection from electromagnetic pulse." Communication links The FEMA National Radio System (FNARS) "Provides Primary Entry Point service to the Emergency Alert System", and acts as an emergency presidential link into the EAS. The FNARS net control station is located at the Mount Weather Emergency Operations Center. Once an EAN is received by an EAS participant from a PEP station (or any other participant), the message then "daisy chains" through the network of participants. "Daisy chains" form when one station receives a message from multiple other stations and the station then forwards that message to multiple other stations. This process creates many redundant paths through which the message may flow, increasing the likelihood that the message will be received by all participants and adding to the survivability of the system. Each EAS participant is required to monitor at least two other participants. EAS header Because the header lacks error detection codes, it is repeated three times for redundancy. EAS decoders compare the received headers against one another, looking for an exact match between any two, eliminating most errors which can cause an activation to fail. The decoder then decides whether to ignore the message or to relay it on the air if the message applies to the local area served by the station (following parameters set by the broadcaster). The SAME header bursts are followed by an EAS attention tone, which lasts between 8 and 25 seconds, depending on the originating station. The tone is 1050 Hz on a NOAA Weather Radio station. 
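To make the header layout enumerated above concrete, here is a sketch of a SAME-style header builder. The field order (originator, event code, FIPS location codes, purge time, issue time, station ID) follows the published SAME format, but the helper function and its example values are hypothetical, and a real encoder would also handle the AFSK modulation and the three-fold repetition, which this sketch omits:

```python
from datetime import datetime, timezone

def same_header(originator, event, locations, purge, issued, station):
    """Build a SAME header string: ZCZC-ORG-EEE-PSSCCC(-...)+TTTT-JJJHHMM-LLLLLLLL-
    `purge` is given in SAME's HHMM-style encoding (e.g. 30 -> "0030" = 30 min)."""
    if not 1 <= len(locations) <= 31:
        raise ValueError("SAME allows between 1 and 31 location codes")
    loc_part = "-".join(f"{code:06d}" for code in locations)
    issue_part = issued.strftime("%j%H%M")     # Julian day plus UTC hour/minute
    station_part = f"{station:/<8}"            # pad the station ID with '/' to 8 chars
    return f"ZCZC-{originator}-{event}-{loc_part}+{purge:04d}-{issue_part}-{station_part}-"

# Hypothetical tornado warning for two county codes, valid for 30 minutes:
hdr = same_header("WXR", "TOR", [39035, 39055], 30,
                  datetime(2024, 4, 2, 17, 0, tzinfo=timezone.utc), "KCLE/NWS")
print(hdr)   # ZCZC-WXR-TOR-039035-039055+0030-0931700-KCLE/NWS-
```

In a real decoder, three received copies of such a string would be compared and any two exact matches accepted, per the redundancy scheme described above.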
On commercial broadcast stations, an attention signal of combined 853 Hz and 960 Hz sine waves is used instead, the same signal used by the older Emergency Broadcast System. These tones have become infamous, and can be considered both frightening and annoying by listeners; in fact, the two tones, which form approximately the interval of a just major second at an unusually high pitch, were chosen specifically for their ability to draw attention, due to their unpleasantness on the human ear. The SAME header is equally known for its shrillness, which many have found to be startling. The "two-tone" system has not been required since 1998, and is to be used only for audio alerts before EAS messages. Like the EBS, the attention signal is followed by a voice message describing the details of the alert. The message ends with 3 bursts of the AFSK "EOM", or End of Message, which is the text NNNN, preceded each time by the binary 10101011 calibration preamble. IPAWS Under a 2006 executive order issued by George W. Bush, the U.S. government was instructed to create "an effective, reliable, integrated, flexible, and comprehensive" public warning system. This was accomplished via expansions to the aforementioned PEP network, and the development of the Integrated Public Alert and Warning System (IPAWS)—a national aggregator and distributor of alert information using the XML-based Common Alerting Protocol (CAP) and an internet network. IPAWS can be used to distribute alert information to EAS participants, supported mobile phones (Wireless Emergency Alerts), and other platforms. IPAWS also allows the audio portion of an EAS message to utilize higher quality digital audio, rather than needing to carry the audio off-air from the originating station. Under an FCC report and order issued in 2007, EAS participants would be required to migrate to digital equipment supporting CAP within 180 days of the specification's adoption by FEMA. This was originally scheduled for September 30, 2010, but the deadline was later delayed to June 30, 2012 at the request of broadcasters. The FCC has established that IPAWS is not a full substitute for the SAME protocol, as it is vulnerable to situations that may make internet connectivity unavailable. Therefore, as a backup distribution path, broadcasters must also convert CAP messages to SAME headers to enable backwards compatibility with the existing "daisy chain" method of EAS distribution. In December 2021, the FCC issued a notice of proposed rulemaking seeking to prioritize the display of alert audio and text from CAP messages, in order to provide higher quality alert audio, improve parity between the visual display and audio for the benefit of the hearing impaired, and to reduce the amount of technical jargon contained within the visual display. The rules were enacted in September 2022, with a deadline of December 12, 2023, for compliance; the FCC later granted an extension to some broadcasters due to a delay in the release of associated software updates by EAS decoder vendor Sage. 
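The attention signals described above are straightforward to synthesize for bench testing. A minimal sketch with numpy and the standard-library wave module; the 853/960 Hz pair and the 1050 Hz weather-radio tone come from the text, while the sample rate, amplitude, and eight-second duration are arbitrary choices of mine. (Note that FCC rules restrict actual transmission of these tones.)

```python
import wave
import numpy as np

RATE = 8000                       # samples per second (arbitrary choice)

def tone(freqs, seconds):
    """Sum of equal-amplitude sine waves at the given frequencies, as 16-bit PCM."""
    t = np.arange(int(RATE * seconds)) / RATE
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return (x / len(freqs) * 32000).astype(np.int16)

def save(name, samples):
    with wave.open(name, "wb") as w:
        w.setnchannels(1)         # mono
        w.setsampwidth(2)         # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(samples.tobytes())

save("eas_attention.wav", tone([853.0, 960.0], 8.0))   # broadcast two-tone signal
save("nwr_attention.wav", tone([1050.0], 8.0))         # NOAA Weather Radio tone
```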
Participants are to retain the latest version of the EAS handbook. EAS participants are required by federal law to relay National Emergency Messages (EAN, formerly Emergency Action Notification) immediately (47 CFR Part 11.54). Broadcasters have traditionally been allowed to opt out of relaying other alerts, such as severe weather warnings and child abduction emergencies (AMBER Alerts). In practice, television stations with local news departments will usually interrupt regularly scheduled programming during newsworthy situations (such as severe weather) to provide extended coverage. If possible, EAS participants must transmit the audio, and (where applicable) a visual display containing the extended text, from the associated CAP message. EAS participants are required to keep logs of all received messages. Logs may be kept by hand but are usually kept automatically by a small receipt printer in the encoder/decoder unit. Logs may also be kept electronically inside the unit as long as there is access to an external printer or a method of transferring them to a computer.

System tests
All EAS equipment must be tested on a weekly basis. The required weekly test (RWT) consists, at a minimum, of the header and end-of-message tones. Though an RWT does not need an audio or graphic message announcing the test, many stations provide one as a courtesy to the public. Television stations are not required to transmit a video message for weekly tests. RWTs are scheduled by the station on random days and times (though quite often during late-night or early-afternoon hours), and are generally not relayed. Required monthly tests (RMTs) are generally originated by the local or state primary station, a state emergency management agency, or the National Weather Service, and are then relayed by broadcast stations and cable channels. RMTs must be performed between 8:30 a.m. and local sunset during odd-numbered months, and between local sunset and 8:30 a.m. during even-numbered months. Received monthly tests must be retransmitted within 60 minutes of receipt. Additionally, an RMT should not be scheduled or conducted during an event of great importance, such as a pre-announced presidential speech, coverage of a national or local election, major local or national news coverage outside regularly scheduled newscast hours, or a major national sporting event such as the Super Bowl or World Series; other events, such as the Indianapolis 500 and the Olympic Games, are mentioned in individual EAS state plans. An RWT is not required during a calendar week in which an RMT is scheduled. No testing is required during a calendar week in which all parts of the EAS (header burst, attention signal, audio message, and end-of-message burst) have been legitimately activated. In July 2018, in response to the aftermath of the false missile alert in Hawaii earlier in the year (which was caused by operator error during an internal drill), the FCC announced that it would take steps to promote public awareness and improve the efficiency of the system, including requiring safeguards to prevent the distribution of false alarms, the ability to authorize "live code" tests that simulate the process of and response to an actual emergency, and authorization to use the EAS tones in public service announcements that promote awareness of the system.
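The odd/even-month RMT windows described above reduce to a small date-and-time check. The sketch below is illustrative only; local sunset varies with station location and date, so it is passed in rather than computed:

```python
# Toy check of the RMT scheduling rule described above:
# odd-numbered months  -> between 8:30 a.m. local time and local sunset;
# even-numbered months -> between local sunset and 8:30 a.m.
# Sunset is supplied by the caller (it depends on station location/date).
from datetime import datetime, time

DAY_START = time(8, 30)

def rmt_window_ok(when: datetime, sunset: time) -> bool:
    daytime = DAY_START <= when.time() < sunset
    if when.month % 2 == 1:      # odd month: daytime test
        return daytime
    return not daytime           # even month: nighttime test

# A 2:00 p.m. test in March (odd month) with a 7:12 p.m. sunset is valid:
print(rmt_window_ok(datetime(2024, 3, 12, 14, 0), sunset=time(19, 12)))  # True
# The same 2:00 p.m. slot is invalid in April (even month):
print(rmt_window_ok(datetime(2024, 4, 9, 14, 0), sunset=time(19, 33)))   # False
```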
Nationwide tests
On February 3, 2011, the FCC announced plans and procedures for national EAS tests, which involve all television and radio stations connected to the EAS, as well as all cable and satellite services in the United States. They are not relayed on the NOAA Weather Radio (NOAA/NWS) network, as it is an initiation-only network that does not receive messages from the PEP network. The national test would transmit and relay an Emergency Action Notification on November 9, 2011, at 2:00 p.m. EST. The Federal Communications Commission found that only half of the participants received the message via the Integrated Public Alert and Warning System, that some "failed to receive or retransmit alerts due to erroneous equipment configuration, equipment readiness and upkeep issues, and confusion regarding EAS rules and technical requirements", and that participation among low-power broadcasters was low. Many participants reported missing visuals or audio; DirecTV viewers heard Lady Gaga music instead. To reduce viewer confusion, the FCC stated that future national tests would be delivered under the new event code "National Periodic Test" ("NPT") and would list "United States" as the location. A second national test, the first classified as an NPT, occurred on September 28, 2016 as part of National Preparedness Month. A third national periodic test occurred on September 27, 2017. The fourth NPT occurred on October 3, 2018 (delayed from September 20, 2018, due to Hurricane Florence). It was preceded by the first mandatory wireless emergency alert test. The fifth NPT occurred on August 7, 2019, moved earlier than in past years to keep it out of the heart of the Atlantic hurricane season. The test focused exclusively on distribution to broadcast outlets and television providers via the Primary Entry Point network, to gauge the efficiency of alert distribution in the event the internet cannot be used. The sixth NPT was postponed to 2021 amid the ongoing COVID-19 pandemic "out of consideration for the unusual circumstances and working conditions for those in the broadcast and cable industry." The sixth test occurred on August 11, 2021, at 2:20 p.m. EDT, and involved the WEA system alongside television and radio. As of 2022, as part of a clarification and streamlining of the terminology used in messages, NPTs are referred to in the test message as a "Nationwide Test of the Emergency Alert System" issued by the United States Government. On May 3, 2022, it was announced that the seventh NPT would not take place during 2022 and would instead occur in early 2023. On August 3, 2023, FEMA and the FCC announced that the seventh NPT would occur on October 4, 2023, with a backup date of October 11, 2023. The test commenced just before 2:20 p.m. ET and consisted of an alert on TV and radio as well as a WEA on all cell phones.

Additions and proposals
The number of event types in the national system has grown to eighty. At first, all but three of the events (civil emergency message, immediate evacuation, and emergency action notification [national emergency]) were weather-related (such as a tornado warning). Since then, several classes of non-weather emergencies have been added, including, in most states, the AMBER Alert System for child abduction emergencies. In 2016, three additional weather alert codes were authorized for use in relation to hurricane events: Extreme Wind Warning (EWW), Storm Surge Warning (SSW), and Storm Surge Watch (SSA).
In 2004, the FCC issued a Notice of Proposed Rulemaking (NPRM) seeking comment on whether EAS in its present form is the most effective mechanism for warning the American public of an emergency and, if not, on how EAS can be improved, such as by mandatory text messages to cellphones regardless of subscription. As noted above, rules implemented by the FCC on July 12, 2007 provisionally endorse incorporating CAP with the SAME protocol. In November 2020, Congress passed the Reliable Emergency Alert Distribution Improvement (READI) Act. First sponsored by Hawaii Senator Brian Schatz in response to the Hawaii false missile alert, it amends the Warning, Alert, and Response Network (WARN) Act to require distribution of wireless alerts issued by the administrator of FEMA, and directs the FCC to establish a means of reporting false alerts, encourage the establishment of State Emergency Communications Committees (SECCs) that would meet annually to evaluate their EAS plans, require the repetition of alerts surrounding "emergencies of national significance", and open an inquiry into the feasibility of implementing the EAS on internet-related services.

Limitations
The EAS can only be used to relay audio messages that preempt all programming; as the intent of an Emergency Action Notification is to serve as a "last-ditch effort to get a message out if the president cannot get to the media", it can easily be made redundant by the immediate and constant coverage that major weather events and other newsworthy situations—such as, most prominently, the September 11 attacks in 2001—receive from television broadcasters and news channels. Following the attacks, then-FCC chairman Michael K. Powell cited "the ubiquitous media environment" as justification for not using the EAS in their immediate aftermath. Glenn Collins of The New York Times acknowledged these limitations, noting that "no president has ever used the current [EAS] system or its technical predecessors in the last 50 years, despite the Soviet missile crisis, a presidential assassination, the Oklahoma City bombing, major earthquakes and three recent high-alert terrorist warnings", and that using it would actually have hindered the availability of live coverage from media outlets. Following the tornado outbreak of March 3, 2019, Birmingham, Alabama NWS meteorologist Kevin Laws told CNN that he personally wished alerts could be updated in real time to reflect the unpredictable nature of weather events, noting that the storm system's unexpected change in trajectory towards Lee County resulted in only a nine-minute warning (the resulting tornado killed 23 people). The trend of cord cutting has led to concerns that viewers' reduced use of broadcast media in favor of streaming video services could inhibit their ability to receive emergency information (notwithstanding the availability of alerts on mobile phones). The READI Act called for an inquiry into the distribution of alerts via internet platforms.

Incidents

False alarms
On February 20, 1971, an Emergency Action Notification (now National Emergency Message) was accidentally sent out when the wrong message was played during what was intended to be a routine test of the then-used Emergency Broadcast System; stations later announced that it was a false alarm. The number of stations affected is not entirely known; however, recordings of WOWO-AM and WCCO-AM activating the EAN are available online.
The WOWO recording features the station playing standby music while trying to get more information, which would waste valuable time during an actual emergency. In the WCCO recording, the station immediately realized the wrong tape had been played, and listeners were told to disregard it; the recording also contains the terms "CONELRAD Advisory" (CONELRAD being the Emergency Broadcast System's predecessor) and "Emergency Alert System" (used erroneously, as the actual EAS was still 26 years away). On April 21, 1997, several television and radio stations in Florida, Hawaii, Louisiana, and Ohio mistakenly received a false Emergency Action Notification. Early indications pointed to human error at the National Emergency Coordination Center in Virginia, which misrouted a test that had been requested for the Chicago metropolitan area to check one radio station's then-new EAS equipment as part of the EBS/EAS transition. On February 1, 2005, in Connecticut, an alert was mistakenly issued calling for the immediate evacuation of the entire state. The alert contained no specific detail on why it had been issued. The message was broadcast due to operator error while conducting an unannounced but scheduled statewide test. A study conducted following the incident reported that at least 11% of residents actually saw the warning live, and that 63% of those surveyed were "a little or not at all concerned"—citing a suspicious lack of detail in the message, which a legitimate alert would include. Only 1% of those surveyed actually attempted to leave the state. Connecticut State Police did not receive any calls related to the incident. On June 26, 2007, at 7:35 a.m. CDT, an Emergency Action Notification was accidentally issued in Illinois, when a new satellite receiver at the state's EOC was accidentally connected to a live system before final internal testing of the new delivery path had been completed. The alert was followed by dead air, and then by audio from designated station 720 WGN in Chicago being simulcast across almost every television and radio station in the Chicago area and throughout much of Illinois. A confused Spike O'Dell, host of the station's morning show at the time, was heard on-air wondering "what that beeping was all about". On May 19, 2010, NOAA Weather Radio and CSEPP tone alert radios in the Hermiston, Oregon area, near the Umatilla Chemical Depot, were activated with an EAS alert shortly after 5 p.m. The message transmitted was for a severe thunderstorm warning issued by the National Weather Service in Pendleton, but the audio broadcast instead was a long period of silence followed by a few words in Spanish. Umatilla County Emergency Management stressed that there was no emergency at the depot. On September 3, 2016, in the wake of Tropical Storm Hermine, an alert was displayed on television calling for the immediate evacuation of the entirety of Suffolk County, abruptly ending with the incomplete sentence "This is an emergency message from". About 15 minutes after the original message was sent, the alert was re-issued with an addendum clarifying that it was actually calling for a voluntary evacuation of Fire Island—a barrier island of Long Island. Officials cited an error in the county's Code Red system: while the correct message was entered into the system, a fault in processing an abbreviated version of the message for television produced the truncated alert. On May 23, 2017, at around 8:55 p.m.
EDT, a Nuclear Power Plant Warning was issued for the Hope Creek and Salem Nuclear Power Plants, covering Salem and Cumberland counties in New Jersey. In a statement, the New Jersey State Police explained that it was a test message intended for a small group of emergency management personnel participating in the test; due to a coding error, the message was publicly broadcast. A similar error occurred again in July 2022. On August 15, 2017, at approximately 12:25 a.m. ChST, Guam stations KTWG and KSTO transmitted a civil danger warning for the island; Guam Homeland Security described the message, which interrupted programming on the stations and was received on television by some viewers, as an "unauthorized test" of the EAS. The incident's impact was magnified because North Korea had threatened the launch of ballistic missiles towards Guam only a few days beforehand. Numerous calls to 911 operators and the Department of Homeland Security were made following the broadcast. On January 13, 2018, at approximately 8:07 a.m. HST, the Hawaii Emergency Management Agency (HI-EMA) mistakenly issued an emergency alert warning of an inbound ballistic missile threatening the region, stating that it was not a drill. 38 minutes later, HI-EMA and the Honolulu Police Department announced that the alert was a false alarm. The incident came amidst heightened concern over the possibility that Hawaii could be targeted by North Korean missiles (in December 2017, Hawaii had tested its missile sirens for the first time since the Cold War). HI-EMA administrator Vern Miyagi stated that the incident was a "mistake made during a standard procedure at the change over of a shift". On August 31, 2022, amid wildfires, an immediate evacuation notice was mistakenly issued by the Los Angeles County Office of Emergency Management for Los Angeles, the Eastern North Pacific Ocean, and Point Conception to Guadalupe; the alert text listed "Eastern North Pacific Ocean" or "Eastern North Pacific" twelve times. The Ventura County Sheriff's Office stated that the alert had been issued in error.

Cybersecurity breaches
EAS equipment has been the subject of various cyberattacks, caused primarily by participants using insecure or factory-default passwords on their encoders and decoders, and by outdated software containing unpatched vulnerabilities. On multiple occasions, federal government departments have warned that failure to employ secure passwords and keep software updated leaves EAS equipment vulnerable to such attacks, which can result in disruptions such as false alerts. In February 2013, the EAS equipment of several stations in Great Falls, Montana and Marquette, Michigan was breached to play a false alert warning of a zombie apocalypse, using the lines "Civil authorities in your area have reported that the bodies of the dead are rising from the graves and attacking the living". The attack was identified as coming from an "overseas" source, and the affected broadcasters had neglected to change the factory-default logins and passwords on their equipment. In response, the FCC, FEMA, and equipment manufacturers, as well as trade groups including the Michigan Association of Broadcasters, urged broadcasters to change their passwords and recheck their security measures. In a related incident, WIZM-FM in La Crosse, Wisconsin accidentally triggered the EAS on television station WKBT-DT by airing a recording of the false message during its morning show.
The relayed audio included the hosts' reactions and laughter at the clip. On February 28, 2017, WZZY in Winchester, Indiana was hijacked in a nearly identical manner, playing the same "dead bodies" audio from the February 2013 incidents. The incident prompted a public response from the Randolph County Sheriff's Department clarifying that there was no actual emergency. In January 2020, Security Ledger published an investigation finding that at least 50 EAS decoders by Digital Alert Systems had not been patched for a security vulnerability (use of a shared SSH key) found by IOActive in 2013. On February 20, 2020, the EAS equipment of Washington-based provider Wave Broadband was hijacked, causing approximately 3,000 customers in Jefferson County to receive several false alerts (including a "Radiological Hazard Warning") containing irrelevant and comedic messages (including one suggesting that the provider change its passwords) and alert audio referencing internet memes and Twitch streamer Vinesauce (who was unaffiliated with the breach). On March 2 and 3, 2020, a legitimate Required Monthly Test was displayed with a message ("AIGHT IM DONE U CAN REST NOW. MR GERDE WAS HERE") that had also appeared in the hijack; a company official stated that this was a remnant of the attack that had not yet been removed.

Tone usage outside of alerts
To protect the integrity of the system and prevent false activations, the FCC prohibits the use of actual or simulated EAS/WEA tones and attention signals outside of genuine alerts, tests, or authorized public service announcements, especially when they are used "to capture audience attention during advertisements; dramatic, entertainment, and educational programs" (even if the footage is documentation of an event where an actual alert was issued). Broadcasters who misuse the tones may be sanctioned (including being required to undertake compliance measures) and fined. Tones from the EAS were used in the trailer for the 2013 film Olympus Has Fallen; cable providers were fined $1.9 million by the Federal Communications Commission (FCC) on March 3, 2014 for misuse of EAS tones. A similar event had occurred in November 2013, when TBS was fined $25,000 for the use of EAS tones in a Conan advertisement. During the October 24, 2014 episode of the syndicated radio show The Bobby Bones Show, host Bobby Bones played audio from the 2011 national test as part of a rant about a genuine test from Nashville's Fox affiliate, WZTV, that had interrupted Game 2 of the 2014 World Series on October 22. The errant Emergency Action Notification was relayed to some broadcasters and cable systems—particularly those not configured to reject EAN messages that did not match the current date. On May 19, 2015, iHeartMedia, which distributes the show and owns its flagship station WSIX-FM, was fined $1 million for the incident. The company was also ordered to implement a three-year compliance plan to avoid further incidents, including removing all EAS tones and similar-sounding noises from its audio production libraries.
From August 4 to 6, 2016, WTLV, the Tegna, Inc.-owned NBC affiliate in Jacksonville, Florida, aired an ad several times during NBC's primetime coverage of the 2016 Summer Olympics. Produced by the marketing department of the National Football League's Jacksonville Jaguars, the ad featured out-of-sequence EAS tones over Jaguars training camp footage and a voiceover noting "this is not a test, this is an emergency broadcast transmission...seek shelter immediately", along with the on-screen text "the storm is coming". The ad aired four times before station compliance authorities pulled it, after the local news industry blog FTVLive criticized the station for carrying it, especially during the peak of the Atlantic hurricane season. FTVLive's piece was noted by the FCC in its decision against WTLV, rendered on May 30, 2017, which fined the station $55,000 for carrying the offending Jaguars ad. The FCC issued several fines relating to EAS tone usage in August 2019, including ABC being fined $395,000 for using wireless emergency alert tones multiple times during a Jimmy Kimmel Live sketch, AMC Networks being fined $104,000 for using the tones in The Walking Dead episode "Omega", Discovery Inc. being fined $68,000 for including footage of an actual WEA activation during a Lone Star Law episode filmed during Hurricane Harvey, and Meruelo Group being fined $61,000 for including an EAS-like tone in a radio advertisement for KDAY and KDEY-FM's morning show. On September 9, 2019, the FCC proposed a $272,000 fine against CBS for using simulated EAS tones in the Young Sheldon episode "A Mother, A Child, and a Blue Man's Backside". CBS defended the usage, saying that it was a "dramatic portrayal" and "an integral part of the storyline about a family's visceral reaction to a life-threatening emergency". The show's sound editors had achieved the effect by downloading EAS tones from YouTube and modifying the volume of the tone. CBS passed the edited tone through three quality control rooms equipped with EAS decoders and pre-screened the episode to make sure it did not trigger an actual alert; the show's dialogue was also used to obscure some elements of the alert. However, the FCC insisted that the modified tone still sounded like a normal EAS tone, despite the volume being lowered and the tone being cut short in duration, and said that the pre-screening process did not excuse an unauthorized usage of the EAS tones. On April 7, 2020, the FCC proposed a $20,000 fine against New York City radio station WNEW-FM for using the attention signal during its morning show on October 3, 2018 as part of a skit discussing the National Periodic Test held later that day. In January 2023, the FCC proposed a $504,000 fine against Fox Corporation for using EAS tones during a promo broadcast on Fox NFL Sunday in November 2021. On October 17, 2024, the FCC proposed a $146,976 fine against ESPN for misusing the EAS tones during a promotional segment for the start of the 2023–24 NBA season that aired during the week of October 20–24, 2023. Conversely, in 2013 the FCC granted a one-year waiver for a PSA pertaining to the Wireless Emergency Alerts system, with assurance that the tones used in the PSA contained a different set of codes designed not to activate EAS receivers.

Testing errors
On October 19, 2008, KWVE-FM in San Clemente, California accidentally initiated a Required Monthly Test when it meant to conduct a Required Weekly Test.
Furthermore, an operator aborted the test mid-way through the broadcast (failing to broadcast the end-of-message tone), causing all area outlets to broadcast KWVE-FM's programming until those stations took their equipment offline. On September 15, 2009, the FCC fined the station's owner, Calvary Chapel Costa Mesa, $5,000. After the fine was levied, various state broadcast associations in the United States submitted joint letters to the FCC protesting the fine, saying that the commission could have handled the matter better. On November 13, 2009, the FCC rescinded its fine against KWVE-FM, but still admonished the station for broadcasting an unauthorized RMT and for omitting the code to end the test. On September 28, 2016, an emergency alert was broadcast by WKTV in Utica, New York that contained a Hazardous Materials Warning for the entire United States. The message contained a non-sequitur quote from the Dr. Seuss book Green Eggs and Ham: "Would you. Could you. On a train?" WKTV apologized and stated that the alert was "an automated test [from FEMA] which was not intended for public display." A FEMA representative stated that its decoders had been mistakenly "configured to poll a test and development message aggregator instead of or in addition to the production message aggregator", with the test server having used the Green Eggs and Ham quote as placeholder text. The error also fed conspiracy theories that claimed the alert was a forewarning of a train crash that occurred in New Jersey the next day. On September 21, 2017, a technical glitch in another scheduled test by KWVE caused the end-of-message tone to be omitted, causing regional participants (particularly Charter and Cox cable systems in Orange County) to simulcast a portion of Chuck Swindoll's Insight for Living program. Some viewers speculated that the system had been hijacked, as the portion of the program relayed (in which Swindoll discussed the Bible verse 2 Timothy 3:1 and stated, "Realize this, extremely violent times will come.") could, out of context, be taken as discussing an impending apocalypse.

See also
Alert Ready (Canada)
Cell Broadcast
Digital Emergency Alert System (DEAS)
Earthquake Early Warning (Japan)
Emergency population warning
Emergency Public Warning System
Flash Flood Guidance Systems
HANDEL (UK's former National Attack Warning System)
ICANN's TEAC (Transfer Emergency Action Contact) channel in cases of URL hijacking
J-Alert (Japan)
Local Access Alert
Mexican Seismic Alert System (Mexico's Earthquake Early Warning System, which also employs Specific Area Message Encoding technology)
National Severe Weather Warning Service
National Warning System
NOAA Weather Radio
Nuclear football
Nuclear MASINT
Radio Amateur Civil Emergency Service
ShakeAlert
Specific Area Message Encoding
Standard Emergency Warning Signal (Australia)
Wartime Broadcasting Service
Weatheradio Canada
Wireless Emergency Alerts (WEA)
Emergency Alert System
[ "Chemistry", "Technology" ]
7,538
[ "United States warning systems", "Emergency population warning systems", "Nuclear warfare", "Warning systems", "Radioactivity" ]
175,519
https://en.wikipedia.org/wiki/Escheat
Escheat (from the Latin ex-cadere, "to fall away") is a common law doctrine that transfers the real property of a person who has died without heirs to the crown or state. It serves to ensure that property is not left in "limbo" without recognized ownership. It originally applied to a number of situations where a legal interest in land was destroyed by operation of law, so that the ownership of the land reverted to the immediately superior feudal lord.

Etymology
The term "escheat" derives ultimately from the Latin ex-cadere, to "fall out", via mediaeval French escheoir. The sense is of a feudal estate in land falling out of the possession of a tenant and into the possession of the lord.

Origins in feudalism
In feudal England, escheat referred to the situation where the tenant of a fee (or "fief") died without an heir or committed a felony. In the case of such a demise of a tenant-in-chief, the fee reverted to the King's demesne permanently, whereupon it became once again a mere tenantless plot of land, but could be re-created as a fee by enfeoffment to another of the king's followers. Where the deceased had been subinfeudated by a tenant-in-chief, the fee reverted temporarily to the crown for one year and one day by right of primer seisin, after which it escheated to the over-lord who had granted it to the deceased by enfeoffment. From the time of Henry III, the monarchy took particular interest in escheat as a source of revenue.

Background
At the Norman Conquest of England in 1066, all the land of England was claimed as the personal possession of William the Conqueror under allodial title. The monarch thus became the sole "owner" of all the land in the kingdom, a position which persists to the present day. He then granted it out to his favoured followers, who thereby became tenants-in-chief, under various contracts of feudal land tenure. Such tenures, even the highest one of "feudal barony", never conferred ownership of land but merely ownership of rights over it, that is to say ownership of an estate in land. Such persons are therefore correctly termed "land-holders" or "tenants" (from Latin teneo, "to hold"), not owners. If held freely, that is to say by freehold, such holdings were heritable by the holder's legal heir. On the payment of a premium termed feudal relief to the treasury, such an heir was entitled to demand re-enfeoffment by the king with the fee concerned. Where no legal heir existed, the logic of the situation was that the fief had ceased to exist as a legal entity, since, being tenantless, no one was living who had been enfeoffed with the land; the land was thus technically owned by either the crown or the immediate overlord (where the fee had been subinfeudated by the tenant-in-chief to a mesne lord, and perhaps the process of subinfeudation had been continued by a lower series of mesne lords) as ultimus haeres. Logically, therefore, it was in the occupation of the crown alone, that is to say in the royal demesne. This was the basic operation of an escheat (ex-cadere), a failure of heirs. Escheat could also take place if a tenant was outlawed or convicted of a felony, when the King could exercise the ancient right of wasting the criminal's land for a year and a day, after which the land would revert to the overlord. (However, one guilty of treason, rather than mere felony, forfeited all lands to the King. John and his heirs frequently insisted on seizing as terrae Normannorum (i.e.
"lands of the Normans") the English lands of those lords with holdings in Normandy who preferred to be Normans rather than Englishmen, when the victories of Philip II of France forced them to make a proclamation of allegiance to France.) Since disavowal of a feudal bond was a felony, lords could escheat land from those who refused to perform their feudal services. On the other hand, there were also tenants who were merely sluggish in performing their duties, while not being outright rebellious against the lord. Remedies in the courts against this sort of thing, even in Bracton's day, were available, but were considered laborious and were frequently ineffectual in compelling the desired performance. The commonest mechanism was distraint, also known as distress (districtio), whereby the lord would seize chattels or goods belonging to the tenant, to hold until performance was achieved. This practice had been addressed in the 1267 Statute of Marlborough. Even so, it remained the most common extrajudicial method applied by overlords at the time of Quia Emptores. Thus, under English common law, there were two main ways an escheat could happen: A person's lands escheated to the immediate overlord if he was convicted of a felony (but not treason, in that event the land was forfeited to the Crown). If the person was executed for felony, his heirs were attainted, i.e. were ineligible to inherit. In most common-law jurisdictions, this type of escheat has been abolished outright, for example in the United States under Article 3 § 3 of the United States Constitution, which states that attainders for treason do not give rise to posthumous forfeiture, or "corruption of blood". If a person had no heir to receive his lands under his will, or under the laws of intestacy, then any land he owned at death would escheat. This rule has been replaced in most common-law jurisdictions by bona vacantia or a similar concept. Procedure From the 12th century onward, the Crown appointed escheators to manage escheats and report to the Exchequer, with one escheator per county established by the middle of the 14th century. Upon the death of a tenant-in-chief, the escheator would be instructed by a writ of diem clausit extremum ("he has closed his last day", i.e. he is dead) issued by the king's chancery, to empanel a jury to hold an "inquisition post mortem" to ascertain who the legal heir was, if any, and what was the extent of the land held. Thus it would be revealed whether the king had any rights to the land. It was also important for the king to know who the heir was, and to assess his personal qualities, since he would thenceforth form a constituent part of the royal army, if he held under military tenure. If there was any doubt, the escheator would seize the land and refer the case to the king's court where it would be settled, ensuring that not one day's revenue would be lost. This would be a source of concern with land-holders when there were delays from the court. Current operation Most common-law jurisdictions have abolished the concept of feudal land tenure of property, and so the concept of escheat has lost something of its meaning. In England and Wales, the possibility of escheat of a deceased person's property to the feudal overlord was abolished by the Administration of Estates Act 1925; however, the concept of bona vacantia means that the Crown (or Duchy of Cornwall or Duchy of Lancaster) can still receive such property if no-one else can be found who is eligible to inherit it. 
The term is now often applied to the transfer of title to a person's property to the state when the person dies intestate without any other person capable of taking the property as heir. For example, a common-law jurisdiction's intestacy statute might provide that when someone dies without a will, and is not survived by a spouse, descendants, parents, grandparents, descendants of parents, children or grandchildren of grandparents, or great-grandchildren of grandparents, then the person's estate will escheat to the state. Similarly, under Napoleonic law, if someone dies intestate without natural heirs then, after all creditors are paid, any remaining real and personal goods are inherited by the State. In some jurisdictions, escheat can also occur when an entity, typically a bank, credit union or other financial institution, holds money or property which appears to be unclaimed, for instance because a cash account has seen no deposits, withdrawals or other transactions for a lengthy period. In many jurisdictions, if the owner cannot be located, such property can be revocably escheated to the state. In commerce, it is the process of reassigning legal title in unclaimed or abandoned payroll checks, insurance payouts, or stocks and shares whose owners cannot be traced, to a state authority (in the United States). A company is required to file unclaimed property reports with its state annually and, in some jurisdictions, to make a good-faith effort to find the owners of its dormant accounts. The escheating criteria are set by individual state regulations.

England and Wales

Bankruptcies and liquidations
Escheat can still occur in England and Wales, if a person is made bankrupt or a corporation is liquidated. Usually this means that all the property held by that person is "vested in" (transferred to) the Official Receiver or Trustee in Bankruptcy. However, it is open to the Receiver or Trustee to refuse to accept that property by disclaiming it. It is relatively common for a trustee in bankruptcy to disclaim freehold property which may give rise to a liability: for example, the common parts of a block of flats owned by the bankrupt would ordinarily pass to the trustee to be realised in order to pay his debts, but the property may impose on the landlord an obligation to spend money for the benefit of lessees of the flats. The bankruptcy of the original owner means that the freehold is no longer the bankrupt's legal property, and the disclaimer destroys the freehold estate, so that the land ceases to be owned by anyone and effectively escheats to become land held by the Crown in demesne. This situation affects a few hundred properties each year. Although such escheated property is owned by the Crown, it is not part of the Crown Estate, unless the Crown (through the Crown Estate Commissioners) "completes" the escheat by taking steps to exert rights as owner. Usually, however, in the example given above, the tenants of the flats or their mortgagees would exercise their rights under the Insolvency Act 1986 to have the freehold property transferred to them. This is the main difference between escheat and bona vacantia: in the latter, a grant takes place automatically, with no need to "complete" the transaction.

Registration of Crown land
One consequence of the Land Registration Act 1925 was that only estates in land (freehold or leasehold) could be registered.
Crown land, i.e., land held directly by the Crown – also known as property in the royal demesne – is not held under any residual feudal tenure (the Crown has no historical overlord other than, for brief periods, the papacy), and there is therefore no estate to register. This had the consequence that freeholds which escheated to the Crown ceased to be registrable, creating a slow drain of property out of registration amounting to some hundreds of freehold titles each year. The problem was noted by the Law Commission in their report "Land Registration for the Twenty-First Century". The Land Registration Act 2002, passed in response to that report, provides that land held in demesne by the Crown may be registered.

United States

Transfer agents and escheatment
Escheatment is the process of returning lost or unclaimed property to the government of a state, for safekeeping until the owner is identified. The geographic jurisdiction of the state is determined by the last known address of the original owner. Each state has laws regulating escheatment, with holding periods typically around five years. The legal principle behind escheatment is that all property has a legally recognized owner; therefore, if the original owner cannot be found within a specified time, the government is presumed to be the owner. Escheats are performed on a revocable basis: if property has escheated to a state but the original owner is subsequently found, the escheat is revoked and ownership of the property reverts to that original owner.

Lost shareholders
According to SEC Rule 17 CFR 240.17f-1, transfer agents are obligated to report to the commission (specifically to its designee, the SEC's Securities Information System) any time a certificate is known to be lost or missing for at least two days. Transfer agents must search for the holder's SSN or EIN using an information database system or, if one is not available, exercise their best efforts to match the holder's name and address through these systems. All transfer agents must report all lost or missing certificates/shareholders in their own annual filings.

See also
Bona vacantia
Breakage
Doctrine of lapse
History of the English fiscal system
Intestacy
Quia Emptores
Escheat
[ "Physics" ]
2,848
[ "Spacetime", "Physical quantities", "Time in government", "Time" ]
175,549
https://en.wikipedia.org/wiki/Dilbert%20principle
The Dilbert principle is a satirical concept of management developed by Scott Adams, creator of the comic strip Dilbert, which states that companies tend to promote incompetent employees to management to minimize their ability to harm productivity. The Dilbert principle is inspired by the Peter principle, which holds that employees are promoted based on success until they attain their "level of incompetence" and are no longer successful. By the Dilbert principle, employees who were never competent are promoted to management to limit the damage they can do. Adams first explained the principle in a 1995 Wall Street Journal article, and elaborated upon it in his humorous 1996 book The Dilbert Principle.

Definition
In the Dilbert comic strip of February 5, 1995, Dogbert says that "leadership is nature's way of removing morons from the productive flow". Adams himself explained:
I wrote The Dilbert Principle around the concept that in many cases the least competent, least smart people are promoted, simply because they're the ones you don't want doing actual work. You want them ordering the doughnuts and yelling at people for not doing their assignments—you know, the easy work. Your heart surgeons and your computer programmers—your smart people—aren't in management. That principle was literally happening everywhere.
Adams explained the principle in a 1995 Wall Street Journal article, and then elaborated on it in his 1996 book The Dilbert Principle, which is required or recommended reading in some management and business programs. In the book, Adams writes that, in terms of effectiveness, use of the Dilbert principle is akin to a band of gorillas choosing an alpha-squirrel to manage them by an incredibly convoluted process. The book has sold more than a million copies and was on the New York Times bestseller list for 43 weeks.

Comparative principles
The Dilbert principle can be compared to the Peter principle. As opposed to the Dilbert principle, the Peter principle assumes that people are promoted because they are competent, and that the tasks higher in the hierarchy require skills or talents they do not possess. It concludes that, because of this, a competent employee will eventually be promoted to, and then likely remain at, a job at which he or she is incompetent. In his book The Peter Principle, Laurence J. Peter explains "percussive sublimation", the act of "kicking a person upstairs" (i.e., promoting him to management) to reduce his interference with productive employees. The Dilbert principle, by contrast, assumes that hierarchy serves mainly as a means of removing the incompetent to "higher" positions where they will be unable to damage the workflow, on the view that the upper echelons of an organization have little relevance to its actual production and that the majority of real, productive work in a company is done by people who rank lower. Unlike under the Peter principle, the promoted individuals were not particularly good at any job they previously held, so awarding them a supervisory position is a way to remove them from the workforce without actually dismissing them, rather than a reward for meritorious service. An earlier formulation of this effect was known as Putt's Law (1981), credited to the pseudonymous author Archibald Putt: "Technology is dominated by two types of people, those who understand what they do not manage and those who manage what they do not understand."
Dilbert principle
[ "Biology" ]
727
[ "Incompetence", "Behavior", "Human behavior" ]
175,560
https://en.wikipedia.org/wiki/Ciphertext
In cryptography, ciphertext or cyphertext is the result of encryption performed on plaintext using an algorithm, called a cipher. Ciphertext is also known as encrypted or encoded information because it contains a form of the original plaintext that is unreadable by a human or computer without the proper cipher to decrypt it. This process helps prevent the loss of sensitive information to hacking. Decryption, the inverse of encryption, is the process of turning ciphertext into readable plaintext. Ciphertext is not to be confused with codetext, because the latter is the result of a code, not a cipher.

Conceptual underpinnings
Let m be the plaintext message that Alice wants to secretly transmit to Bob, and let E_k be the encryption cipher, where k is a cryptographic key. Alice must first transform the plaintext into ciphertext, c, in order to securely send the message to Bob, as follows:

c = E_k(m)

In a symmetric-key system, Bob knows Alice's encryption key. Once the message is encrypted, Alice can safely transmit it to Bob (assuming no one else knows the key). In order to read Alice's message, Bob must decrypt the ciphertext using the decryption cipher, D_k:

m = D_k(c)

Alternatively, in a non-symmetric key system, everyone, not just Alice and Bob, knows the encryption key; but the decryption key cannot be inferred from the encryption key. Only Bob knows the decryption key, d, and decryption proceeds as

m = D_d(c)

Types of ciphers
The history of cryptography began thousands of years ago. Cryptography uses a variety of different types of encryption. Earlier algorithms were performed by hand and are substantially different from modern algorithms, which are generally executed by a machine.

Historical ciphers
Historical pen-and-paper ciphers are sometimes known as classical ciphers. They include:
Substitution cipher: the units of plaintext are replaced with ciphertext (e.g., Caesar cipher and one-time pad)
Polyalphabetic substitution cipher: a substitution cipher using multiple substitution alphabets (e.g., Vigenère cipher and Enigma machine)
Polygraphic substitution cipher: the unit of substitution is a sequence of two or more letters rather than just one (e.g., Playfair cipher)
Transposition cipher: the ciphertext is a permutation of the plaintext (e.g., rail fence cipher)
Historical ciphers are not generally used as a standalone encryption technique because they are quite easy to crack. Many of the classical ciphers, with the exception of the one-time pad, can be cracked using brute force.

Modern ciphers
Modern ciphers are more secure than classical ciphers and are designed to withstand a wide range of attacks. An attacker should not be able to find the key used in a modern cipher, even if they know any specifics about the plaintext and its corresponding ciphertext. Modern encryption methods can be divided into the following categories:
Private-key cryptography (symmetric key algorithm): the same shared key is used for encryption and decryption
Public-key cryptography (asymmetric key algorithm): two different keys are used for encryption and decryption
In a symmetric key algorithm (e.g., DES, AES), the sender and receiver have a shared key established in advance: the sender uses the shared key to perform encryption; the receiver uses the shared key to perform decryption. Symmetric key algorithms can be either block ciphers or stream ciphers. Block ciphers operate on fixed-length groups of bits, called blocks, with an unvarying transformation.
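As a toy illustration of the symmetric round trip c = E_k(m) and m = D_k(c) defined above (not a secure cipher; the key and message are invented examples), a few lines of Python using XOR, which is its own inverse:

```python
# Toy illustration of the symmetric scheme: c = E_k(m) and m = D_k(c),
# with encryption and decryption sharing the same key k.
# XOR with a short repeating key is NOT secure; it only shows the round trip.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key, repeating the key as needed.
    Because XOR is self-inverse, the same function serves as E_k and D_k."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"k3y"                   # the shared secret k (made-up example)
plaintext = b"ATTACK AT DAWN"  # the message m

ciphertext = xor_cipher(plaintext, key)  # c = E_k(m)
recovered = xor_cipher(ciphertext, key)  # m = D_k(c)

print(ciphertext.hex())
assert recovered == plaintext
```

When the key is truly random, at least as long as the message, and never reused, this XOR construction is exactly the one-time pad mentioned earlier; with a short repeating key, as here, it is trivially breakable. Applying the key byte by byte also makes the sketch behave like a primitive stream cipher, the class described next.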
Stream ciphers encrypt plaintext digits one at a time on a continuous stream of data, with the transformation of successive digits varying during the encryption process. In an asymmetric key algorithm (e.g., RSA), there are two different keys: a public key and a private key. The public key is published, thereby allowing any sender to perform encryption. The private key is kept secret by the receiver, thereby allowing only the receiver to correctly perform decryption.

Cryptanalysis
Cryptanalysis (also referred to as codebreaking or cracking the code) is the study of applying various methodologies to obtain the meaning of encrypted information without having access to the cipher required to correctly decrypt the information. This typically involves gaining an understanding of the system design and determining the cipher. Cryptanalysts can follow one or more attack models to crack a cipher, depending upon what information is available and the type of cipher being analyzed. Ciphertext is generally the most easily obtained part of a cryptosystem and therefore is an important part of cryptanalysis.

Attack models
Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or code texts. This is the weakest attack model because the cryptanalyst has limited information; modern ciphers rarely fail under this attack.
Known-plaintext: the attacker has a set of ciphertexts for which they know the corresponding plaintext.
Chosen-plaintext attack: the attacker can obtain the ciphertexts corresponding to an arbitrary set of plaintexts of their own choosing.
Batch chosen-plaintext attack: the cryptanalyst chooses all plaintexts before any of them are encrypted. This is often the meaning of an unqualified use of "chosen-plaintext attack".
Adaptive chosen-plaintext attack: the cryptanalyst makes a series of interactive queries, choosing subsequent plaintexts based on the information from the previous encryptions.
Chosen-ciphertext attack: the attacker can obtain the plaintexts corresponding to an arbitrary set of ciphertexts of their own choosing.
Adaptive chosen-ciphertext attack
Indifferent chosen-ciphertext attack
Related-key attack: similar to a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known (e.g., two keys that differ in one bit).

Famous ciphertexts
The Babington Plot ciphers
The Shugborough inscription
The Zimmermann Telegram
The Magic Words are Squeamish Ossifrage
The cryptogram in "The Gold-Bug"
Beale ciphers
Kryptos
Zodiac Killer ciphers

See also
Books on cryptography
Cryptographic hash function
Frequency analysis
RED/BLACK concept
Undeciphered historical codes and ciphers
Ciphertext
[ "Mathematics", "Engineering" ]
1,382
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
175,571
https://en.wikipedia.org/wiki/List%20of%20amusement%20rides
Amusement rides, sometimes called carnival rides, are mechanical devices or structures that move people to create fun and enjoyment. Rides are often perceived as scarier or more dangerous than they actually are. This may be due to their design, to riders' acrophobia, or to hearing about accidents involving similar rides. For some, the adrenaline associated with riding amusement rides is part of the experience. Rides are common at most annual events such as fairs, traveling carnivals, and circuses around the world; music festivals and concerts sometimes also host amusement park rides.

Types of rides
Flat rides are usually those that move their passengers on a plane generally parallel to the ground, such as rides that spin around a vertical axis, like carousels and twists, and ground-level rides such as bumper cars. In gravity rides, gravity is responsible for all or some of the movement, and any vertical movement is not about a fixed point; examples include roller coasters, water slides and drop towers. Vertical rides usually move their passengers in a vertical plane and around a fixed point, such as Ferris wheels, the Enterprise and the Skydiver. Since the 2010s, some rides have employed virtual reality.

Specific themes
Dark ride
Bumper cars
Haunted house
Funhouse
Pendulum ride
Rollercoaster
Ferris wheel

List of amusement rides
List of amusement rides
[ "Physics", "Technology" ]
511
[ "Physical systems", "Machines", "Amusement rides" ]
175,590
https://en.wikipedia.org/wiki/Moral%20hazard
In economics, a moral hazard is a situation where an economic actor has an incentive to increase its exposure to risk because it does not bear the full costs associated with that risk, should things go wrong. For example, when a corporation is insured, it may take on higher risk knowing that its insurance will pay the associated costs. A moral hazard may occur where the actions of the risk-taking party change to the detriment of the cost-bearing party after a financial transaction has taken place. Moral hazard can occur under a type of information asymmetry where the risk-taking party to a transaction knows more about its intentions than the party paying the consequences of the risk, and has a tendency or incentive to take on too much risk from the perspective of the party with less information. One example is the principal–agent relationship (studied in agency theory), where one party, called the agent, acts on behalf of another party, called the principal. A principal–agent problem can occur when there is a conflict of interest between the agent and the principal: if the agent has more information about his or her actions or intentions than the principal, the agent may have an incentive to act too riskily (from the viewpoint of the principal) when their interests are not aligned.

History
According to research by Dembe and Boden, the term dates back to the 17th century and was widely used by English insurance companies by the late 19th century. Early usage of the term carried negative connotations, implying fraud or immoral behavior (usually on the part of an insured party). Dembe and Boden point out, however, that prominent mathematicians who studied decision-making in the 18th century used "moral" to mean "subjective", which may cloud the true ethical significance of the term. The concept of moral hazard was the subject of renewed study by economists in the 1960s, beginning with economist Ken Arrow, and did not imply immoral behavior or fraud. Economists use the term to describe inefficiencies that can occur when risks are displaced or cannot be fully evaluated, rather than to describe the ethics or morals of the involved parties. Rowell and Connelly offer a detailed description of the genesis of the term moral hazard, identifying salient changes in economic thought within the medieval theological and probability literature. Because economics and philosophy take different approaches to interpreting the concept of "moral hazard", they differ significantly in their understanding of its underlying causes. In economics, "moral hazard" is often attributed to the malignant development of utilitarianism. In contrast, philosophy and ethics view "moral hazard" from a broader perspective that includes the moral behaviour of individuals and society as a whole, locating its root cause in the socially immoral behaviour of economic agents. Rowell and Connelly's paper also compares and contrasts the predominantly normative conception of moral hazard found within the insurance-industry literature with the largely positive interpretations found within the economic literature. Often, what is described as "moral hazard[s]" in the insurance literature is, upon closer reading, a description of the closely related concept of adverse selection.

Finance
In 1998, William J. McDonough, head of the New York Federal Reserve, helped the counterparties of Long-Term Capital Management avoid losses by arranging a takeover of the firm.
This move was criticized by former Fed Chair Paul Volcker and others as increasing moral hazard. Tyler Cowen concludes that "creditors came to believe that their loans to unsound financial institutions would be made good by the Fed – as long as the collapse of those institutions would threaten the global credit system." Fed Chair Alan Greenspan, while conceding the risk of moral hazard, defended the policy of orderly unwinding Long-Term Capital, saying that the world economy was at stake. Greenspan had himself been accused of creating wider moral hazard in markets by using the Greenspan put. Economist Paul Krugman described moral hazard as "any situation in which one person makes the decision about how much risk to take, while someone else bears the cost if things go badly." Financial bailouts of lending institutions by governments, central banks or other institutions can encourage risky lending in the future if those that take the risks come to believe that they will not have to carry the full burden of potential losses. Lending institutions need to take risks by making loans, and the riskiest loans usually have the potential for making the highest return. Taxpayers, depositors and other creditors often have to shoulder at least part of the burden of risky financial decisions made by lending institutions. Many have argued that certain types of mortgage securitization contribute to moral hazard. Mortgage securitization enables mortgage originators to pass on the risk that the mortgages they originate might default, rather than holding the mortgages on their balance sheets and assuming the risk themselves. In one kind of mortgage securitization, known as "agency securitizations," default risk is retained by the securitizing agency that buys the mortgages from originators. These agencies thus have an incentive to monitor originators and check loan quality. "Agency securitizations" refer to securitizations by either Ginnie Mae, a government agency, or by Fannie Mae and Freddie Mac, both for-profit government-sponsored enterprises. They are similar to the "covered bonds" that are commonly used in Western Europe in that the securitizing agency retains default risk. Under both models, investors take on only interest-rate risk, not default risk. In another type of securitization, known as "private label" securitization, default risk is generally not retained by the securitizing entity. Instead, the securitizing entity passes on default risk to investors. The securitizing entity, therefore, has relatively little incentive to monitor originators and maintain loan quality. "Private label" securitization refers to securitizations structured by financial institutions such as investment banks, commercial banks, and non-bank mortgage lenders. During the years leading up to the subprime mortgage crisis, private label securitizers grew their share of overall mortgage securitization by purchasing and securitizing low-quality, high-risk mortgages. Agency securitizations appear to have somewhat lowered their standards, but agency mortgages remained considerably safer than mortgages in private-label securitizations and performed far better in terms of default rates. Economist Mark Zandi of Moody's Analytics described moral hazard as a root cause of the subprime mortgage crisis. He wrote that "the risks inherent in mortgage lending became so widely dispersed that no one was forced to worry about the quality of any single loan.
As shaky mortgages were combined, diluting any problems into a larger pool, the incentive for responsibility was undermined." He also wrote, "Finance companies weren't subject to the same regulatory oversight as banks. Taxpayers weren't on the hook if they went belly up [pre-crisis], only their shareholders and other creditors were. Finance companies thus had little to discourage them from growing as aggressively as possible, even if that meant lowering or winking at traditional lending standards." Moral hazard can also occur with borrowers. Borrowers may not act prudently (in the view of the lender), investing or spending funds recklessly. For example, credit card companies often limit the amount borrowers can spend with their cards because without such limits, borrowers may spend borrowed funds recklessly, leading to default. Securitization of mortgages in America started in 1983 at Salomon Brothers, where the risk of each mortgage passed to the next purchaser instead of remaining with the original mortgaging institution. These mortgages and other debt instruments were put into a large pool of debt, and then shares in the pool were sold to many creditors. Thus, there is no one person responsible for verifying that any one particular loan is sound, that the assets securing that one particular loan are worth what they are supposed to be worth, that the borrower responsible for making payments on the loan can read and write the language in which the papers that he/she signed were written, or even that the paperwork exists and is in good order. It has been suggested that this may have caused the subprime mortgage crisis. Brokers, who were not lending their own money, pushed risk onto the lenders. Lenders, who sold mortgages soon after underwriting them, pushed risk onto investors. Investment banks bought mortgages and chopped the resulting mortgage-backed securities into slices, some riskier than others. Investors bought securities and hedged against the risk of default and prepayment, pushing those risks further along. In a purely capitalist scenario, the last one holding the risk (like a game of musical chairs) is the one who faces the potential losses. In the sub-prime crisis, however, national credit authorities (the Federal Reserve in the US) assumed the ultimate risk on behalf of the citizenry at large. Others believe that financial bailouts of lending institutions do not encourage risky lending behavior, since there is no guarantee to lending institutions that a bailout will occur. On this view, the decreased valuation of a corporation before any bailout would deter risky, speculative business decisions by executives who fail to conduct proper due diligence in their business transactions. The risk and the burdens of loss became apparent to Lehman Brothers, which did not benefit from a bailout, and to other financial institutions and mortgage companies such as Citibank and Countrywide Financial Corporation, whose valuations plunged during the subprime mortgage crisis.

Incentives to moral hazard in accounting rules

A 2017 report by the Basel Committee on Banking Supervision, an international regulator for the banking sector, noted that the accounting rules (IFRS 9 and IFRS 13 in particular) leave entities significant discretion in determining financial instrument fair value, and identified this discretion as a potential source of moral hazard: "The evidence consistent with accounting discretion as contributing to moral hazard behavior indicates that (additional) prudential valuation requirements may be justified."
Banking regulators have taken actions to limit discretion and reduce valuation risk, i.e. the risk to banks' balance sheets arising from financial instrument valuation uncertainties. A series of regulatory documents has been issued, providing detailed prudential requirements that have many points of contact with the accounting rules and that indirectly curb the incentives for moral hazard by limiting the discretion left to banks in valuing financial instruments.

Connection to financial crisis of 2007–08

Many scholars and journalists have argued that moral hazard played a role in the 2008 financial crisis, since numerous actors in the financial market may have had an incentive to increase their exposure to risk. In general, there are three ways in which moral hazard may have manifested itself in the lead-up to the financial crisis. First, asset managers may have had an incentive to take on more risk when managing other people's money, particularly if they were paid as a percentage of the fund's profits. If they took on more risk, they could expect a higher payoff for themselves and were somewhat shielded from losses because they were spending other people's money. Therefore, asset managers may have been in a situation of moral hazard, where they would take on more risk than appropriate for a given client because they did not bear the cost of failure. Second, mortgage loan originators, such as Washington Mutual, may have had an incentive to understate the risk of loans they originated because the loans were often sold to mortgage pools (see mortgage-backed securities). Because loan originators were paid on a per-mortgage basis, they had an incentive to produce as many mortgages as possible, even if they were risky. Because these institutions did not expect to hold on to the loans until maturity, they could pass on the risk to the buyer of the loans. Therefore, mortgage loan originators may have been in a situation of moral hazard, because they did not bear the costs of the risky mortgages they were underwriting. Third, large banks may have believed they were "too big to fail." That is, because these banks were so ingrained in the US economy, the federal government would not have allowed them to fail, in order to prevent a full-scale economic crash. This belief may have been shaped by the 1998 bailout of Long-Term Capital Management. "Too big to fail" banks may have believed they were essentially invincible to failure, putting them in a position of moral hazard: they could take on big risks – thus increasing their expected payoff – thinking that the federal government would bail them out in the event of a major failure. Therefore, large banks may have been in a situation of moral hazard, because they did not bear the costs of a catastrophic collapse. Notably, the Financial Crisis Inquiry Commission (FCIC), tasked by Congress with investigating the causes of the financial crisis, cited moral hazard as a component of the crisis, arguing that many factors – including deregulation of the derivatives market in 2000, reduced federal oversight, and the potential for government bailouts of "too big to fail" institutions – played a role in increasing moral hazard in the years leading up to the collapse. Others have argued that moral hazard could not have played a role in the financial crisis, for three main reasons. First, in the event of a catastrophic failure, a government bailout would only come after major losses for the company.
So even if a bailout was expected, it would not prevent the firm from taking losses. Second, there is some evidence that big banks were not expecting the crisis and thus were not expecting government bailouts, though the FCIC tried hard to contest this idea. Third, some have argued that negative externalities from corporate governance were a more important cause, since some risky investments may have had positive expected payoff for the firm but negative expected payoff to society.

Insurance industry

Moral hazard has been studied by insurers and academics, such as in the work of Kenneth Arrow, Tom Baker, and John Nyman. The name comes originally from the insurance industry. Insurance companies worried that protecting their clients from risks (like fire, or car accidents) might encourage those clients to behave in riskier ways (like smoking in bed or not wearing seatbelts). This problem may inefficiently discourage those companies from protecting their clients as much as the clients would like to be protected. Economists argue that the inefficiency results from information asymmetry. If insurance companies could perfectly observe the actions of their clients, they could deny coverage to clients choosing risky actions (like smoking in bed or not wearing seat belts), allowing them to provide thorough protection against risk (fire or accidents) without encouraging risky behavior. However, since insurance companies cannot perfectly observe their clients' actions, they are discouraged from providing the amount of protection that would be provided in a world with perfect information. Economists distinguish moral hazard from adverse selection, another problem that arises in the insurance industry, which is caused by hidden information rather than hidden actions. The same underlying problem of non-observable actions also affects other contexts besides the insurance industry. It also arises in banking and finance: if a financial institution knows it is protected by a lender of last resort, it may make riskier investments than it would in the absence of the protection. In insurance markets, moral hazard occurs when the behavior of the insured party changes in a way that raises costs for the insurer, since the insured party no longer bears the full costs of that behavior. For example, because individuals with medical insurance no longer bear the full cost of medical services, they have an added incentive to ask for pricier and more elaborate medical services that would otherwise not be necessary. In those instances, individuals have an incentive to overconsume, simply because they no longer bear the full cost of medical services. Two types of behavior can change. One type is the risky behavior itself, resulting in ex ante (before-the-event) moral hazard. Insured parties then behave in a riskier manner, resulting in more negative consequences that the insurer must pay for. For example, after purchasing automobile insurance, some may tend to be less careful about locking the automobile or choose to drive more, thereby increasing the risk of theft or an accident for the insurer. After purchasing fire insurance, some may tend to be less careful about preventing fires (say, by smoking in bed or neglecting to replace the batteries in fire alarms). A further example has been identified in flood risk management, where it has been proposed that the possession of insurance undermines efforts to encourage people to integrate flood protection and resilience measures into properties exposed to flooding.
A second type of behavior that may change is the reaction to the negative consequences of risk once they have occurred and insurance is provided to cover their costs. That may be called ex post (after-the-event) moral hazard. Insured parties then do not behave more riskily, but they ask an insurer to pay for more of the negative consequences of risk as insurance coverage increases. For example, without medical insurance, some may forgo medical treatment due to its costs and simply deal with substandard health. However, after medical insurance becomes available, some may ask an insurance provider to pay for the cost of medical treatment that would not have occurred otherwise. Sometimes moral hazard is so severe that it makes insurance policies impossible. Coinsurance, co-payments, and deductibles reduce the risk of moral hazard by increasing consumers' out-of-pocket spending, which decreases their incentive to consume. For example, by requiring individuals to pay a portion of their health care costs through coinsurance, copayments, or deductibles, insurance providers can give people an incentive to consume less health care and avoid making unnecessary claims. This can help reduce moral hazard by aligning the interests of the insured and the insurer.

Numerical example

Consider a potential case of moral hazard in the health care market caused by the purchase of health insurance. Assume health care has a constant marginal cost of $10 per unit and the individual's demand is given by Q = 20 − P. Assuming a perfectly competitive market, at equilibrium the price will be $10 per unit and the individual will consume 10 units of health care. Now consider the same individual with health insurance, and assume this insurance makes health care free for the individual. In this case, the individual faces a price of $0 for health care and thus will consume 20 units. The price will still be $10, but the insurance company will be the one bearing the costs. This example shows numerically how moral hazard can occur with health insurance: the individual consumes more health care than the equilibrium quantity because they do not bear the cost of the additional care.

Economic theory

In economic theory, moral hazard is a situation in which the behavior of one party may change to the detriment of another after the transaction has taken place. For example, a person with insurance against automobile theft may be less cautious about locking their car because the negative consequences of vehicle theft are now (partially) the responsibility of the insurance company. A party makes a decision about how much risk to take, while another party bears the costs if things go badly, and the party insulated from risk behaves differently from how it would if it were fully exposed to the risk. In microeconomics, agency theory analyses the relationship between the principal, the party who delegates decision-making authority, and the agent, who executes the service. This theory is a key concept used to explore and resolve issues that have arisen within the relationship of agents and principals, which is known as the principal–agent problem. The theory is subdivided into two categories: (1) the moral hazard model; and (2) the adverse selection model.
To summarize the latter: adverse selection arises when two parties hold unequal or asymmetric information. In contract theory (which encompasses agency theory), the agent in the adverse selection model holds private information before the contract is created with the principal, whereas in the moral hazard model the agent becomes privately informed after the contract is created. According to contract theory, moral hazard results from a situation in which a hidden action occurs. Bengt Holmström said this: "It has long been recognized that a problem of moral hazard may arise when individuals engage in risk sharing under conditions such that their privately taken actions affect the probability distribution of the outcome." Moral hazard can be divided into two types when it involves asymmetric information (or lack of verifiability) about the outcome of a random event. An ex ante moral hazard is a change in behavior prior to the outcome of the random event, whereas ex post involves behavior after the outcome. For instance, in the case of a health insurance company insuring an individual during a specific time period, the final health of the individual can be thought of as the outcome. The individual taking greater risks during the period would be ex ante moral hazard, whereas lying about a fictitious health problem to defraud the insurance company would be ex post moral hazard. A second example is the case of a bank making a loan to an entrepreneur for a risky business venture. The entrepreneur becoming overly risky would be ex ante moral hazard, but willful default (wrongly claiming the venture failed when it was profitable) is ex post moral hazard. According to Hart and Holmström (1987), moral hazard models can be subdivided into models with hidden action and models with hidden information. In the former case, after the contract has been signed the agent chooses an action (such as an effort level) that cannot be observed by the principal. In the latter case, after the contract has been signed there is a random draw by nature that determines the agent's type (such as his valuation of a good or his cost of effort). In the literature, two reasons have been discussed why moral hazard may imply that the first-best solution (the solution that would be attained under complete information) is not achieved. First, the agent may be risk-averse, so there is a trade-off between providing the agent with incentives and insuring the agent. Second, the agent may be risk-neutral but wealth-constrained, so the agent cannot make a payment to the principal, and there is a trade-off between providing incentives and minimizing the agent's limited-liability rent. Among the early contributors to the contract-theoretic literature on moral hazard were Oliver Hart and Sanford J. Grossman. Since then, the moral hazard model has been extended to the cases of multiple periods and multiple tasks, both with risk-averse and risk-neutral agents. There are also models that combine hidden action and hidden information. Since there is no data on unobservable variables, it is quite difficult to test the contract-theoretic moral hazard model directly; however, there have been some successful indirect tests with field data. Direct tests of moral hazard theory are feasible in laboratory settings, using the tools of experimental economics. In such a setup, Hoppe and Schmitz (2018) have corroborated central insights of moral hazard theory.
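The second trade-off above – incentives versus the agent's limited-liability rent – can be made concrete with a small worked example. The sketch below is a minimal two-outcome hidden-action model in Python; the project value, effort cost, and success probabilities are illustrative numbers chosen for this sketch, not parameters taken from the literature.

```python
# Minimal hidden-action moral hazard sketch: a risk-neutral but
# wealth-constrained agent chooses an unobservable high or low effort.
# All parameter values below are illustrative assumptions.

V = 10.0      # value to the principal if the project succeeds
p_high = 0.8  # success probability under high effort
p_low = 0.4   # success probability under low effort (shirking)
c = 1.0       # agent's private cost of high effort

# Limited liability: wages cannot be negative, so the principal can only
# reward success with a bonus b and pay 0 on failure.
# Incentive compatibility requires p_high * b - c >= p_low * b.
b = c / (p_high - p_low)        # smallest bonus that induces high effort

agent_rent = p_high * b - c     # limited-liability rent left to the agent
second_best = p_high * (V - b)  # principal's profit with unobservable effort
first_best = p_high * V - c     # profit if effort were observable and the
                                # principal could simply reimburse the cost c
shirking = p_low * V            # profit if the principal pays no bonus at all

print(f"minimum incentive-compatible bonus: {b:.2f}")           # 2.50
print(f"agent's limited-liability rent:     {agent_rent:.2f}")  # 1.00
print(f"principal's second-best profit:     {second_best:.2f}") # 6.00
print(f"principal's first-best profit:      {first_best:.2f}")  # 7.00
print(f"profit if shirking is tolerated:    {shirking:.2f}")    # 4.00

# The wedge between first best and second best equals the agent's rent.
assert abs(first_best - second_best - agent_rent) < 1e-9
```

Under these illustrative numbers, inducing effort (profit 6) still beats tolerating shirking (profit 4), but the principal loses exactly the agent's rent of 1 relative to the first best (profit 7): because the wealth-constrained agent cannot be charged up front, incentives can only be provided by leaving the agent a rent.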
Managerial economics

In the field of managerial economics, moral hazard refers to a situation in which an individual or entity engages in risky behavior because it knows that the costs associated with that behavior will be borne by another party. This phenomenon often arises in the presence of information asymmetry, where one party possesses more information than the other. For instance, within an employment relationship, an employee may engage in risky behavior with the understanding that any negative consequences will be absorbed by their employer. To mitigate moral hazard, firms may implement various mechanisms, such as performance-based incentives, monitoring, and screening, to align the interests of both parties and reduce the likelihood of risky behavior.

See also

Brinkmanship
Conflict of interest
Economic inequality
Feedback
Free rider problem
Game theory
Information economics
Necropolitics
Offset hypothesis
Perverse incentive
Risk compensation
Samaritan's dilemma
Speciesism
Systemic risk
Too big to fail
Tragedy of the commons
Unintended consequences
Wild animal suffering

References

External links

"What's so Moral about the Moral Hazard?". Press.illinois.edu
"Inside the Meltdown", PBS's Frontline episode uses the idea as a central theme
Moral hazard
[ "Physics" ]
4,960
[ "Asymmetric information", "Symmetry", "Asymmetry" ]
175,596
https://en.wikipedia.org/wiki/Animal%20testing
Animal testing, also known as animal experimentation, animal research, and in vivo testing, is the use of non-human animals, such as model organisms, in experiments that seek to control the variables that affect the behavior or biological system under study. This approach can be contrasted with field studies in which animals are observed in their natural environments or habitats. Experimental research with animals is usually conducted in universities, medical schools, pharmaceutical companies, defense establishments, and commercial facilities that provide animal-testing services to the industry. The focus of animal testing varies on a continuum from pure research, focusing on developing fundamental knowledge of an organism, to applied research, which may focus on answering some questions of great practical importance, such as finding a cure for a disease. Examples of applied research include testing disease treatments, breeding, defense research, and toxicology, including cosmetics testing. In education, animal testing is sometimes a component of biology or psychology courses. Research using animal models has been central to most of the achievements of modern medicine. It has contributed to most of the basic knowledge in fields such as human physiology and biochemistry, and has played significant roles in fields such as neuroscience and infectious disease. The results have included the near-eradication of polio and the development of organ transplantation, and have benefited both humans and animals. From 1910 to 1927, Thomas Hunt Morgan's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes, and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". Research in model organisms led to further medical advances, such as the production of the diphtheria antitoxin and the 1922 discovery of insulin and its use in treating diabetes, which had previously meant death. Modern general anaesthetics such as halothane were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine. Animal testing is widely used to aid in research of human disease when human experimentation would be unfeasible or unethical. This strategy is made possible by the common descent of all living organisms, and the conservation of metabolic and developmental pathways and genetic material over the course of evolution. Performing experiments in model organisms allows for better understanding the disease process without the added risk of harming an actual human. The species of the model organism is usually chosen so that it reacts to disease or its treatment in a way that resembles human physiology as needed. Biological activity in a model organism does not ensure an effect in humans, and care must be taken when generalizing from one organism to another. However, many drugs, treatments and cures for human diseases are developed in part with the guidance of animal models. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections. 
Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions in which there is no useful in vitro model system available. The annual use of vertebrate animals—from zebrafish to non-human primates—was estimated at 192 million as of 2015. In the European Union, vertebrate species represent 93% of animals used in research, and 11.5 million animals were used there in 2011. The mouse (Mus musculus) is associated with many important biological discoveries of the 20th and 21st centuries, and by one estimate, the number of mice and rats used in the United States alone in 2001 was 80 million. In 2013, it was reported that mammals (mice and rats), fish, amphibians, and reptiles together accounted for over 85% of research animals. In 2022, a law was passed in the United States that eliminated the FDA requirement that all drugs be tested on animals. Animal testing is regulated to varying degrees in different countries: in some it is strictly controlled, while others have more relaxed regulations. There are ongoing debates about the ethics and necessity of animal testing. Proponents argue that it has led to significant advancements in medicine and other fields, while opponents raise concerns about cruelty towards animals and question its effectiveness and reliability. There are efforts underway to find alternatives to animal testing, such as computer simulation models; organs-on-chips technology that mimics human organs for lab tests; microdosing techniques, which involve administering small doses of test compounds to human volunteers instead of non-human animals for safety tests or drug screenings; positron emission tomography (PET) scans, which allow scanning of the human brain without harming humans; comparative epidemiological studies among human populations; and simulators and computer programs for teaching purposes.

Definitions

The terms animal testing, animal experimentation, animal research, in vivo testing, and vivisection have similar denotations but different connotations. Literally, "vivisection" means "live sectioning" of an animal, and historically referred only to experiments that involved the dissection of live animals. The term is occasionally used to refer pejoratively to any experiment using living animals; for example, the Encyclopædia Britannica defines "vivisection" as: "Operation on a living animal for experimental rather than healing purposes; more broadly, all experimentation on live animals", although dictionaries point out that the broader definition is "used only by people who are opposed to such work". The word has a negative connotation, implying torture, suffering, and death. The word "vivisection" is preferred by those opposed to this research, whereas scientists typically use the term "animal experimentation". The following text excludes, as much as possible, practices related to in vivo veterinary surgery, which is left to the discussion of vivisection.

History

The earliest references to animal testing are found in the writings of the Greeks in the 2nd and 4th centuries BCE. Aristotle and Erasistratus were among the first to perform experiments on living animals. Galen, a 2nd-century Roman physician, performed post-mortem dissections of pigs and goats.
Avenzoar, a 12th-century Arabic physician in Moorish Spain, introduced an experimental method of testing surgical procedures before applying them to human patients. Discoveries in the 18th and 19th centuries included Antoine Lavoisier's use of a guinea pig in a calorimeter to prove that respiration was a form of combustion, and Louis Pasteur's demonstration of the germ theory of disease in the 1880s using anthrax in sheep. Robert Koch used mice and guinea pigs to discover the bacteria that cause anthrax and tuberculosis. In the 1890s, Ivan Pavlov famously used dogs to describe classical conditioning. Research using animal models has been central to most of the achievements of modern medicine. It has contributed most of the basic knowledge in fields such as human physiology and biochemistry, and has played significant roles in fields such as neuroscience and infectious disease. For example, the results have included the near-eradication of polio and the development of organ transplantation, and have benefited both humans and animals. From 1910 to 1927, Thomas Hunt Morgan's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes. Drosophila became one of the first, and for some time the most widely used, model organisms, and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". D. melanogaster remains one of the most widely used eukaryotic model organisms. During the same time period, studies on mouse genetics in the laboratory of William Ernest Castle, in collaboration with Abbie Lathrop, led to the generation of the DBA ("dilute, brown and non-agouti") inbred mouse strain and the systematic generation of other inbred strains. The mouse has since been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries. In the late 19th century, Emil von Behring isolated the diphtheria toxin and demonstrated its effects in guinea pigs. He went on to develop an antitoxin against diphtheria in animals and then in humans, which resulted in the modern methods of immunization and largely ended diphtheria as a threatening disease. The diphtheria antitoxin is famously commemorated in the Iditarod race, which is modeled after the delivery of antitoxin in the 1925 serum run to Nome. The success of animal studies in producing the diphtheria antitoxin has also been credited with contributing to the decline of early 20th-century opposition to animal research in the United States. Subsequent research in model organisms led to further medical advances, such as Frederick Banting's research in dogs, which determined that the isolates of pancreatic secretion could be used to treat dogs with diabetes. This led to the 1922 discovery of insulin (with John Macleod) and its use in treating diabetes, which had previously meant death. John Cade's research in guinea pigs discovered the anticonvulsant properties of lithium salts, which revolutionized the treatment of bipolar disorder, replacing the previous treatments of lobotomy or electroconvulsive therapy. Modern general anaesthetics, such as halothane and related compounds, were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. In the 1940s, Jonas Salk used rhesus monkey studies to isolate the most virulent forms of the polio virus, which led to his creation of a polio vaccine.
The vaccine, which was made publicly available in 1955, reduced the incidence of polio 15-fold in the United States over the following five years. Albert Sabin improved the vaccine by passing the polio virus through animal hosts, including monkeys; the Sabin vaccine was produced for mass consumption in 1963, and had virtually eradicated polio in the United States by 1965. It has been estimated that developing and producing the vaccines required the use of 100,000 rhesus monkeys, with 65 doses of vaccine produced from each monkey. Sabin wrote in 1992, "Without the use of animals and human beings, it would have been impossible to acquire the important knowledge needed to prevent much suffering and premature death not only among humans, but also among animals." On 3 November 1957, a Soviet dog, Laika, became the first of many animals to orbit the Earth. In the 1970s, antibiotic treatments and vaccines for leprosy were developed using armadillos, then given to humans. The ability of humans to change the genetics of animals took an enormous step forward in 1974 when Rudolf Jaenisch was able to produce the first transgenic mammal, by integrating DNA from the SV40 virus into the genome of mice. This genetic research progressed rapidly and, in 1996, Dolly the sheep was born, the first mammal to be cloned from an adult cell. Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections. Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions in which there is no useful in vitro model system available. Toxicology testing became important in the 20th century. In the 19th century, laws regulating drugs were more relaxed. For example, in the US, the government could only ban a drug after it had prosecuted a company for selling products that harmed customers. However, in response to the Elixir Sulfanilamide disaster of 1937, in which the eponymous drug killed over 100 users, the US Congress passed laws that required safety testing of drugs on animals before they could be marketed. Other countries enacted similar legislation. In the 1960s, in reaction to the thalidomide tragedy, further laws were passed requiring safety testing on pregnant animals before a drug could be sold.

Model organisms

Invertebrates

Although many more invertebrates than vertebrates are used in animal testing, these studies are largely unregulated by law. The most frequently used invertebrate species are Drosophila melanogaster, a fruit fly, and Caenorhabditis elegans, a nematode worm. In the case of C. elegans, the worm's body is completely transparent and the precise lineage of all the organism's cells is known, while studies in the fly D. melanogaster can use a remarkable array of genetic tools. These invertebrates offer some advantages over vertebrates in animal testing, including their short life cycle and the ease with which large numbers may be housed and studied.
However, the lack of an adaptive immune system and their simple organs prevent worms from being used in several aspects of medical research, such as vaccine development. Similarly, the fruit fly immune system differs greatly from that of humans, and diseases in insects can be different from diseases in vertebrates; however, fruit flies and waxworms can be useful in studies to identify novel virulence factors or pharmacologically active compounds. Several invertebrate systems are considered acceptable alternatives to vertebrates in early-stage discovery screens. Because of similarities between the innate immune systems of insects and mammals, insects can replace mammals in some types of studies. Drosophila melanogaster and the Galleria mellonella waxworm have been particularly important for analysis of virulence traits of mammalian pathogens. Waxworms and other insects have also proven valuable for the identification of pharmaceutical compounds with favorable bioavailability. The decision to adopt such models generally involves accepting a lower degree of biological similarity with mammals for significant gains in experimental throughput.

Rodents

In the U.S., the number of rats and mice used each year is estimated to range from 11 million to between 20 and 100 million. Other rodents commonly used are guinea pigs, hamsters, and gerbils. Mice are the most commonly used vertebrate species because of their size, low cost, ease of handling, and fast reproduction rate. Mice are widely considered to be the best model of inherited human disease and share 95% of their genes with humans. With the advent of genetic engineering technology, genetically modified mice can be generated to order and can provide models for a range of human diseases. Rats are also widely used for physiology, toxicology and cancer research, but genetic manipulation is much harder in rats than in mice, which limits the use of these rodents in basic science.

Dogs

Dogs are widely used in biomedical research, testing, and education—particularly beagles, because they are gentle and easy to handle, and to allow for comparisons with historical data from beagles (a Reduction technique). They are used as models for human and veterinary diseases in cardiology, endocrinology, and bone and joint studies, research that tends to be highly invasive, according to the Humane Society of the United States. The most common use of dogs is in the safety assessment of new medicines for human or veterinary use as a second species following testing in rodents, in accordance with the regulations set out in the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. One of the most significant advancements in medical science involved the use of dogs in working out the role of the pancreas in insulin production in diabetics. Researchers found that the pancreas was responsible for producing insulin in the body and that removal of the pancreas resulted in the development of diabetes in the dog. After re-injecting the pancreatic extract (insulin), blood glucose levels were significantly lowered. The advancements made in this research involving dogs have resulted in marked improvements in the quality of life for both humans and animals. The U.S. Department of Agriculture's Animal Welfare Report shows that 60,979 dogs were used in USDA-registered facilities in 2016.
In the UK, according to the UK Home Office, there were 3,847 procedures on dogs in 2017. Of the other large EU users of dogs, Germany conducted 3,976 procedures on dogs in 2016 and France conducted 4,204 procedures in 2016. In both cases this represents under 0.2% of the total number of procedures conducted on animals in the respective countries.

Zebrafish

Zebrafish are commonly used in basic research on the development of various cancers, and to explore the immune system and genetic strains. They are low in cost and small in size, have a fast reproduction rate, and allow cancer cells to be observed in real time. Zebrafish are used in this research because humans and zebrafish share similarities in their neoplasms. The National Library of Medicine gives many examples of the types of cancer zebrafish are used to study. In acute lymphocytic leukemia, zebrafish have allowed researchers to find differences between MYC-driven pre-B and T-ALL, and have been exploited to discover novel pre-B ALL therapies. The National Library of Medicine also explains that a neoplasm is difficult to diagnose at an early stage; current research aims at understanding the molecular mechanism of digestive tract tumorigenesis and searching for new treatments. Zebrafish and humans share similar gastric cancer cells in the gastric cancer xenotransplantation model, which allowed researchers to find that Triphala could inhibit the growth and metastasis of gastric cancer cells. Since zebrafish liver cancer genes are related to those of humans, zebrafish have become widely used in liver cancer research, as well as in research on many other cancers.

Non-human primates

Non-human primates (NHPs) are used in toxicology tests, studies of AIDS and hepatitis, studies of neurology, behavior and cognition, reproduction, genetics, and xenotransplantation. They are caught in the wild or purpose-bred. In the United States and China, most primates are domestically purpose-bred, whereas in Europe the majority are imported purpose-bred. The European Commission reported that in 2011, 6,012 monkeys were experimented on in European laboratories. According to the U.S. Department of Agriculture, there were 71,188 monkeys in U.S. laboratories in 2016. In 2014, 23,465 monkeys were imported into the U.S., including 929 caught in the wild. Most of the NHPs used in experiments are macaques, but marmosets, spider monkeys, and squirrel monkeys are also used, and baboons and chimpanzees are used in the US. There are approximately 730 chimpanzees in U.S. laboratories. In a 2003 survey, it was found that 89% of singly-housed primates exhibited self-injurious or abnormal stereotypical behaviors, including pacing, rocking, hair pulling, and biting. The first transgenic primate was produced in 2001, with the development of a method that could introduce new genes into a rhesus macaque. This transgenic technology is now being applied in the search for a treatment for the genetic disorder Huntington's disease. Notable studies on non-human primates have been part of the development of the polio vaccine and of deep brain stimulation, and their current heaviest non-toxicological use occurs in the monkey AIDS model, SIV. In 2008, a proposal to ban all primate experiments in the EU sparked a vigorous debate.

Other species

Over 500,000 fish and 9,000 amphibians were used in the UK in 2016. The main species used are the zebrafish, Danio rerio, which is translucent during its embryonic stage, and the African clawed frog, Xenopus laevis.
Over 20,000 rabbits were used for animal testing in the UK in 2004. Albino rabbits are used in eye irritancy tests (Draize test) because rabbits have less tear flow than other animals, and the lack of eye pigment in albinos makes the effects easier to visualize. The number of rabbits used for this purpose has fallen substantially over the past two decades. In 1996, there were 3,693 procedures on rabbits for eye irritation in the UK, and in 2017 this number was just 63. Rabbits are also frequently used for the production of polyclonal antibodies. Cats are most commonly used in neurological research. In 2016, 18,898 cats were used in the United States alone, around a third of which were used in experiments with the potential to cause "pain and/or distress", though only 0.1% of cat experiments involved potential pain that was not relieved by anesthetics/analgesics. In the UK, just 198 procedures were carried out on cats in 2017. The number has been around 200 for most of the last decade.

Care and use of animals

Regulations and laws

The regulations that apply to animals in laboratories vary across species. In the U.S., under the Animal Welfare Act and the Guide for the Care and Use of Laboratory Animals (the Guide), published by the National Academy of Sciences, any procedure can be performed on an animal if it can be successfully argued that it is scientifically justified. Researchers are required to consult with the institution's veterinarian and its Institutional Animal Care and Use Committee (IACUC), which every research facility is obliged to maintain. The IACUC must ensure that alternatives, including non-animal alternatives, have been considered, that the experiments are not unnecessarily duplicative, and that pain relief is given unless it would interfere with the study. The IACUCs regulate all vertebrates in testing at institutions receiving federal funds in the USA. Although the Animal Welfare Act does not include purpose-bred rodents and birds, these species are equally regulated under Public Health Service policies that govern the IACUCs. The Public Health Service policy oversees the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC). The CDC conducts infectious disease research on nonhuman primates, rabbits, mice, and other animals, while FDA requirements cover use of animals in pharmaceutical research. Animal Welfare Act (AWA) regulations are enforced by the USDA, whereas Public Health Service regulations are enforced by OLAW and in many cases by AAALAC. According to the 2014 U.S. Department of Agriculture Office of the Inspector General (OIG) report—which looked at the oversight of animal use during a three-year period—"some Institutional Animal Care and Use Committees ...did not adequately approve, monitor, or report on experimental procedures on animals". The OIG found that "as a result, animals are not always receiving basic humane care and treatment and, in some cases, pain and distress are not minimized during and after experimental procedures". According to the report, within a three-year period, nearly half of all American laboratories with regulated species were cited for AWA violations relating to improper IACUC oversight. The USDA OIG made similar findings in a 2005 report. With only about 120 inspectors, the United States Department of Agriculture (USDA) oversees more than 12,000 facilities involved in research, exhibition, breeding, or dealing of animals.
Others have criticized the composition of IACUCs, asserting that the committees are predominantly made up of animal researchers and university representatives who may be biased against animal welfare concerns. Larry Carbone, a laboratory animal veterinarian, writes that, in his experience, IACUCs take their work very seriously regardless of the species involved, though the use of non-human primates always raises what he calls a "red flag of special concern". A study published in Science magazine in July 2001 confirmed the low reliability of IACUC reviews of animal experiments. Funded by the National Science Foundation, the three-year study found that animal-use committees that do not know the specifics of the university and personnel involved do not make the same approval decisions as committees that do know them. Specifically, blinded committees more often ask for more information rather than approving studies. Scientists in India are protesting a recent guideline issued by the University Grants Commission to ban the use of live animals in universities and laboratories.

Numbers

Accurate global figures for animal testing are difficult to obtain; it has been estimated that 100 million vertebrates are experimented on around the world every year, 10–11 million of them in the EU. The Nuffield Council on Bioethics reports that global annual estimates range from 50 to 100 million animals. None of the figures include invertebrates such as shrimp and fruit flies. The USDA/APHIS has published the 2016 animal research statistics. Overall, the number of animals (covered by the Animal Welfare Act) used in research in the US rose 6.9% from 767,622 (2015) to 820,812 (2016). This includes both public and private institutions. By comparing with EU data, where all vertebrate species are counted, Speaking of Research estimated that around 12 million vertebrates were used in research in the US in 2016. A 2015 article published in the Journal of Medical Ethics argued that the use of animals in the US has dramatically increased in recent years. Researchers found this increase is largely the result of an increased reliance on genetically modified mice in animal studies. In 1995, researchers at Tufts University Center for Animals and Public Policy estimated that 14–21 million animals were used in American laboratories in 1992, a reduction from a high of 50 million used in 1970. In 1986, the U.S. Congress Office of Technology Assessment reported that estimates of the animals used in the U.S. range from 10 million to upwards of 100 million each year, and that their own best estimate was at least 17 million to 22 million. In 2016, the Department of Agriculture listed 60,979 dogs, 18,898 cats, 71,188 non-human primates, 183,237 guinea pigs, 102,633 hamsters, 139,391 rabbits, 83,059 farm animals, and 161,467 other mammals, a total of 820,812, a figure that includes all mammals except purpose-bred mice and rats. The use of dogs and cats in research in the U.S. decreased between 1973 and 2016 from 195,157 to 60,979, and from 66,165 to 18,898, respectively. In the UK, Home Office figures show that 3.79 million procedures were carried out in 2017, of which 2,960 used non-human primates, down over 50% since 1988. A "procedure" refers here to an experiment that might last minutes, several months, or years. Most animals are used in only one procedure: animals are frequently euthanized after the experiment; however, death is the endpoint of some procedures.
The procedures conducted on animals in the UK in 2017 were categorised as: 43% (1.61 million) sub-threshold, 4% (0.14 million) non-recovery, 36% (1.35 million) mild, 15% (0.55 million) moderate, and 4% (0.14 million) severe. A 'severe' procedure would be, for instance, any test where death is the end-point or fatalities are expected, whereas a 'mild' procedure would be something like a blood test or an MRI scan.

The Three Rs

The Three Rs (3Rs) are guiding principles for more ethical use of animals in testing, first described by W. M. S. Russell and R. L. Burch in 1959. The 3Rs are:

Replacement, which refers to the preferred use of non-animal methods over animal methods whenever it is possible to achieve the same scientific aims. These methods include computer modeling.
Reduction, which refers to methods that enable researchers to obtain comparable levels of information from fewer animals, or to obtain more information from the same number of animals.
Refinement, which refers to methods that alleviate or minimize potential pain, suffering or distress, and enhance animal welfare for the animals used. These methods include non-invasive techniques.

The 3Rs have a broader scope than simply encouraging alternatives to animal testing; they aim to improve animal welfare and scientific quality where the use of animals cannot be avoided. The 3Rs are now implemented in many testing establishments worldwide and have been adopted by various pieces of legislation and regulations. Despite the widespread acceptance of the 3Rs, many countries—including Canada, Australia, Israel, South Korea, and Germany—have reported rising experimental use of animals in recent years, with increased use of mice and, in some cases, fish, while reporting declines in the use of cats, dogs, primates, rabbits, guinea pigs, and hamsters. Along with other countries, China has also escalated its use of GM animals, resulting in an increase in overall animal use.

Sources

Animals used by laboratories are largely supplied by specialist dealers. Sources differ for vertebrate and invertebrate animals. Most laboratories breed and raise flies and worms themselves, using strains and mutants supplied from a few main stock centers. For vertebrates, sources include breeders and dealers including Fortrea and Charles River Laboratories, which supply purpose-bred and wild-caught animals; businesses that trade in wild animals such as Nafovanny; and dealers who supply animals sourced from pounds, auctions, and newspaper ads. Animal shelters also supply the laboratories directly. Large centers also exist to distribute strains of genetically modified animals; the International Knockout Mouse Consortium, for example, aims to provide knockout mice for every gene in the mouse genome. In the U.S., Class A breeders are licensed by the U.S. Department of Agriculture (USDA) to sell animals for research purposes, while Class B dealers are licensed to buy animals from "random sources" such as auctions, pound seizure, and newspaper ads. Some Class B dealers have been accused of kidnapping pets and illegally trapping strays, a practice known as bunching. It was in part out of public concern over the sale of pets to research facilities that the 1966 Laboratory Animal Welfare Act was ushered in—the Senate Committee on Commerce reported in 1966 that stolen pets had been retrieved from Veterans Administration facilities, the Mayo Institute, the University of Pennsylvania, Stanford University, and Harvard and Yale Medical Schools.
The USDA recovered at least a dozen stolen pets during a raid on a Class B dealer in Arkansas in 2003. Four states in the U.S.—Minnesota, Utah, Oklahoma, and Iowa—require their shelters to provide animals to research facilities. Fourteen states explicitly prohibit the practice, while the remainder either allow it or have no relevant legislation. In the European Union, animal sources are governed by Council Directive 86/609/EEC, which requires lab animals to be specially bred unless the animal has been lawfully imported and is not a wild animal or a stray. The latter requirement may also be exempted by special arrangement. In 2010 the Directive was revised with EU Directive 2010/63/EU. In the UK, most animals used in experiments are bred for the purpose under the Animals (Scientific Procedures) Act 1986, but wild-caught primates may be used if exceptional and specific justification can be established. The United States also allows the use of wild-caught primates; between 1995 and 1999, 1,580 wild baboons were imported into the U.S. Most of the primates imported are handled by Charles River Laboratories or by Fortrea, which are very active in the international primate trade.

Pain and suffering

The extent to which animal testing causes pain and suffering, and the capacity of animals to experience and comprehend them, is the subject of much debate. According to the USDA, in 2016, 501,560 animals (61%) (not including rats, mice, birds, or invertebrates) were used in procedures that did not include more than momentary pain or distress. 247,882 (31%) animals were used in procedures in which pain or distress was relieved by anesthesia, while 71,370 (9%) were used in studies that would cause pain or distress that would not be relieved. The idea that animals might not feel pain as human beings feel it traces back to the 17th-century French philosopher René Descartes, who argued that animals do not experience pain and suffering because they lack consciousness. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. In his interactions with scientists and other veterinarians, he was regularly asked to "prove" that animals are conscious, and to provide "scientifically acceptable" grounds for claiming that they feel pain. Carbone writes that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal, noting that although the argument that animals have at least simple conscious thoughts and feelings has strong support, some critics continue to question how reliably animal mental states can be determined. However, some canine experts state that, while intelligence differs from animal to animal, dogs have the intelligence of a two- to two-and-a-half-year-old child. This does support the idea that dogs, at the very least, have some form of consciousness. The ability of invertebrates to experience pain and suffering is less clear; however, legislation in several countries (e.g. the U.K., New Zealand, and Norway) protects some invertebrate species if they are being used in animal testing. In the U.S., the defining text on animal welfare regulation in animal testing is the Guide for the Care and Use of Laboratory Animals. This defines the parameters that govern animal testing in the U.S.
It states "The ability to experience and respond to pain is widespread in the animal kingdom...Pain is a stressor and, if not relieved, can lead to unacceptable levels of stress and distress in animals." The Guide states that the ability to recognize the symptoms of pain in different species is vital in efficiently applying pain relief and that it is essential for the people caring for and using animals to be entirely familiar with these symptoms. On the subject of analgesics used to relieve pain, the Guide states "The selection of the most appropriate analgesic or anesthetic should reflect professional judgment as to which best meets clinical and humane requirements without compromising the scientific aspects of the research protocol". Accordingly, all issues of animal pain and distress, and their potential treatment with analgesia and anesthesia, are required regulatory issues in receiving animal protocol approval. Currently, traumatic methods of marking laboratory animals are being replaced with non-invasive alternatives. In 2019, Katrien Devolder and Matthias Eggel proposed gene editing research animals to remove the ability to feel pain. This would be an intermediate step towards eventually stopping all experimentation on animals and adopting alternatives. Additionally, this would not stop research animals from experiencing psychological harm. Euthanasia Regulations require that scientists use as few animals as possible, especially for terminal experiments. However, while policy makers consider suffering to be the central issue and see animal euthanasia as a way to reduce suffering, others, such as the RSPCA, argue that the lives of laboratory animals have intrinsic value. Regulations focus on whether particular methods cause pain and suffering, not whether their death is undesirable in itself. The animals are euthanized at the end of studies for sample collection or post-mortem examination; during studies if their pain or suffering falls into certain categories regarded as unacceptable, such as depression, infection that is unresponsive to treatment, or the failure of large animals to eat for five days; or when they are unsuitable for breeding or unwanted for some other reason. Methods of euthanizing laboratory animals are chosen to induce rapid unconsciousness and death without pain or distress. The methods that are preferred are those published by councils of veterinarians. The animal can be made to inhale a gas, such as carbon monoxide and carbon dioxide, by being placed in a chamber, or by use of a face mask, with or without prior sedation or anesthesia. Sedatives or anesthetics such as barbiturates can be given intravenously, or inhalant anesthetics may be used. Amphibians and fish may be immersed in water containing an anesthetic such as tricaine. Physical methods are also used, with or without sedation or anesthesia depending on the method. Recommended methods include decapitation (beheading) for small rodents or rabbits. Cervical dislocation (breaking the neck or spine) may be used for birds, mice, rats, and rabbits depending on the size and weight of the animal. High-intensity microwave irradiation of the brain can preserve brain tissue and induce death in less than 1 second, but this is currently only used on rodents. Captive bolts may be used, typically on dogs, ruminants, horses, pigs and rabbits. It causes death by a concussion to the brain. Gunshot may be used, but only in cases where a penetrating captive bolt may not be used. 
Some physical methods are only acceptable after the animal is unconscious. Electrocution may be used for cattle, sheep, swine, foxes, and mink after the animals are unconscious, often by a prior electrical stun. Pithing (inserting a tool into the base of the brain) is usable on animals already unconscious. Slow or rapid freezing, or inducing an air embolism, are acceptable only with prior anesthesia to induce unconsciousness.

Research classification

Pure research

Basic or pure research investigates how organisms behave, develop, and function. Those opposed to animal testing object that pure research may have little or no practical purpose, but researchers argue that it forms the necessary basis for the development of applied research, rendering the distinction between pure and applied research (research that has a specific practical aim) unclear. Pure research uses larger numbers and a greater variety of animals than applied research. Fruit flies, nematode worms, mice and rats together account for the vast majority, though small numbers of other species are used, ranging from sea slugs through to armadillos. Examples of the types of animals and experiments used in basic research include:

Studies on embryogenesis and developmental biology. Mutants are created by adding transposons into their genomes, or specific genes are deleted by gene targeting. By studying the changes in development these changes produce, scientists aim to understand both how organisms normally develop and what can go wrong in this process. These studies are particularly powerful since the basic controls of development, such as the homeobox genes, have similar functions in organisms as diverse as fruit flies and man.

Experiments into behavior, to understand how organisms detect and interact with each other and their environment, in which fruit flies, worms, mice, and rats are all widely used. Studies of brain function, such as memory and social behavior, often use rats and birds. For some species, behavioral research is combined with enrichment strategies for animals in captivity because it allows them to engage in a wider range of activities.

Breeding experiments to study evolution and genetics. Laboratory mice, flies, fish, and worms are inbred through many generations to create strains with defined characteristics. These provide animals of a known genetic background, an important tool for genetic analyses. Larger mammals are rarely bred specifically for such studies due to their slow rate of reproduction, though some scientists take advantage of inbred domesticated animals, such as dog or cattle breeds, for comparative purposes. Scientists studying how animals evolve use many animal species to see how variations in where and how an organism lives (its niche) produce adaptations in its physiology and morphology. As an example, sticklebacks are now being used to study how many and which types of mutations are selected to produce adaptations in animals' morphology during the evolution of new species.

Applied research

Applied research aims to solve specific and practical problems. This may involve the use of animal models of diseases or conditions, which are often discovered or generated by pure research programmes. In turn, such applied studies may be an early stage in the drug discovery process. Examples include:

Genetic modification of animals to study disease. Transgenic animals have specific genes inserted, modified or removed, to mimic specific conditions such as single-gene disorders like Huntington's disease.
Other models mimic complex, multifactorial diseases with genetic components, such as diabetes, or even transgenic mice that carry the same mutations that occur during the development of cancer. These models allow investigations of how and why the disease develops, as well as providing ways to develop and test new treatments. The vast majority of these transgenic models of human disease are lines of mice, the mammalian species in which genetic modification is most efficient. Smaller numbers of other animals are also used, including rats, pigs, sheep, fish, birds, and amphibians.

Studies on models of naturally occurring disease and condition. Certain domestic and wild animals have a natural propensity or predisposition for certain conditions that are also found in humans. Cats are used as a model to develop immunodeficiency virus vaccines and to study leukemia because of their natural predisposition to FIV and feline leukemia virus. Certain breeds of dog experience narcolepsy, making them the major model used to study the human condition. Armadillos and humans are among only a few animal species that naturally contract leprosy; as the bacteria responsible for this disease cannot yet be grown in culture, armadillos are the primary source of bacilli used in leprosy vaccines.

Studies on induced animal models of human diseases. Here, an animal is treated so that it develops pathology and symptoms that resemble a human disease. Examples include restricting blood flow to the brain to induce stroke, or giving neurotoxins that cause damage similar to that seen in Parkinson's disease. Much animal research into potential treatments for humans is wasted because it is poorly conducted and not evaluated through systematic reviews. For example, although such models are now widely used to study Parkinson's disease, the British anti-vivisection interest group BUAV argues that these models only superficially resemble the disease symptoms, without the same time course or cellular pathology. In contrast, scientists assessing the usefulness of animal models of Parkinson's disease, as well as the medical research charity The Parkinson's Appeal, state that these models were invaluable and that they led to improved surgical treatments such as pallidotomy, new drug treatments such as levodopa, and later deep brain stimulation.

Animal testing has also included the use of placebo testing. In these cases animals are treated with a substance that produces no pharmacological effect, but which is administered in order to determine any biological alterations due to the experience of a substance being administered; the results are compared with those obtained with an active compound.

Xenotransplantation

Xenotransplantation research involves transplanting tissues or organs from one species to another, as a way to overcome the shortage of human organs for use in organ transplants. Current research involves using primates as the recipients of organs from pigs that have been genetically modified to reduce the primates' immune response against the pig tissue. Although transplant rejection remains a problem, recent clinical trials that involved implanting pig insulin-secreting cells into diabetics did reduce these people's need for insulin.
Documents released to the news media by the animal rights organization Uncaged Campaigns showed that wild baboons imported to the UK from Africa between 1994 and 2000 by Imutran Ltd, a subsidiary of Novartis Pharma AG, in conjunction with Cambridge University and Huntingdon Life Sciences, suffered serious and sometimes fatal injuries in experiments that involved grafting pig tissues. A scandal occurred when it was revealed that the company had communicated with the British government in an attempt to avoid regulation.

Toxicology testing

Toxicology testing, also known as safety testing, is conducted by pharmaceutical companies testing drugs, or by contract animal testing facilities, such as Huntingdon Life Sciences, on behalf of a wide variety of customers. According to 2005 EU figures, around one million animals are used every year in Europe in toxicology tests, which is about 10% of all procedures. According to Nature, 5,000 animals are used for each chemical being tested, with 12,000 needed to test pesticides. The tests are conducted without anesthesia, because interactions between drugs can affect how animals detoxify chemicals and may interfere with the results. Toxicology tests are used to examine finished products such as pesticides, medications, food additives, packing materials, and air freshener, or their chemical ingredients. Most tests involve ingredients rather than finished products, but according to BUAV, manufacturers believe these tests overestimate the toxic effects of substances; they therefore repeat the tests using their finished products to obtain a less toxic label. The substances are applied to the skin or dripped into the eyes; injected intravenously, intramuscularly, or subcutaneously; inhaled, either by placing a mask over the animals and restraining them or by placing them in an inhalation chamber; or administered orally, through a tube into the stomach or simply in the animal's food. Doses may be given once, repeated regularly for many months, or given for the lifespan of the animal. There are several different types of acute toxicity tests. The LD50 ("lethal dose 50%") test is used to evaluate the toxicity of a substance by determining the dose required to kill 50% of the test animal population. This test was removed from OECD international guidelines in 2002, replaced by methods such as the fixed dose procedure, which use fewer animals and cause less suffering. Abbott writes that, as of 2005, "the LD50 acute toxicity test ... still accounts for one-third of all animal [toxicity] tests worldwide". Irritancy can be measured using the Draize test, where a test substance is applied to an animal's eyes or skin, usually those of an albino rabbit. For Draize eye testing, the test involves observing the effects of the substance at intervals and grading any damage or irritation, but the test should be halted and the animal killed if it shows "continuing signs of severe pain or distress". The Humane Society of the United States writes that the procedure can cause redness, ulceration, hemorrhaging, cloudiness, or even blindness. This test has also been criticized by scientists for being cruel and inaccurate, subjective, over-sensitive, and failing to reflect human exposures in the real world. Although no accepted in vitro alternatives exist, a modified form of the Draize test called the low volume eye test may reduce suffering and provide more realistic results, and this was adopted as the new standard in September 2009.
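Returning to the LD50 test described above: the 50% figure is not read off directly from any single group but is estimated by fitting a dose-mortality curve across the dosed groups. The following Python sketch illustrates the idea with a logistic fit on log-dose; the doses, group sizes and mortality counts are invented purely for illustration, and a real probit analysis would also weight each group by its binomial variance.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical quantal data: dose groups, group sizes, observed deaths.
doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # mg/kg
n_animals = np.array([10, 10, 10, 10, 10])
n_dead = np.array([0, 2, 5, 8, 10])

def logistic(log_dose, b0, b1):
    # Probability of death as a function of log10(dose).
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * log_dose)))

p_obs = n_dead / n_animals
(b0, b1), _ = curve_fit(logistic, np.log10(doses), p_obs, p0=[0.0, 1.0])

# The LD50 is the dose at which fitted mortality is 0.5,
# i.e. where b0 + b1 * log10(dose) = 0.
ld50 = 10 ** (-b0 / b1)
print(f"estimated LD50 = {ld50:.2f} mg/kg")

The fixed dose procedure that replaced the LD50 in OECD guidelines avoids targeting lethality as an endpoint at all, which is part of why it requires fewer animals.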
Despite the adoption of the low volume eye test, the Draize test will still be used for substances that are not severe irritants. The most stringent tests are reserved for drugs and foodstuffs. For these, a number of tests are performed, lasting less than a month (acute), one to three months (subchronic), and more than three months (chronic), to test general toxicity (damage to organs), eye and skin irritancy, mutagenicity, carcinogenicity, teratogenicity, and reproductive problems. The cost of the full complement of tests is several million dollars per substance, and it may take three or four years to complete. These toxicity tests provide, in the words of a 2006 United States National Academy of Sciences report, "critical information for assessing hazard and risk potential". Animal tests may overestimate risk, with false positive results being a particular problem, but false positives appear not to be prohibitively common. Variability in results arises from using the effects of high doses of chemicals in small numbers of laboratory animals to try to predict the effects of low doses in large numbers of humans. Although relationships do exist, opinion is divided on how to use data on one species to predict the exact level of risk in another. Scientists face growing pressure to move away from using traditional animal toxicity tests to determine whether manufactured chemicals are safe. Among the variety of approaches to toxicity evaluation, those that have attracted increasing interest are in vitro cell-based sensing methods applying fluorescence.

Cosmetics testing

Cosmetics testing on animals is particularly controversial. Such tests, which are still conducted in the U.S., involve general toxicity, eye and skin irritancy, phototoxicity (toxicity triggered by ultraviolet light) and mutagenicity. Cosmetics testing on animals is banned in India, the United Kingdom, the European Union, Israel and Norway, while similar bans are under consideration in the U.S. and Brazil. In 2002, after 13 years of discussion, the European Union agreed to phase in a near-total ban on the sale of animal-tested cosmetics by 2009, and to ban all cosmetics-related animal testing. France, which is home to the world's largest cosmetics company, L'Oreal, protested the proposed ban by lodging a case at the European Court of Justice in Luxembourg, asking that the ban be quashed. The ban was also opposed by the European Federation for Cosmetics Ingredients, which represents 70 companies in Switzerland, Belgium, France, Germany, and Italy. In October 2014, India passed stricter laws that also ban the importation of any cosmetic products that are tested on animals.

Drug testing

Before the early 20th century, laws regulating drugs were lax. Currently, all new pharmaceuticals undergo rigorous animal testing before being licensed for human use. Tests on pharmaceutical products involve:

metabolic tests, investigating pharmacokinetics, i.e. how drugs are absorbed, metabolized and excreted by the body when introduced orally, intravenously, intraperitoneally, intramuscularly, or transdermally.

toxicology tests, which gauge acute, sub-acute, and chronic toxicity. Acute toxicity is studied by using a rising dose until signs of toxicity become apparent. Current European legislation demands that "acute toxicity tests must be carried out in two or more mammalian species" covering "at least two different routes of administration".
Sub-acute toxicity is where the drug is given to the animals for four to six weeks, in doses below the level at which it causes rapid poisoning, in order to discover whether any toxic drug metabolites build up over time. Testing for chronic toxicity can last up to two years and, in the European Union, is required to involve two species of mammals, one of which must be non-rodent.

efficacy studies, which test whether experimental drugs work by inducing the appropriate illness in animals. The drug is then administered in a double-blind controlled trial, which allows researchers to determine the effect of the drug and the dose-response curve.

Specific tests on reproductive function, embryonic toxicity, or carcinogenic potential can all be required by law, depending on the result of other studies and the type of drug being tested.

Education

It is estimated that 20 million animals are used annually for educational purposes in the United States, including classroom observational exercises, dissections and live-animal surgeries. Frogs, fetal pigs, perch, cats, earthworms, grasshoppers, crayfish and starfish are commonly used in classroom dissections. Alternatives to the use of animals in classroom dissections are widely used, with many U.S. states and school districts mandating that students be offered the choice not to dissect. Citing the wide availability of alternatives and the decimation of local frog species, India banned dissections in 2014. The Sonoran Arthropod Institute hosts an annual Invertebrates in Education and Conservation Conference to discuss the use of invertebrates in education. There are also efforts in many countries to find alternatives to using animals in education. The NORINA database, maintained by Norecopa, lists products that may be used as alternatives or supplements to animal use in education and in the training of personnel who work with animals. These include alternatives to dissection in schools. InterNICHE has a similar database and a loans system. In November 2013, the U.S.-based company Backyard Brains released for sale to the public what they call the "Roboroach", an "electronic backpack" that can be attached to cockroaches. The operator is required to amputate a cockroach's antennae, use sandpaper to wear down the shell, insert a wire into the thorax, and then glue the electrodes and circuit board onto the insect's back. A mobile phone app can then be used to control it via Bluetooth. It has been suggested that such a device may be a teaching aid that can promote interest in science. The makers of the "Roboroach" have been funded by the National Institute of Mental Health and state that the device is intended to encourage children to become interested in neuroscience.

Defense

Animals are used by the military to develop weapons, vaccines, battlefield surgical techniques, and defensive clothing. For example, in 2008 the United States Defense Advanced Research Projects Agency used live pigs to study the effects of improvised explosive device explosions on internal organs, especially the brain. In the US military, goats are commonly used to train combat medics. (Goats have become the main animal species used for this purpose after the Pentagon phased out using dogs for medical training in the 1980s.) While modern mannequins used in medical training are quite effective in simulating the behavior of a human body, some trainees feel that "the goat exercise provide[s] a sense of urgency that only real life trauma can provide". Nevertheless, in 2014, the U.S.
Coast Guard announced that it would reduce the number of animals it uses in its training exercises by half after PETA released video showing Guard members cutting off the limbs of unconscious goats with tree trimmers and inflicting other injuries with a shotgun, pistol, ax and a scalpel. That same year, citing the availability of human simulators and other alternatives, the Department of Defense announced it would begin reducing the number of animals it uses in various training programs. In 2013, several Navy medical centers stopped using ferrets in intubation exercises after complaints from PETA. Besides the United States, six out of 28 NATO countries, including Poland and Denmark, use live animals for combat medic training.

Ethics

Most animals are euthanized after being used in an experiment. Sources of laboratory animals vary between countries and species; most animals are purpose-bred, while a minority are caught in the wild or supplied by dealers who obtain them from auctions and pounds. Supporters of the use of animals in experiments, such as the British Royal Society, argue that virtually every medical achievement in the 20th century relied on the use of animals in some way. The Institute for Laboratory Animal Research of the United States National Academy of Sciences has argued that animal testing cannot be replaced by even sophisticated computer models, which are unable to deal with the extremely complex interactions between molecules, cells, tissues, organs, organisms and the environment. Animal rights organizations, such as PETA and BUAV, question the need for and legitimacy of animal testing, arguing that it is cruel and poorly regulated, that medical progress is actually held back by misleading animal models that cannot reliably predict effects in humans, that some of the tests are outdated, that the costs outweigh the benefits, or that animals have the intrinsic right not to be used or harmed in experimentation.

Viewpoints

The moral and ethical questions raised by performing experiments on animals are subject to debate, and viewpoints have shifted significantly over the 20th century. There remain disagreements about which procedures are useful for which purposes, as well as disagreements over which ethical principles apply to which species. A 2015 Gallup poll found that 67% of Americans were "very concerned" or "somewhat concerned" about animals used in research. A Pew poll taken the same year found that 50% of American adults opposed the use of animals in research. Still, a wide range of viewpoints exists. The view that animals have moral rights (animal rights) is a philosophical position proposed by Tom Regan, among others, who argues that animals are beings with beliefs and desires, and as such are the "subjects of a life" with moral value and therefore moral rights. Regan still sees ethical differences between killing human and non-human animals, and argues that to save the former it is permissible to kill the latter. Likewise, a "moral dilemma" view suggests that forgoing potential benefit to humans is unacceptable on similar grounds, and holds the issue to be a dilemma in balancing harm to humans against the harm done to animals in research. In contrast, an abolitionist view in animal rights holds that there is no moral justification for any harmful research on animals that is not to the benefit of the individual animal.
Bernard Rollin argues that benefits to human beings cannot outweigh animal suffering, and that human beings have no moral right to use an animal in ways that do not benefit that individual. Donald Watson has stated that vivisection and animal experimentation "is probably the cruelest of all Man's attack on the rest of Creation." Another prominent position is that of philosopher Peter Singer, who argues that there are no grounds to include a being's species in considerations of whether their suffering is important in utilitarian moral considerations. Malcolm Macleod and collaborators argue that most controlled animal studies do not employ randomization, allocation concealment, and blinding of outcome assessment, and that failure to employ these features exaggerates the apparent benefit of drugs tested in animals, leading to a failure to translate much animal research for human benefit. Governments such as those of the Netherlands and New Zealand have responded to the public's concerns by outlawing invasive experiments on certain classes of non-human primates, particularly the great apes. In 2015, captive chimpanzees in the U.S. were added to the Endangered Species Act, adding new roadblocks for those wishing to experiment on them. Similarly, citing ethical considerations and the availability of alternative research methods, the U.S. NIH announced in 2013 that it would dramatically reduce and eventually phase out experiments on chimpanzees. The British government has required that the cost to animals in an experiment be weighed against the gain in knowledge. Some medical schools and agencies in China, Japan, and South Korea have built cenotaphs for killed animals. In Japan there are also annual memorial services (ireisai) for animals sacrificed at medical schools.

Various specific cases of animal testing have drawn attention, including both instances of beneficial scientific research and instances of alleged ethical violations by those performing the tests. The fundamental properties of muscle physiology were determined with work done using frog muscles (including the force-generating mechanism of all muscle, the length-tension relationship, and the force-velocity curve), and frogs are still the preferred model organism due to the long survival of muscles in vitro and the possibility of isolating intact single-fiber preparations (not possible in other organisms). Modern physical therapy and the understanding and treatment of muscular disorders are based on this work and on subsequent work in mice (often engineered to express disease states such as muscular dystrophy). In February 1997 a team at the Roslin Institute in Scotland announced the birth of Dolly the sheep, the first mammal to be cloned from an adult somatic cell.

Concerns have been raised over the mistreatment of primates undergoing testing. In 1985, the case of Britches, a macaque monkey at the University of California, Riverside, gained public attention. He had his eyelids sewn shut and a sonar sensor on his head as part of an experiment to test sensory substitution devices for blind people. The laboratory was raided by the Animal Liberation Front in 1985, which removed Britches and 466 other animals. The National Institutes of Health conducted an eight-month investigation and concluded, however, that no corrective action was necessary. During the 2000s other cases have made headlines, including experiments at the University of Cambridge and Columbia University in 2002.
In 2004 and 2005, undercover footage of staff at an animal testing facility in Virginia owned by Covance (now Fortrea) was shot by People for the Ethical Treatment of Animals (PETA). Following release of the footage, the U.S. Department of Agriculture fined the company $8,720 for 16 citations, three of which involved lab monkeys; the other citations involved administrative issues and equipment.

Threats to researchers

Threats of violence to animal researchers are not uncommon. In 2006, a primate researcher at the University of California, Los Angeles (UCLA) shut down the experiments in his lab after threats from animal rights activists. The researcher had received a grant to use 30 macaque monkeys for vision experiments; each monkey was anesthetized for a single physiological experiment lasting up to 120 hours, and then euthanized. The researcher's name, phone number, and address were posted on the website of the Primate Freedom Project. Demonstrations were held in front of his home. A Molotov cocktail was placed on the porch of what was believed to be the home of another UCLA primate researcher; instead, it was accidentally left on the porch of an elderly woman unrelated to the university. The Animal Liberation Front claimed responsibility for the attack. As a result of the campaign, the researcher sent an email to the Primate Freedom Project stating "you win" and "please don't bother my family anymore". In another incident at UCLA in June 2007, the Animal Liberation Brigade placed a bomb under the car of a UCLA children's ophthalmologist who experiments on cats and rhesus monkeys; the bomb had a faulty fuse and did not detonate. In 1997, PETA filmed staff from Huntingdon Life Sciences showing dogs being mistreated. The employees responsible were dismissed, with two given community service orders and ordered to pay £250 in costs, the first lab technicians to have been prosecuted for animal cruelty in the UK. The Stop Huntingdon Animal Cruelty (SHAC) campaign used tactics ranging from non-violent protest to the alleged firebombing of houses owned by executives associated with HLS's clients and investors. The Southern Poverty Law Center, which monitors US domestic extremism, has described SHAC's modus operandi as "frankly terroristic tactics similar to those of anti-abortion extremists", and in 2005 an official with the FBI's counter-terrorism division referred to SHAC's activities in the United States as domestic terrorist threats. Thirteen members of SHAC were jailed for between 15 months and eleven years on charges of conspiracy to blackmail or harm HLS and its suppliers. These attacks, as well as similar incidents that caused the Southern Poverty Law Center to declare in 2002 that the animal rights movement had "clearly taken a turn toward the more extreme", prompted the US government to pass the Animal Enterprise Terrorism Act and the UK government to add the offense of "intimidation of persons connected with animal research organisation" to the Serious Organised Crime and Police Act 2005. Such legislation and the arrest and imprisonment of activists may have decreased the incidence of attacks.

Scientific criticism

Systematic reviews have pointed out that animal testing often fails to accurately mirror outcomes in humans. For instance, a 2013 review noted that some 100 vaccines have been shown to prevent HIV in animals, yet none of them has worked in humans. Effects seen in animals may not be replicated in humans, and vice versa.
Many corticosteroids cause birth defects in animals, but not in humans. Conversely, thalidomide causes serious birth defects in humans, but not in some animals such as mice (it does, however, cause birth defects in rabbits). A 2004 paper concluded that much animal research is wasted because systematic reviews are not used and because of poor methodology. A 2006 review found multiple studies in which there were promising results for new drugs in animals, but human clinical studies did not show the same results. The researchers suggested that this might be due to researcher bias, or simply because animal models do not accurately reflect human biology. A lack of meta-reviews may be partially to blame. Poor methodology is an issue in many studies. A 2009 review noted that many animal experiments did not use blinded experiments, a key element of many scientific studies in which researchers are not told about the part of the study they are working on, in order to reduce bias. A 2021 paper found, in a sample of open-access Alzheimer's disease studies, that when the authors omit from the title that an experiment was performed in mice, news headlines follow suit, and the repercussion on Twitter is also higher.

Activism

There are various examples of activists utilizing Freedom of Information Act (FOIA) requests to obtain information about taxpayer funding of animal testing. For example, the White Coat Waste Project, a group of activists who hold that taxpayers should not have to pay $20 billion every year for experiments on animals, highlighted that the National Institute of Allergy and Infectious Diseases provided $400,000 in taxpayer money to fund experiments in which 28 beagles were infected with disease-causing parasites. The White Coat Waste Project found reports saying that dogs taking part in the experiments were "vocalizing in pain" after being injected with foreign substances. Following public outcry, People for the Ethical Treatment of Animals (PETA) made a call to action for all members of the National Institutes of Health to resign effective immediately, stating that there is a "need to find a new NIH director to replace the outgoing Francis Collins who will shut down research that violates the dignity of nonhuman animals."

Historical debate

As experimentation on animals increased, especially the practice of vivisection, so did criticism and controversy. In 1655, the advocate of Galenic physiology Edmund O'Meara said that "the miserable torture of vivisection places the body in an unnatural state". O'Meara and others argued that pain could affect animal physiology during vivisection, rendering results unreliable. There were also ethical objections, contending that the benefit to humans did not justify the harm to animals. Early objections to animal testing also came from another angle: many people believed animals were inferior to humans and so different that results from animals could not be applied to humans. On the other side of the debate, those in favor of animal testing held that experiments on animals were necessary to advance medical and biological knowledge. Claude Bernard, who is sometimes known as the "prince of vivisectors" and the father of physiology, and whose wife, Marie Françoise Martin, founded the first anti-vivisection society in France in 1883, famously wrote in 1865 that "the science of life is a superb and dazzlingly lighted hall which may be reached only by passing through a long and ghastly kitchen".
Arguing that "experiments on animals are entirely conclusive for the toxicology and hygiene of man effects of these substances are the same on man as on animals, save for differences in degree", Bernard established animal experimentation as part of the standard scientific method. In 1896, the physiologist and physician Dr. Walter B. Cannon said "The antivivisectionists are the second of the two types Theodore Roosevelt described when he said, 'Common sense without conscience may lead to crime, but conscience without common sense may lead to folly, which is the handmaiden of crime. These divisions between pro- and anti-animal testing groups first came to public attention during the Brown Dog affair in the early 1900s, when hundreds of medical students clashed with anti-vivisectionists and police over a memorial to a vivisected dog. In 1822, the first animal protection law was enacted in the British parliament, followed by the Cruelty to Animals Act (1876), the first law specifically aimed at regulating animal testing. The legislation was promoted by Charles Darwin, who wrote to Ray Lankester in March 1871: "You ask about my opinion on vivisection. I quite agree that it is justifiable for proper investigations on physiology; but not for mere damnable and detestable curiosity. It is a subject which makes me sick with horror, so I will not say another word about it, else I shall not sleep to-night." In response to the lobbying by anti-vivisectionists, several organizations were set up in Britain to defend animal research: The Physiological Society was formed in 1876 to give physiologists "mutual benefit and protection", the Association for the Advancement of Medicine by Research was formed in 1882 and focused on policy-making, and the Research Defence Society (now Understanding Animal Research) was formed in 1908 "to make known the facts as to experiments on animals in this country; the immense importance to the welfare of mankind of such experiments and the great saving of human life and health directly attributable to them". Opposition to the use of animals in medical research first arose in the United States during the 1860s, when Henry Bergh founded the American Society for the Prevention of Cruelty to Animals (ASPCA), with America's first specifically anti-vivisection organization being the American AntiVivisection Society (AAVS), founded in 1883. Antivivisectionists of the era generally believed the spread of mercy was the great cause of civilization, and vivisection was cruel. However, in the USA the antivivisectionists' efforts were defeated in every legislature, overwhelmed by the superior organization and influence of the medical community. Overall, this movement had little legislative success until the passing of the Laboratory Animal Welfare Act, in 1966. Real progress in thinking about animal rights build on the "theory of justice" (1971) by the philosopher John Rawls and work on ethics by philosopher Peter Singer. Alternatives Most scientists and governments state that animal testing should cause as little suffering to animals as possible, and that animal tests should only be performed where necessary.)The "Three Rs" are guiding principles for the use of animals in research in most countries. Whilst replacement of animals, i.e. alternatives to animal testing, is one of the principles, their scope is much broader. 
Although such principles have been welcomed as a step forward by some animal welfare groups, they have also been criticized as both outdated by current research and of little practical effect in improving animal welfare. The scientists and engineers at Harvard's Wyss Institute have created "organs-on-a-chip", including the "lung-on-a-chip" and "gut-on-a-chip". Researchers at cellasys in Germany developed a "skin-on-a-chip". These tiny devices contain human cells in a three-dimensional system that mimics human organs. The chips can be used instead of animals in in vitro disease research, drug testing, and toxicity testing. Researchers have also begun using 3-D bioprinters to create human tissues for in vitro testing. Another non-animal research method is in silico or computer simulation and mathematical modeling, which seeks to investigate and ultimately predict toxicity and drug effects in humans without using animals. This is done by investigating test compounds on a molecular level using recent advances in technological capabilities, with the ultimate goal of creating treatments unique to each patient. Microdosing is another alternative to the use of animals in experimentation. Microdosing is a process whereby volunteers are administered a small dose of a test compound, allowing researchers to investigate its pharmacological effects without harming the volunteers. Microdosing can replace the use of animals in pre-clinical drug screening and can reduce the number of animals used in safety and toxicity testing. Additional alternative methods include positron emission tomography (PET), which allows scanning of the human brain in vivo, and comparative epidemiological studies of disease risk factors among human populations. Simulators and computer programs have also replaced the use of animals in dissection, teaching and training exercises. Official bodies such as the European Centre for the Validation of Alternative Methods of the European Commission, the Interagency Coordinating Committee on the Validation of Alternative Methods in the US, ZEBET in Germany, and the Japanese Center for the Validation of Alternative Methods (among others) also promote and disseminate the 3Rs. These bodies are mainly driven by responding to regulatory requirements, such as supporting the cosmetics-testing ban in the EU by validating alternative methods. The European Partnership for Alternative Approaches to Animal Testing serves as a liaison between the European Commission and industries. The European Consensus Platform for Alternatives coordinates efforts amongst EU member states. Academic centers also investigate alternatives, including the Center for Alternatives to Animal Testing at Johns Hopkins University and the NC3Rs in the UK.

See also

Bateson's cube
Effect of psychoactive drugs on animals
Human subject research
Krogh's principle
Microphysiometry
The People's Petition
Preclinical imaging
Remote control animal
Sentinel species
Sham feeding
U.S. Meat Animal Research Center
Women and animal advocacy
Wuzhishan pig
In mathematics, the Cayley–Dickson construction, sometimes also known as the Cayley–Dickson process or the Cayley–Dickson procedure, produces a sequence of algebras over the field of real numbers, each with twice the dimension of the previous one. It is named after Arthur Cayley and Leonard Eugene Dickson. The algebras produced by this process are known as Cayley–Dickson algebras; examples include the complex numbers, quaternions, and octonions. These examples are useful composition algebras frequently applied in mathematical physics. The Cayley–Dickson construction defines a new algebra as a Cartesian product of an algebra with itself, with multiplication defined in a specific way (different from the componentwise multiplication) and an involution known as conjugation. The product of an element and its conjugate (or sometimes the square root of this product) is called the norm. The symmetries of the real field disappear as the Cayley–Dickson construction is repeatedly applied: first losing order, then commutativity of multiplication, then associativity of multiplication, and finally alternativity. More generally, the Cayley–Dickson construction takes any algebra with involution to another algebra with involution of twice the dimension. Hurwitz's theorem (composition algebras) states that the reals, complex numbers, quaternions, and octonions are the only (normed) division algebras over the real numbers.

Synopsis

The Cayley–Dickson construction is due to Leonard Dickson, who showed in 1919 how the octonions can be constructed as a two-dimensional algebra over the quaternions. In fact, starting with a field F, the construction yields a sequence of F-algebras of dimension 2^n. For n = 2 it is an associative algebra called a quaternion algebra, and for n = 3 it is an alternative algebra called an octonion algebra. The instances n = 1, 2 and 3 produce composition algebras, as shown below.

The case n = 1 starts with elements (a, b) in F × F and defines the conjugate (a, b)* to be (a*, −b), where a* = a in case n = 1; at subsequent stages the conjugate is determined by the same formula. The essence of the F-algebra lies in the definition of the product of two elements (a, b) and (c, d):

(a, b)(c, d) = (ac − d*b, da + bc*)

Proposition 1: For z = (a, b) and w = (c, d), the conjugate of the product is (zw)* = w*z*.

Proof: Expanding both sides,

(zw)* = ((ac − d*b)*, −(da + bc*)) = (c*a* − b*d, −da − bc*)

w*z* = (c*, −d)(a*, −b) = (c*a* − (−b)*(−d), (−b)c* + (−d)(a*)*) = (c*a* − b*d, −bc* − da)

and the two agree.

Proposition 2: If the F-algebra is associative and N(z) = zz*, then N(zw) = N(z)N(w).

Proof: N(zw) = (zw)(zw)* = (zw)(w*z*) = z(ww*)z* = zN(w)z* = zz*N(w) = N(z)N(w), using associativity and the fact that the norm N(w) is a scalar; written out componentwise, the cross terms cancel by the associative property.

Stages in construction of real algebras

Details of the construction of the classical real algebras are as follows:

Complex numbers as ordered pairs

The complex numbers can be written as ordered pairs (a, b) of real numbers a and b, with the addition operator being component-wise and with multiplication defined by

(a, b)(c, d) = (ac − bd, ad + bc)

A complex number whose second component is zero is associated with a real number: the complex number (a, 0) is associated with the real number a. The complex conjugate of (a, b) is given by

(a, b)* = (a*, −b) = (a, −b)

since a is a real number and is its own conjugate. The conjugate has the property that

(a, b)(a, b)* = (a^2 + b^2, 0)

which is a non-negative real number. In this way, conjugation defines a norm, making the complex numbers a normed vector space over the real numbers: the norm of a complex number z = (a, b) is

|z| = (a^2 + b^2)^(1/2)

Furthermore, for any non-zero complex number z, conjugation gives a multiplicative inverse,

z^(-1) = z* / |z|^2

As a complex number consists of two independent real numbers, the complex numbers form a two-dimensional vector space over the real numbers. Besides being of higher dimension, the complex numbers can be said to lack one algebraic property of the real numbers: a real number is its own conjugate.
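These ordered-pair rules are easy to check numerically. The following is a minimal Python sketch of exactly the operations just defined; the function names are ours, chosen only for this illustration.

from math import sqrt

def mul(x, y):
    # (a, b)(c, d) = (ac - bd, ad + bc)
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

def conj(x):
    # (a, b)* = (a, -b)
    return (x[0], -x[1])

def norm(x):
    # x * conj(x) = (a^2 + b^2, 0); the norm is its square root.
    return sqrt(mul(x, conj(x))[0])

def inverse(x):
    # z^(-1) = z* / |z|^2 for any non-zero z
    n2 = norm(x) ** 2
    c = conj(x)
    return (c[0] / n2, c[1] / n2)

i = (0.0, 1.0)
print(mul(i, i))                              # (-1.0, 0.0): i squared is -1
print(norm((3.0, 4.0)))                       # 5.0
print(mul((3.0, 4.0), inverse((3.0, 4.0))))   # (1.0, 0.0)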
Quaternions

The next step in the construction is to generalize the multiplication and conjugation operations. Form ordered pairs (a, b) of complex numbers a and b, with multiplication defined by

(a, b)(c, d) = (ac − d*b, da + bc*)

Slight variations on this formula are possible; the resulting constructions will yield structures identical up to the signs of bases. The order of the factors seems odd now, but will be important in the next step. Define the conjugate of (a, b) by

(a, b)* = (a*, −b)

These operators are direct extensions of their complex analogs: if a and b are taken from the real subset of complex numbers, the appearance of the conjugate in the formulas has no effect, so the operators are the same as those for the complex numbers. The product of a nonzero element with its conjugate is a non-negative real number:

(a, b)(a, b)* = (a, b)(a*, −b) = (aa* + b*b, 0)

As before, the conjugate thus yields a norm and an inverse for any such ordered pair. So in the sense we explained above, these pairs constitute an algebra something like the real numbers. They are the quaternions, named by Hamilton in 1843. As a quaternion consists of two independent complex numbers, the quaternions form a four-dimensional vector space over the real numbers. The multiplication of quaternions is not quite like the multiplication of real numbers, though; it is not commutative, that is, if p and q are quaternions, it is not always true that pq = qp.

Octonions

All the steps to create further algebras are the same from octonions onwards. This time, form ordered pairs (p, q) of quaternions p and q, with multiplication and conjugation defined exactly as for the quaternions:

(p, q)(r, s) = (pr − s*q, sp + qr*)

(p, q)* = (p*, −q)

Note, however, that because the quaternions are not commutative, the order of the factors in the multiplication formula becomes important: if the last factor in the multiplication formula were r*q rather than qr*, the formula for multiplication of an element by its conjugate would not yield a real number. For exactly the same reasons as before, the conjugation operator yields a norm and a multiplicative inverse of any nonzero element. This algebra was discovered by John T. Graves in 1843, and is called the octonions or the "Cayley numbers". As an octonion consists of two independent quaternions, the octonions form an eight-dimensional vector space over the real numbers. The multiplication of octonions is even stranger than that of quaternions; besides being non-commutative, it is not associative, that is, if p, q, and r are octonions, it is not always true that (pq)r = p(qr). Because of this non-associativity, octonions have no matrix representation.

Sedenions

The algebra immediately following the octonions is called the sedenions. It retains the algebraic property of power associativity, meaning that if s is a sedenion, s^n s^m = s^(n+m), but loses the property of being an alternative algebra and hence cannot be a composition algebra.

Trigintaduonions

The algebra immediately following the sedenions is the trigintaduonions, which form a 32-dimensional algebra over the real numbers and can be represented by the blackboard bold symbol 𝕋.

Further algebras

The Cayley–Dickson construction can be carried on ad infinitum, at each step producing a power-associative algebra whose dimension is double that of the algebra of the preceding step. These include the 64-dimensional sexagintaquatronions (or 64-nions), the 128-dimensional centumduodetrigintanions (or 128-nions), the 256-dimensional ducentiquinquagintasexions (or 256-nions), and so on. All the algebras generated in this way over a field are quadratic: that is, each element satisfies a quadratic equation with coefficients from the field.
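Because the doubling step is identical at every stage, the entire tower of algebras can be generated by a single recursive routine. The Python sketch below represents an element of the 2^n-dimensional algebra as a flat list of 2^n real coefficients and implements the product (a, b)(c, d) = (ac − d*b, da + bc*) and conjugate (a, b)* = (a*, −b) given above; it then exhibits the non-commutativity of the quaternions and the non-associativity of the octonions. The basis-index conventions are ours.

def add(x, y):
    return [s + t for s, t in zip(x, y)]

def sub(x, y):
    return [s - t for s, t in zip(x, y)]

def conj(x):
    if len(x) == 1:
        return x[:]                    # a real number is its own conjugate
    h = len(x) // 2
    return conj(x[:h]) + [-t for t in x[h:]]

def mul(x, y):
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b = x[:h], x[h:]
    c, d = y[:h], y[h:]
    # (a, b)(c, d) = (ac - d*b, da + bc*)
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

# Quaternions (dimension 4): i*j = k but j*i = -k.
i = [0, 1, 0, 0]
j = [0, 0, 1, 0]
print(mul(i, j), mul(j, i))   # [0, 0, 0, 1] and [0, 0, 0, -1]

# Octonions (dimension 8): (e1*e2)*e4 differs from e1*(e2*e4).
e1, e2, e4 = ([0] * 8 for _ in range(3))
e1[1], e2[2], e4[4] = 1, 1, 1
print(mul(mul(e1, e2), e4) == mul(e1, mul(e2, e4)))   # False

Keeping elements as flat coefficient lists makes the recursion uniform: each call simply splits a 2^n-vector into its two 2^(n−1)-dimensional halves.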
In 1954, R. D. Schafer proved that the algebras generated by the Cayley–Dickson process over a field satisfy the flexible identity. He also proved that any derivation algebra of a Cayley–Dickson algebra is isomorphic to the derivation algebra of the Cayley numbers, a 14-dimensional Lie algebra over the base field.

Modified Cayley–Dickson construction

The Cayley–Dickson construction, starting from the real numbers R, generates the composition algebras C (the complex numbers), H (the quaternions), and O (the octonions). There are also composition algebras whose norm is an isotropic quadratic form, which are obtained through a slight modification, by replacing the minus sign in the definition of the product of ordered pairs with a plus sign, as follows:

(a, b)(c, d) = (ac + d*b, da + bc*)

When this modified construction is applied to R, one obtains the split-complex numbers, which are ring-isomorphic to the direct product R × R; following that, one obtains the split-quaternions, an associative algebra isomorphic to that of the 2 × 2 real matrices; and the split-octonions, which are isomorphic to Zorn's vector-matrix algebra Zorn(R). Applying the original Cayley–Dickson construction to the split-complexes also results in the split-quaternions and then the split-octonions.

General Cayley–Dickson construction

Albert (1942) gave a slight generalization, defining the product and involution on B = A ⊕ A for A an algebra with involution (with (xy)* = y*x*) to be

(p, q)(r, s) = (pr − γs*q, sp + qr*)

(p, q)* = (p*, −q)

for γ an additive map that commutes with * and with left and right multiplication by any element. (Over the reals all choices of γ are equivalent to −1, 0 or 1.) In this construction, A is an algebra with involution, meaning:

A is an abelian group under +

A has a product that is left and right distributive over +

A has an involution *, with x** = x, (x + y)* = x* + y*, and (xy)* = y*x*.

The algebra B produced by the Cayley–Dickson construction is also an algebra with involution. B inherits properties from A unchanged as follows.

If A has an identity 1, then B has an identity (1, 0).

If A has the property that x + x* and xx* associate and commute with all elements, then so does B. This property implies that any element generates a commutative associative *-algebra, so in particular the algebra is power associative.

Other properties of A only induce weaker properties of B:

If A is commutative and has trivial involution, then B is commutative.

If A is commutative and associative then B is associative.

If A is associative and x + x* and xx* associate and commute with everything, then B is an alternative algebra.
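The parameter γ drops into the same recursive routine used above. In the Python sketch below (an illustration only, with our own function names and with the sign convention of the product formula just given), γ = 1 recovers the ordinary construction and γ = −1 the split algebras.

def conj(x):
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-t for t in x[h:]]

def mul_g(x, y, gamma=1):
    # (p, q)(r, s) = (pr - gamma * s*q, sp + qr*)
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    p, q = x[:h], x[h:]
    r, s = y[:h], y[h:]
    left = [u - gamma * v for u, v in
            zip(mul_g(p, r, gamma), mul_g(conj(s), q, gamma))]
    right = [u + v for u, v in
             zip(mul_g(s, p, gamma), mul_g(q, conj(r), gamma))]
    return left + right

# Split-complex numbers (gamma = -1): the second unit squares to +1,
# and the algebra has zero divisors, e.g. (1 + j)(1 - j) = 0.
j = [0, 1]
print(mul_g(j, j, gamma=-1))              # [1, 0]
print(mul_g([1, 1], [1, -1], gamma=-1))   # [0, 0]

The zero divisor illustrates why the split algebras have an isotropic norm: (1 + j)(1 + j)* vanishes even though 1 + j is non-zero.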
In physical chemistry and engineering, passivation is coating a material so that it becomes "passive", that is, less readily affected or corroded by the environment. Passivation involves creation of an outer layer of shield material that is applied as a microcoating, created by chemical reaction with the base material, or allowed to build by spontaneous oxidation in the air. As a technique, passivation is the use of a light coat of a protective material, such as metal oxide, to create a shield against corrosion. Passivation of silicon is used during fabrication of microelectronic devices. Undesired passivation of electrodes, called "fouling", increases circuit resistance and so interferes with some electrochemical applications such as electrocoagulation for wastewater treatment, amperometric chemical sensing, and electrochemical synthesis.

When exposed to air, many metals naturally form a hard, relatively inert surface layer, usually an oxide (termed the "native oxide layer") or a nitride, that serves as a passivation layer; that is, these metals are "self-protecting". In the case of silver, the dark tarnish is a passivation layer of silver sulfide formed from reaction with environmental hydrogen sulfide. Aluminium similarly forms a stable protective oxide layer, which is why it does not "rust". (In contrast, some base metals, notably iron, oxidize readily to form a rough, porous coating of rust that adheres loosely, is of higher volume than the original displaced metal, and sloughs off readily, all of which permit and promote further oxidation.) The passivation layer of oxide markedly slows further oxidation and corrosion in room-temperature air for aluminium, beryllium, chromium, zinc, titanium, and silicon (a metalloid). The inert surface layer formed by reaction with air has a thickness of about 1.5 nm for silicon, 1–10 nm for beryllium, and 1 nm initially for titanium, growing to 25 nm after several years. Similarly, for aluminium, it grows to about 5 nm after several years.

In the context of semiconductor device fabrication, such as silicon MOSFET transistors and solar cells, surface passivation refers not only to reducing the chemical reactivity of the surface but also to eliminating the dangling bonds and other defects that form electronic surface states, which impair the performance of the devices. Surface passivation of silicon usually consists of high-temperature thermal oxidation.

Mechanisms

There has been much interest in determining the mechanisms that govern the increase of thickness of the oxide layer over time. Some of the important factors are the volume of oxide relative to the volume of the parent metal, the mechanism of oxygen diffusion through the metal oxide to the parent metal, and the relative chemical potential of the oxide. Boundaries between micrograins, if the oxide layer is crystalline, form an important pathway for oxygen to reach the unoxidized metal below. For this reason, vitreous oxide coatings, which lack grain boundaries, can retard oxidation. The conditions necessary, but not sufficient, for passivation are recorded in Pourbaix diagrams. Some corrosion inhibitors help the formation of a passivation layer on the surface of the metals to which they are applied. Some compounds, dissolved in solutions (chromates, molybdates), form non-reactive, low-solubility films on metal surfaces.
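For the thermal oxidation of silicon mentioned above, the growth of oxide thickness over time has a standard quantitative description, the Deal–Grove model, in which the thickness x satisfies x^2 + Ax = B(t + τ). The Python sketch below solves this for x; the coefficients are placeholders of roughly the textbook order of magnitude for dry oxidation near 1000 °C, and real values depend on temperature, crystal orientation, and the oxidizing ambient.

from math import sqrt

def oxide_thickness_um(t_hours, A=0.165, B=0.0117, tau=0.37):
    # Solve x^2 + A*x = B*(t + tau) for the oxide thickness x (micrometres).
    return (A / 2.0) * (sqrt(1.0 + 4.0 * B * (t_hours + tau) / A ** 2) - 1.0)

# Growth is roughly linear at first (reaction-limited) and approaches
# sqrt(B * t) at long times (diffusion-limited).
for t in (1, 4, 16, 64):
    print(t, "h:", round(oxide_thickness_um(t), 3), "um")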
It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal, leading to the buildup of an electronic barrier opposing electron flow and an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism ("transpassivation").

History

Discovery and etymology

The fact that iron does not react with concentrated nitric acid was discovered by Mikhail Lomonosov in 1738 and rediscovered by James Keir in 1790, who also noted that iron pre-immersed in this way no longer reduces silver from silver nitrate solution. In the 1830s, Michael Faraday and Christian Friedrich Schönbein studied the issue systematically and demonstrated that when a piece of iron is placed in dilute nitric acid, it will dissolve and produce hydrogen, but if the iron is placed in concentrated nitric acid and then returned to the dilute nitric acid, little or no reaction will take place. In 1836, Schönbein named the first state the active condition and the second the passive condition, while Faraday proposed the modern explanation in terms of the oxide film described above (Schönbein disagreed with it), which was experimentally proven by Ulick Richardson Evans only in 1927. Between 1955 and 1957, Carl Frosch and Lincoln Derrick discovered surface passivation of silicon wafers by silicon dioxide, using passivation to build the first silicon dioxide field effect transistors.

Specific materials

Aluminium

Aluminium naturally forms a thin surface layer of aluminium oxide on contact with oxygen in the atmosphere through a process called oxidation, which creates a physical barrier to corrosion or further oxidation in many environments. Some aluminium alloys, however, do not form the oxide layer well, and thus are not protected against corrosion. There are methods to enhance the formation of the oxide layer for certain alloys. For example, prior to storing hydrogen peroxide in an aluminium container, the container can be passivated by rinsing it with a dilute solution of nitric acid and peroxide alternating with deionized water. The nitric acid and peroxide mixture oxidizes and dissolves any impurities on the inner surface of the container, and the deionized water rinses away the acid and oxidized impurities. Generally, there are two main ways to passivate aluminium alloys (not counting plating, painting, and other barrier coatings): chromate conversion coating and anodizing. Alclading, which metallurgically bonds thin layers of pure aluminium or alloy to a different base aluminium alloy, is not strictly passivation of the base alloy; however, the aluminium layer clad on is designed to develop the oxide layer spontaneously and thus protect the base alloy. Chromate conversion coating converts the surface aluminium to a thin aluminium chromate coating. Aluminium chromate conversion coatings are amorphous in structure, with a gel-like composition hydrated with water. Chromate conversion is a common way of passivating not only aluminium, but also zinc, cadmium, copper, silver, magnesium, and tin alloys.
Anodizing is an electrolytic process that forms a thicker oxide layer. The anodic coating consists of hydrated aluminium oxide and is considered resistant to corrosion and abrasion. This finish is more robust than the other processes and also provides electrical insulation, which the other two processes may not.

Carbon
In carbon quantum dot (CQD) technology, CQDs are small carbon nanoparticles (less than 10 nm in size) with some form of surface passivation.

Ferrous materials
Ferrous materials, including steel, may be somewhat protected by promoting oxidation ("rust") and then converting the oxidation to a metal phosphate by using phosphoric acid, and further protected by surface coating. As the uncoated surface is water-soluble, a preferred method is to form manganese or zinc compounds by a process commonly known as parkerizing or phosphate conversion. Older, less effective but chemically similar electrochemical conversion coatings included black oxidizing, historically known as bluing or browning. Ordinary steel forms a passivating layer in alkali environments, as reinforcing bar does in concrete.

Stainless steel
Stainless steels are corrosion-resistant, but they are not completely impervious to rusting. One common mode of corrosion in corrosion-resistant steels occurs when small spots on the surface begin to rust because grain boundaries or embedded bits of foreign matter (such as grinding swarf) allow water molecules to oxidize some of the iron in those spots despite the alloying chromium. This is called rouging. Some grades of stainless steel are especially resistant to rouging; parts made from them may therefore forgo any passivation step, depending on engineering decisions.

Common among all of the different specifications and types are the following steps: prior to passivation, the object must be cleaned of any contaminants and generally must undergo a validating test to prove that the surface is "clean". The object is then placed in an acidic passivating bath that meets the temperature and chemical requirements of the method and type specified between customer and vendor. While nitric acid is commonly used as a passivating acid for stainless steel, citric acid is gaining in popularity as it is far less dangerous to handle, less toxic, and biodegradable, making disposal less of a challenge. Passivating temperatures can range from ambient to elevated values, while minimum passivation times are usually 20 to 30 minutes. After passivation, the parts are neutralized using a bath of aqueous sodium hydroxide, then rinsed with clean water and dried. The passive surface is validated using humidity, elevated temperature, a rusting agent (salt spray), or some combination of the three. The passivation process removes exogenous iron, creates or restores a passive oxide layer that prevents further oxidation (rust), and cleans the parts of dirt, scale, or other welding-generated compounds (e.g. oxides).

Passivation processes are generally controlled by industry standards, the most prevalent among them today being ASTM A 967 and AMS 2700. These industry standards generally list several passivation processes that can be used, with the choice of specific method left to the customer and vendor. The "method" is either a nitric acid-based passivating bath or a citric acid-based bath; these acids remove surface iron and rust while sparing the chromium. The various "types" listed under each method refer to differences in acid bath temperature and concentration.
Sodium dichromate is often required as an additive to oxidise the chromium in certain types of nitric-based acid baths; however, this chemical is highly toxic. With citric acid, simply rinsing and drying the part and allowing the air to oxidise it, or in some cases the application of other chemicals, is used to perform the passivation of the surface.

It is not uncommon for some aerospace manufacturers to have additional guidelines and regulations when passivating their products that exceed the national standard. Often, these requirements will be cascaded down using Nadcap or some other accreditation system. Various testing methods are available to determine the passivation (or passive state) of stainless steel. The most common method for validating the passivity of a part is some combination of high humidity and heat for a period of time, intended to induce rusting. Electrochemical testers can also be used to commercially verify passivation.

Titanium
The surface of titanium and of titanium-rich alloys oxidizes immediately upon exposure to air to form a thin passivation layer of titanium oxide, mostly titanium dioxide. This layer makes it resistant to further corrosion, aside from gradual growth of the oxide layer, which thickens to about 25 nm after several years in air. This protective layer makes it suitable for use even in corrosive environments such as sea water. Titanium can be anodized to produce a thicker passivation layer. As with many other metals, this layer causes thin-film interference, which makes the metal surface appear colored, with the thickness of the passivation layer directly affecting the color produced.

Nickel
Nickel can be used for handling elemental fluorine, owing to the formation of a passivation layer of nickel fluoride. This fact is useful in water treatment and sewage treatment applications.

Silicon
In the area of microelectronics and photovoltaic solar cells, surface passivation is usually implemented by thermal oxidation at about 1000 °C to form a coating of silicon dioxide. Surface passivation is critical to solar cell efficiency; the effect of passivation on the efficiency of solar cells ranges from 3–7%. The surface resistivity is high, greater than 100 Ω·cm.

Perovskite
The easiest and most widely studied method to improve perovskite solar cells is passivation. Dangling bonds on the surface of perovskite films usually give rise to deep energy level defects in solar cells. Usually, small molecules or polymers are doped in to interact with the dangling bonds and thus reduce the defect states. The process is loosely analogous to Tetris, in that the aim is always a full layer: a passivating small molecule acts like a piece that can be inserted wherever there is an empty space, so that a complete layer is obtained. These molecules will generally have lone electron pairs or pi-electrons, so they can bind to the defective states on the surface of the cell film and thus achieve passivation of the material. Therefore, carbonyl-containing, nitrogen-containing, and sulfur-containing molecules are considered, and recently it has been shown that pi-electrons can also play a role. In addition, passivation not only improves the photoelectric conversion efficiency of perovskite cells, but also contributes to the improvement of device stability. For example, adding a passivation layer a few nanometers thick can effectively achieve passivation while also stopping water vapor intrusion.
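The dependence of anodized titanium's apparent color on oxide thickness, described under Titanium above, can be illustrated with a deliberately simplified interference calculation. The sketch below assumes normal incidence, a refractive index of about 2.5 for the oxide (an assumed round value), and the naive constructive-interference condition 2*n*d = m*lambda, ignoring the phase shifts at the air–oxide and oxide–metal interfaces. It is a qualitative illustration only, not a model of real anodized colors.

def reflected_maxima_nm(thickness_nm, n_oxide=2.5, visible=(380.0, 750.0)):
    """Wavelengths (nm) constructively reflected by an oxide film.

    Uses the simplified condition 2 * n * d = m * lambda at normal
    incidence; interface phase shifts are ignored (illustrative only).
    """
    lo, hi = visible
    path = 2.0 * n_oxide * thickness_nm  # optical path difference in the film
    maxima = []
    m = 1
    while path / m >= lo:
        lam = path / m
        if lam <= hi:
            maxima.append(round(lam, 1))
        m += 1
    return maxima

for d in (25, 100, 150):  # oxide thicknesses in nm
    print(f"d = {d:3d} nm -> visible reflectance maxima at {reflected_maxima_nm(d)} nm")

Even this crude version shows the qualitative behaviour: the thin native oxide (~1–25 nm) produces no visible interference maxima and the metal looks plain, while thicker anodized films start to reflect particular visible wavelengths preferentially, so the perceived color shifts with thickness.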
See also
Cold welding
Deal–Grove model
Pilling–Bedworth ratio

References

Further reading
Chromate conversion coating (chemical film) per MIL-DTL-5541F for aluminium and aluminium alloy parts
A standard overview on black oxide coatings is provided in MIL-HDBK-205, Phosphate & Black Oxide Coating of Ferrous Metals. Many of the specifics of Black Oxide coatings may be found in MIL-DTL-13924 (formerly MIL-C-13924). This Mil-Spec document additionally identifies various classes of Black Oxide coatings, for use in a variety of purposes for protecting ferrous metals against rust.
Passivisation: Debate over Paintability, http://www.coilworld.com/5-6_12/rlw3.htm

Corrosion prevention Surface finishing German inventions Integrated circuits MOSFETs Semiconductor device fabrication Swiss inventions
Passivation (chemistry)
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,992
[ "Corrosion prevention", "Computer engineering", "Microtechnology", "Corrosion", "Semiconductor device fabrication", "Integrated circuits" ]
175,638
https://en.wikipedia.org/wiki/Binary%20phase
In materials chemistry, a binary phase or binary compound is a chemical compound containing two different elements. Some binary phase compounds are molecular, e.g. carbon tetrachloride (CCl4). More typically, binary phase refers to extended solids. Famous examples include zinc sulfide, which contains zinc and sulfur, and tungsten carbide, which contains tungsten and carbon. Phases with higher degrees of complexity feature more elements, e.g. three elements in ternary phases and four in quaternary phases; the added complexity arises from the interactions of these elements at different conditions.

References

Chemical compounds
Binary phase
[ "Physics", "Chemistry" ]
130
[ "Chemical compounds", "Molecules", "Matter" ]
175,641
https://en.wikipedia.org/wiki/Chelation
Chelation is a type of bonding of ions and molecules to metal ions. It involves the formation or presence of two or more separate coordinate bonds between a polydentate (multiple bonded) ligand and a single central metal atom. These ligands are called chelants, chelators, chelating agents, or sequestering agents. They are usually organic compounds, but this is not a necessity.

The word chelation is derived from Greek χηλή, chēlē, meaning "claw"; the ligands lie around the central atom like the claws of a crab. The term chelate was first applied in 1920 by Sir Gilbert T. Morgan and H. D. K. Drew, who stated: "The adjective chelate, derived from the great claw or chele (Greek) of the crab or other crustaceans, is suggested for the caliperlike groups which function as two associating units and fasten to the central atom so as to produce heterocyclic rings."

Chelation is useful in applications such as providing nutritional supplements, in chelation therapy to remove toxic metals from the body, as contrast agents in MRI scanning, in manufacturing using homogeneous catalysts, in chemical water treatment to assist in the removal of metals, and in fertilizers.

Chelate effect
The chelate effect is the greater affinity of chelating ligands for a metal ion than that of similar nonchelating (monodentate) ligands for the same metal. The thermodynamic principles underpinning the chelate effect are illustrated by the contrasting affinities of copper(II) for ethylenediamine (en) vs. methylamine. In the first reaction, the ethylenediamine forms a chelate complex with the copper ion; chelation results in the formation of a five-membered CuC2N2 ring. In the second reaction, the bidentate ligand is replaced by two monodentate methylamine ligands of approximately the same donor power, indicating that the Cu–N bonds are approximately the same in the two reactions.

The thermodynamic approach to describing the chelate effect considers the equilibrium constant for the reaction: the larger the equilibrium constant, the higher the concentration of the complex. Electrical charges have been omitted for simplicity of notation. The square brackets indicate concentration, and the subscripts to the stability constants, β, indicate the stoichiometry of the complex. When the analytical concentration of methylamine is twice that of ethylenediamine and the concentration of copper is the same in both reactions, the concentration [Cu(en)] is much higher than the concentration [Cu(MeNH2)2], because the stability constant of the chelate complex is much larger.

An equilibrium constant, K, is related to the standard Gibbs free energy change by

ΔG° = −RT ln K = ΔH° − TΔS°

where R is the gas constant and T is the temperature in kelvins. ΔH° is the standard enthalpy change of the reaction and ΔS° is the standard entropy change. Since the enthalpy should be approximately the same for the two reactions, the difference between the two stability constants is due to the effects of entropy. In the first equation there are two particles on the left and one on the right, whereas in the second there are three particles on the left and one on the right. This difference means that less entropy of disorder is lost when the chelate complex is formed with the bidentate ligand than when the complex with monodentate ligands is formed. This is one of the factors contributing to the entropy difference. Other factors include solvation changes and ring formation.
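The relation ΔG° = −RT ln K = ΔH° − TΔS° translates into a short numeric sketch in Python. The log β and ΔH° values used below are illustrative placeholders of the right order of magnitude, not the measured literature values; the point is only to show how a larger stability constant translates into a more negative ΔG° and how the entropy term is recovered from ΔH° and ΔG°.

import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.15  # temperature, K

def thermodynamics(log_beta, dH_kJ):
    """Return (dG, dH, -T*dS) in kJ/mol from log10(beta) and dH."""
    dG = -R * T * log_beta * math.log(10.0) / 1000.0  # kJ/mol
    TdS = dH_kJ - dG                                  # since dG = dH - T*dS
    return dG, dH_kJ, -TdS

# Illustrative (assumed) values: a chelate complex with a larger log beta
# than a comparable bis(monodentate) complex, at similar enthalpy.
for name, log_beta, dH in [("Cu(en) (chelate)", 10.6, -54.0),
                           ("Cu(MeNH2)2 (monodentate)", 7.3, -57.0)]:
    dG, dH_out, mTdS = thermodynamics(log_beta, dH)
    print(f"{name}: dG = {dG:6.1f}  dH = {dH_out:6.1f}  -T*dS = {mTdS:6.1f} kJ/mol")

With these placeholder numbers, the chelate's entropy term comes out slightly favorable while the monodentate complex's is distinctly unfavorable, which is the pattern the text goes on to describe.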
Some experimental data illustrate the effect; these data confirm that the enthalpy changes are approximately equal for the two reactions and that the main reason for the greater stability of the chelate complex is the entropy term, which is much less unfavorable. In general it is difficult to account precisely for thermodynamic values in terms of changes in solution at the molecular level, but it is clear that the chelate effect is predominantly an effect of entropy. Other explanations, including that of Schwarzenbach, are discussed in Greenwood and Earnshaw (loc. cit.).

In nature
Numerous biomolecules exhibit the ability to dissolve certain metal cations. Thus, proteins, polysaccharides, and polynucleic acids are excellent polydentate ligands for many metal ions. Organic compounds such as the amino acids glutamic acid and histidine, organic diacids such as malate, and polypeptides such as phytochelatin are also typical chelators. In addition to these adventitious chelators, several biomolecules are specifically produced to bind certain metals (see next section).

Virtually all metalloenzymes feature metals that are chelated, usually to peptides or cofactors and prosthetic groups. Such chelating agents include the porphyrin rings in hemoglobin and chlorophyll. Many microbial species produce water-soluble pigments that serve as chelating agents, termed siderophores. For example, species of Pseudomonas are known to secrete pyochelin and pyoverdine, which bind iron. Enterobactin, produced by E. coli, is the strongest chelating agent known. Marine mussels use metal chelation, especially Fe3+ chelation with the Dopa residues in mussel foot protein-1, to improve the strength of the threads that they use to secure themselves to surfaces.

In earth science, chemical weathering is attributed to organic chelating agents (e.g., peptides and sugars) that extract metal ions from minerals and rocks. Most metal complexes in the environment and in nature are bound in some form of chelate ring (e.g., with a humic acid or a protein). Thus, metal chelates are relevant to the mobilization of metals in the soil and the uptake and accumulation of metals into plants and microorganisms. Selective chelation of heavy metals is relevant to bioremediation (e.g., removal of 137Cs from radioactive waste).

Applications

Animal feed additives
Synthetic chelates such as ethylenediaminetetraacetic acid (EDTA) proved too stable and not nutritionally viable. If the mineral was taken from the EDTA ligand, the ligand could not be used by the body and would be expelled. During the expulsion process, the EDTA ligand randomly chelated and stripped other minerals from the body. According to the Association of American Feed Control Officials (AAFCO), a metal–amino acid chelate is defined as the product resulting from the reaction of metal ions from a soluble metal salt with amino acids, with a mole ratio in the range of 1–3 (preferably 2) moles of amino acids for one mole of metal. The average weight of the hydrolyzed amino acids must be approximately 150 and the resulting molecular weight of the chelate must not exceed 800 Da. Since the early development of these compounds, much more research has been conducted and has been applied to human nutrition products in a similar manner to the animal nutrition experiments that pioneered the technology. Ferrous bis-glycinate is an example of one of these compounds that has been developed for human nutrition.
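The AAFCO definition above amounts to a few numeric constraints, which can be expressed as a small validation sketch. The function below is illustrative only; the thresholds come directly from the definition quoted above, and the tolerance used to interpret "approximately 150" is an assumption of this sketch.

def is_aafco_metal_amino_acid_chelate(moles_amino_acid, moles_metal,
                                      avg_amino_acid_weight, chelate_mw_da,
                                      weight_tolerance=0.2):
    """Check the numeric constraints in the AAFCO definition quoted above.

    - 1 to 3 moles of amino acids per mole of metal (2 preferred)
    - average hydrolyzed amino acid weight approximately 150
      (here: within a +/-20% tolerance, an assumed reading of "approximately")
    - resulting chelate molecular weight not exceeding 800 Da
    """
    ratio = moles_amino_acid / moles_metal
    avg_ok = abs(avg_amino_acid_weight - 150.0) <= 150.0 * weight_tolerance
    return 1.0 <= ratio <= 3.0 and avg_ok and chelate_mw_da <= 800.0

# Hypothetical chelate: 2 moles of amino acids (average weight 150) per
# mole of a metal of mass ~56, giving a chelate of roughly 356 Da.
print(is_aafco_metal_amino_acid_chelate(2, 1, 150.0, 356.0))  # True
print(is_aafco_metal_amino_acid_chelate(4, 1, 150.0, 656.0))  # False: ratio > 3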
Dental use
Dentin adhesives were first designed and produced in the 1950s, based on a co-monomer chelate with calcium on the surface of the tooth; these generated very weak, water-resistant chemical bonding (2–3 MPa).

Chelation therapy
Chelation therapy is an antidote for poisoning by mercury, arsenic, and lead. Chelating agents convert these metal ions into a chemically and biochemically inert form that can be excreted. Chelation using sodium calcium edetate has been approved by the U.S. Food and Drug Administration (FDA) for serious cases of lead poisoning. It is not approved for treating "heavy metal toxicity". Although beneficial in cases of serious lead poisoning, use of disodium EDTA (edetate disodium) instead of calcium disodium EDTA has resulted in fatalities due to hypocalcemia. Disodium EDTA is not approved by the FDA for any use, and all FDA-approved chelation therapy products require a prescription.

Contrast agents
Chelate complexes of gadolinium are often used as contrast agents in MRI scans, although iron particle and manganese chelate complexes have also been explored. Bifunctional chelate complexes of zirconium, gallium, fluorine, copper, yttrium, bromine, or iodine are often used for conjugation to monoclonal antibodies for use in antibody-based PET imaging. These chelate complexes often employ hexadentate ligands such as desferrioxamine B (DFO), according to Meijs et al., and the gadolinium complexes often employ octadentate ligands such as DTPA, according to Desreux et al. Auranofin, a chelate complex of gold, is used in the treatment of rheumatoid arthritis, and penicillamine, which forms chelate complexes of copper, is used in the treatment of Wilson's disease and cystinuria, as well as refractory rheumatoid arthritis.

Nutritional advantages and issues
Chelation in the intestinal tract is a cause of numerous interactions between drugs and metal ions (also known as "minerals" in nutrition). As examples, antibiotic drugs of the tetracycline and quinolone families are chelators of Fe2+, Ca2+, and Mg2+ ions. EDTA, which binds to calcium, is used to alleviate the hypercalcemia that often results from band keratopathy. The calcium may then be removed from the cornea, allowing for some increase in clarity of vision for the patient.

Homogeneous catalysts are often chelated complexes. A representative example is the use of BINAP (a bidentate phosphine) in Noyori asymmetric hydrogenation and asymmetric isomerization. The latter has the practical use of manufacture of synthetic (–)-menthol.

Cleaning and water softening
A chelating agent is the main component of some rust removal formulations. Citric acid is used to soften water in soaps and laundry detergents. A common synthetic chelator is EDTA. Phosphonates are also well-known chelating agents. Chelators are used in water treatment programs and specifically in steam engineering. Although the treatment is often referred to as "softening", chelation has little effect on the water's mineral content other than to make it soluble and lower the water's pH level.

Fertilizers
Metal chelate compounds are common components of fertilizers to provide micronutrients. These micronutrients (manganese, iron, zinc, copper) are required for the health of the plants. Most fertilizers contain phosphate salts that, in the absence of chelating agents, typically convert these metal ions into insoluble solids that are of no nutritional value to the plants.
EDTA is the typical chelating agent that keeps these metal ions in a soluble form.

Economic situation
Because of their wide range of uses, demand for chelating agents grew about 4% annually during 2009–2014, and the trend is likely to continue. Aminopolycarboxylic acid chelators are the most widely consumed chelating agents; however, the share of greener alternative chelators in this category continues to grow. The consumption of traditional aminopolycarboxylate chelators, in particular EDTA (ethylenediaminetetraacetic acid) and NTA (nitrilotriacetic acid), is declining (−6% annually) because of persisting concerns over their toxicity and negative environmental impact. In 2013, these greener alternative chelants represented approximately 15% of the total aminopolycarboxylic acid demand. This was expected to rise to around 21% by 2018, replacing aminopolycarboxylic and aminophosphonic acids used in cleaning applications. Examples of greener alternative chelating agents include ethylenediamine disuccinic acid (EDDS), polyaspartic acid (PASA), methylglycinediacetic acid (MGDA), glutamic diacetic acid (L-GLDA), citrate, gluconic acid, amino acids, and plant extracts.

Reversal
Dechelation (or de-chelation) is the reverse process of chelation, in which the chelating agent is recovered by acidifying the solution with a mineral acid to form a precipitate.

See also

References

External links

Coordination chemistry Equilibrium chemistry
Chelation
[ "Chemistry" ]
2,656
[ "Equilibrium chemistry", "Chelating agents", "Coordination chemistry", "Process chemicals" ]
175,716
https://en.wikipedia.org/wiki/Electrical%20element
In electrical engineering, electrical elements are conceptual abstractions representing idealized electrical components, such as resistors, capacitors, and inductors, used in the analysis of electrical networks. All electrical networks can be analyzed as multiple electrical elements interconnected by wires. Where the elements roughly correspond to real components, the representation can be in the form of a schematic diagram or circuit diagram. This is called a lumped-element circuit model. In other cases, infinitesimal elements are used to model the network in a distributed-element model.

These ideal electrical elements represent actual, physical electrical or electronic components, but they do not exist physically and are assumed to have ideal properties. In contrast, actual electrical components have less than ideal properties, a degree of uncertainty in their values, and some degree of nonlinearity. To model the nonideal behavior of a real circuit component may require a combination of multiple ideal electrical elements to approximate its function. For example, an inductor circuit element is assumed to have inductance but no resistance or capacitance, while a real inductor, a coil of wire, has some resistance in addition to its inductance. This may be modeled by an ideal inductance element in series with a resistance.

Circuit analysis using electric elements is useful for understanding practical networks of electrical components. Analyzing how a network is affected by its individual elements makes it possible to estimate how a real network will behave.

Types
Circuit elements can be classified into different categories. One is how many terminals they have to connect them to other components:
One-port elements represent the simplest components, with only two terminals to connect to. Examples are resistances, capacitances, inductances, and diodes.
Two-port elements are the most common multiport elements, with four terminals consisting of two ports.
Multiport elements: these have more than two terminals. They connect to the external circuit through multiple pairs of terminals called ports. For example, a transformer with three separate windings has six terminals and could be idealized as a three-port element; the ends of each winding are connected to a pair of terminals representing a port.

Elements can also be divided into active and passive:
Passive elements: these elements do not have a source of energy; examples are diodes, resistances, capacitances, and inductances.
Active elements or sources: these are elements which can source electrical power. They can be used to represent ideal batteries and power supplies; examples are voltage sources and current sources.
Dependent sources: these are two-port elements with a voltage or current source proportional to the voltage or current at a second pair of terminals. These are used in the modelling of amplifying components such as transistors, vacuum tubes, and op-amps.

Another distinction is between linear and nonlinear:
Linear elements: these are elements in which the constituent relation, the relation between voltage and current, is a linear function. They obey the superposition principle. Examples of linear elements are resistances, capacitances, inductances, and linear dependent sources. Circuits with only linear elements, linear circuits, do not cause intermodulation distortion and can be easily analysed with powerful mathematical techniques such as the Laplace transform.
Nonlinear elements: these are elements in which the relation between voltage and current is a nonlinear function. An example is a diode, where the current is an exponential function of the voltage. Circuits with nonlinear elements are harder to analyse and design, often requiring circuit simulation computer programs such as SPICE.
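As an illustration of how a network of linear elements can be analysed, the following Python sketch solves a small, hypothetical resistive circuit by nodal analysis (Kirchhoff's current law at each node, written as a linear system G·v = i) and checks the superposition property mentioned above. The component values are made up for the example.

import numpy as np

# Hypothetical circuit: a 1 A current source injecting into node 1;
# R1 = 2 ohm from node 1 to node 2, R2 = 4 ohm from node 1 to ground,
# R3 = 4 ohm from node 2 to ground.
R1, R2, R3 = 2.0, 4.0, 4.0
G = np.array([[1/R1 + 1/R2, -1/R1],
              [-1/R1,        1/R1 + 1/R3]])  # nodal conductance matrix
i = np.array([1.0, 0.0])                     # injected node currents (A)

v = np.linalg.solve(G, i)                    # node voltages (V)
print("node voltages:", v)                   # [2.4, 1.6]

# Linearity check: doubling the source doubles every node voltage.
print(np.allclose(np.linalg.solve(G, 2 * i), 2 * v))  # True

The same conductance-matrix formulation is what general-purpose simulators such as SPICE build internally for the linear part of a circuit; nonlinear elements then require iterative solution around it.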
One-port elements
Only nine types of element (memristor not included) are required to model any electrical component or circuit: two sources, three passive elements, and four abstract active elements. Each element is defined by a relation between the state variables of the network: current, i; voltage, v; charge, q; and magnetic flux, φ.

Two sources:
Current source, measured in amperes – produces a current in a conductor. Affects charge according to the relation i = dq/dt.
Voltage source, measured in volts – produces a potential difference between two points. Affects magnetic flux according to the relation v = dφ/dt.

φ in this relationship does not necessarily represent anything physically meaningful. In the case of the current generator, q, the time integral of current, represents the quantity of electric charge physically delivered by the generator. Here φ is the time integral of voltage, but whether or not that represents a physical quantity depends on the nature of the voltage source. For a voltage generated by magnetic induction, it is meaningful, but for an electrochemical source, or a voltage that is the output of another circuit, no physical meaning is attached to it. Both these elements are necessarily non-linear elements; see Non-linear elements below.

Three passive elements:
Resistance R, measured in ohms – produces a voltage proportional to the current flowing through the element. Relates voltage and current according to the relation v = iR.
Capacitance C, measured in farads – produces a current proportional to the rate of change of voltage across the element. Relates charge and voltage according to the relation q = Cv.
Inductance L, measured in henries – produces a magnetic flux proportional to the rate of change of current through the element. Relates flux and current according to the relation φ = Li.

Four abstract active elements:
Voltage-controlled voltage source (VCVS): generates a voltage based on another voltage with respect to a specified gain (has infinite input impedance and zero output impedance).
Voltage-controlled current source (VCCS): generates a current based on a voltage elsewhere in the circuit, with respect to a specified gain; used to model field-effect transistors and vacuum tubes (has infinite input impedance and infinite output impedance). The gain is characterised by a transfer conductance, which has units of siemens.
Current-controlled voltage source (CCVS): generates a voltage based on an input current elsewhere in the circuit, with respect to a specified gain (has zero input impedance and zero output impedance). Used to model trancitors. The gain is characterised by a transfer impedance, which has units of ohms.
Current-controlled current source (CCCS): generates a current based on an input current and a specified gain. Used to model bipolar junction transistors (has zero input impedance and infinite output impedance).

These four elements are examples of two-port elements.

Non-linear elements
In reality, all circuit components are non-linear and can only be approximated as linear over a certain range. To describe the passive elements more precisely, their constitutive relation is used instead of simple proportionality. Six constitutive relations can be formed from any two of the circuit variables.
From these pairings, a theoretical fourth passive element is predicted, since only five elements in total (not including the various dependent sources) are found in linear network analysis. This additional element is called the memristor. It only has any meaning as a time-dependent non-linear element; as a time-independent linear element it reduces to a regular resistor. Hence, it is not included in linear time-invariant (LTI) circuit models. The constitutive relations of the passive elements are given by:

Resistance: constitutive relation defined as f(v, i) = 0.
Capacitance: constitutive relation defined as f(v, q) = 0.
Inductance: constitutive relation defined as f(φ, i) = 0.
Memristance: constitutive relation defined as f(φ, q) = 0.

where f is an arbitrary function of two variables. In some special cases, the constitutive relation simplifies to a function of one variable. This is the case for all linear elements, but also, for example, an ideal diode, which in circuit theory terms is a non-linear resistor, has a constitutive relation of the form i = f(v). Both independent voltage and independent current sources can be considered non-linear resistors under this definition.

The fourth passive element, the memristor, was proposed by Leon Chua in a 1971 paper, but a physical component demonstrating memristance was not created until thirty-seven years later. It was reported on April 30, 2008, that a working memristor had been developed by a team at HP Labs led by scientist R. Stanley Williams. With the advent of the memristor, each pairing of the four variables can now be related.

Two special non-linear elements are sometimes used in analysis but are not the ideal counterpart of any real component:
Nullator: defined as v = i = 0.
Norator: defined as an element that places no restrictions on voltage and current whatsoever.
These are sometimes used in models of components with more than two terminals: transistors, for instance.

Two-port elements
All the above are two-terminal, or one-port, elements except the dependent sources. Two lossless, passive, linear two-port elements are typically introduced into network analysis: the transformer, for which v1 = n·v2 and i2 = n·i1, and the gyrator, for which v1 = −r·i2 and v2 = r·i1 (sign conventions vary between authors). The transformer maps a voltage at one port to a voltage at the other in a ratio of n; the current between the same two ports is mapped by 1/n. The gyrator, on the other hand, maps a voltage at one port to a current at the other, and likewise currents are mapped to voltages. The quantity r in its defining relations has units of resistance. The gyrator is a necessary element in analysis because it is not reciprocal. Networks built from just the basic linear elements are necessarily reciprocal, so they cannot be used by themselves to represent a non-reciprocal system. It is not essential, however, to have both the transformer and gyrator. Two gyrators in cascade are equivalent to a transformer, but the transformer is usually retained for convenience. The introduction of the gyrator also makes either capacitance or inductance non-essential, since a gyrator terminated with one of these at port 2 will be equivalent to the other at port 1. However, transformer, capacitance, and inductance are normally retained in analysis because they are the ideal properties of the basic physical components transformer, inductor, and capacitor, whereas a practical gyrator must be constructed as an active circuit.
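The claim that a gyrator terminated in a capacitance behaves as an inductance can be checked numerically with phasor impedances. The sketch below uses the gyrator relations as reconstructed above (v1 = −r·i2, v2 = r·i1; sign conventions vary between authors) and verifies that the input impedance of a gyrator loaded with a capacitor C equals jωL with L = r²C. The component values are arbitrary illustrations.

import math

def gyrator_input_impedance(r, z_load):
    """Input impedance of an ideal gyrator terminated in z_load.

    With v1 = -r*i2, v2 = r*i1 and the load enforcing v2 = -z_load*i2
    (i2 defined into the gyrator port), eliminating the port-2 variables
    gives Zin = v1/i1 = r**2 / z_load.
    """
    return r**2 / z_load

r = 50.0   # gyration resistance, ohms (illustrative value)
C = 1e-6   # load capacitance, farads
for f in (50.0, 1e3, 1e5):
    w = 2 * math.pi * f
    zin = gyrator_input_impedance(r, 1 / (1j * w * C))
    print(f"f = {f:8.0f} Hz: Zin = {zin:.4g}, jwL with L = r^2*C gives {1j * w * r**2 * C:.4g}")

With r = 50 ohms and C = 1 microfarad, the equivalent inductance is r²C = 2.5 mH at every frequency, which is why gyrators are used to synthesize inductors in active filter design.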
Examples
The following are examples of representations of components by way of electrical elements.

To a first approximation, a battery is represented by a voltage source. A more refined model also includes a resistance in series with the voltage source, to represent the battery's internal resistance (which results in the battery heating and the voltage dropping when in use). A current source in parallel may be added to represent its leakage (which discharges the battery over a long period).
To a first approximation, a resistor is represented by a resistance. A more refined model also includes a series inductance, to represent the effects of its lead inductance (resistors constructed as a spiral have more significant inductance). A capacitance in parallel may be added to represent the capacitive effect of the proximity of the resistor leads to each other. A wire can be represented as a low-value resistor.
Current sources are often used when representing semiconductors. For example, to a first approximation, a bipolar transistor may be represented by a variable current source controlled by the input current.

See also
Transmission line

References

Electrical circuits Electrical systems
Electrical element
[ "Physics", "Engineering" ]
2,333
[ "Electrical systems", "Physical systems", "Electronic engineering", "Electrical engineering", "Electrical circuits" ]
175,719
https://en.wikipedia.org/wiki/Suxamethonium%20chloride
Suxamethonium chloride (brand names Scoline and Sucostrin, among others), also known as suxamethonium or succinylcholine, or simply sux in medical abbreviation, is a medication used to cause short-term paralysis as part of general anesthesia. This is done to help with tracheal intubation or electroconvulsive therapy. It is administered by injection, either into a vein or into a muscle. When used in a vein, onset of action is generally within one minute and effects last for up to 10 minutes.

Common side effects include low blood pressure, increased saliva production, muscle pain, and rash. Serious side effects include malignant hyperthermia, hyperkalemia, and allergic reactions. It is not recommended in people who are at risk of high blood potassium or who have a history of myopathy. Use during pregnancy appears to be safe for the baby. Suxamethonium is in the neuromuscular blocker family of medications and is of the depolarizing type. It works by blocking the action of acetylcholine on skeletal muscles.

Suxamethonium was described as early as 1906 and came into medical use in 1951. It is on the World Health Organization's List of Essential Medicines. Suxamethonium is available as a generic medication.

Medical uses
Succinylcholine chloride injection is indicated, in addition to general anesthesia, to facilitate tracheal intubation and to provide skeletal muscle relaxation during surgery or mechanical ventilation. Its medical uses are limited to short-term muscle relaxation in anesthesia and intensive care, usually for facilitation of endotracheal intubation. It is popular in emergency medicine due to its rapid onset and brief duration of action. The former is a major point of consideration in the context of trauma care, where endotracheal intubation may need to be completed very quickly. The latter means that, should attempts at endotracheal intubation fail and the person cannot be ventilated, there is a prospect for neuromuscular recovery and the onset of spontaneous breathing before low blood oxygen levels occur. It may be better than rocuronium in people without contraindications due to its faster onset of action and shorter duration of action. Suxamethonium is also commonly used as the sole muscle relaxant during electroconvulsive therapy, favoured for its short duration of action.

Suxamethonium is quickly degraded by plasma butyrylcholinesterase and the duration of effect is usually in the range of a few minutes. When plasma levels of butyrylcholinesterase are greatly diminished or an atypical form is present (an otherwise harmless inherited disorder), paralysis may last much longer, as is the case in liver failure or in neonates. The vials are usually stored at a temperature between 2–8 °C, but issues have been reported with lower storage temperatures. The multi-dose vials are stable for up to 14 days at room temperature without significant loss of potency. Unless otherwise indicated in the prescribing information, room temperature for storage of medications is 20–25 °C (68–77 °F).

Side effects
Side effects include malignant hyperthermia, muscle pains, acute rhabdomyolysis with high blood levels of potassium, transient ocular hypertension, constipation, and changes in cardiac rhythm, including slow heart rate and cardiac arrest. In people with neuromuscular disease or burns, an injection of suxamethonium can lead to a large release of potassium from skeletal muscles, potentially resulting in cardiac arrest.
Conditions conferring susceptibility to suxamethonium-induced high blood potassium are burns, closed head injury, acidosis, Guillain–Barré syndrome, cerebral stroke, drowning, severe intra-abdominal sepsis, massive trauma, myopathy, and tetanus. Suxamethonium does not produce unconsciousness or anesthesia, and its effects may cause considerable psychological distress while simultaneously making it impossible for a patient to communicate. Therefore, administration of the drug to a conscious patient is contraindicated.

Hyperkalemia
The side effect of high blood potassium may occur because the acetylcholine receptor is propped open, allowing continued flow of potassium ions into the extracellular fluid. A typical increase of potassium ion serum concentration on administration of suxamethonium is 0.5 mmol per liter. The increase is transient in otherwise healthy patients. The normal range of potassium is 3.5 to 5 mEq per liter. High blood potassium does not generally result in adverse effects below a concentration of 6.5 to 7 mEq per liter. Therefore, the increase in serum potassium level is usually not catastrophic in otherwise healthy patients. Severely high blood levels of potassium can cause changes in cardiac electrophysiology, which, if severe, can result in arrhythmias and even cardiac arrest.

Malignant hyperthermia
Malignant hyperthermia (MH) from suxamethonium administration can result in a drastic and uncontrolled increase in skeletal muscle oxidative metabolism. This overwhelms the body's capacity to supply oxygen, remove carbon dioxide, and regulate body temperature, eventually leading to circulatory collapse and death if not treated quickly. Susceptibility to malignant hyperthermia is often inherited as an autosomal dominant disorder, for which there are at least six genetic loci of interest, the most prominent being the ryanodine receptor gene (RYR1). MH susceptibility is phenotypically and genetically related to central core disease (CCD), an autosomal dominant disorder characterized both by MH symptoms and by myopathy. MH is usually unmasked by anesthesia, or when a family member develops the symptoms. There is no simple, straightforward test to diagnose the condition. When MH develops during a procedure, treatment with dantrolene sodium is usually initiated; dantrolene and the avoidance of suxamethonium administration in susceptible people have markedly reduced the mortality from this condition.

Apnea
The normal short duration of action of suxamethonium is due to the rapid metabolism of the drug by non-specific plasma cholinesterases. However, plasma cholinesterase activity is reduced in some people due to either genetic variation or acquired conditions, which results in a prolonged duration of neuromuscular block. Genetically, ninety-six percent of the population have a normal (Eu:Eu) genotype and block duration; however, some people have atypical genes (Ea, Es, Ef), which can be found in varying combinations with the Eu gene, or other atypical genes (see Pseudocholinesterase deficiency). Such genes will result in a longer duration of action of the drug, ranging from 20 minutes up to several hours. Acquired factors that affect plasma cholinesterase activity include pregnancy, liver disease, kidney failure, heart failure, thyrotoxicosis, and cancer, as well as a number of other drugs. If unrecognized by a clinician, this could lead to awareness if anesthesia is discontinued whilst the patient is still paralyzed, or to hypoxemia (with potentially fatal consequences) if artificial ventilation is not maintained.
Normal treatment is to maintain sedation and ventilate the patient on an intensive care unit until muscle function has returned. Blood testing for cholinesterase function can be performed. Mivacurium, a non-depolarizing neuromuscular blocking drug, is also metabolized via the same route, with a similar clinical effect in patients deficient in plasma cholinesterase activity.

Deliberate induction of conscious apnea using this drug led to its use as a form of aversion therapy in the 1960s and 1970s in some prison and institutional settings. This use was discontinued after negative publicity concerning the terrifying effects on subjects of this treatment and ethical questions about the punitive use of painful aversion.

Mechanism of action
There are two phases to the blocking effect of suxamethonium.

Phase 1 block
Phase 1 blocking has the principal paralytic effect. Binding of suxamethonium to the nicotinic acetylcholine receptor results in opening of the receptor's monovalent cation channel; a disorganized depolarization of the motor end-plate occurs and calcium is released from the sarcoplasmic reticulum. In normal skeletal muscle, acetylcholine dissociates from the receptor following depolarization and is rapidly hydrolyzed by acetylcholinesterase. The muscle cell is then ready for the next signal. Suxamethonium has a longer duration of effect than acetylcholine, and is not hydrolyzed by acetylcholinesterase. By maintaining the membrane potential above threshold, it does not allow the muscle cell to repolarize. When acetylcholine binds to an already depolarized receptor, it cannot cause further depolarization. Calcium is removed from the muscle cell cytoplasm independent of repolarization (depolarization signaling and muscle contraction are independent processes). As the calcium is taken up by the sarcoplasmic reticulum, the muscle relaxes. This explains muscle flaccidity rather than tetany following fasciculations. The results are membrane depolarization and transient fasciculations, followed by flaccid paralysis.

Phase 2 block
While this phase is not abnormal and is a part of suxamethonium's mechanism of action, it is undesirable during surgery, due to the inability to depolarize the cell again. Often, patients must be on a ventilator for hours if Phase 2 block occurs. It generally occurs when suxamethonium is administered multiple times, or during an infusion that runs over too long a time, but it can also occur during an initial bolus if the plasma cholinesterase is abnormal. Desensitization may occur at the nerve terminal, causing the myocyte to become less sensitive to acetylcholine, with the result that the membrane repolarizes and cannot be depolarized again for a period of time.

Chemistry
Suxamethonium is an odorless, white crystalline substance. Aqueous solutions have a pH of about 4. The dihydrate melts at 160 °C, whereas the anhydrous form melts at 190 °C. It is highly soluble in water (1 gram in about 1 mL), soluble in ethyl alcohol (1 gram in about 350 mL), slightly soluble in chloroform, and practically insoluble in ether. Suxamethonium is a hygroscopic compound. The compound consists of two acetylcholine molecules that are linked by their acetyl groups. It can also be viewed as a central moiety of succinic acid with two choline moieties, one on each end.

History
Suxamethonium was first discovered in 1906 by Reid Hunt and René de M. Taveau. When studying the drug, animals were given curare, and thus the researchers missed the neuromuscular blocking properties of suxamethonium.
Instead, in 1949, an Italian group led by Daniel Bovet was the first to describe succinylcholine-induced paralysis. The clinical introduction of suxamethonium was described in 1951 by several groups. Papers published by Stephen Thesleff and Otto von Dardel in Sweden are important, but work by Bruck, Mayrhofer and Hassfurther in Austria, Scurr and Bourne in the UK, and Foldes in America should also be mentioned.

Abuse
Dubai authorities declared that the assassination of Mahmoud Al-Mabhouh, a Hamas operative, was carried out on their soil by Mossad agents with the use of suxamethonium chloride injection. Entering Dubai under false passports in 2010, the Mossad agents found al-Mabhouh at a hotel, immobilized him with the drug, electrocuted him, and suffocated him with a pillow. A high concentration of suxamethonium chloride was found in al-Mabhouh's body post-mortem. The incident triggered significant diplomatic crises in the Middle East, Europe, and Australia.

Brand names
It is available in German-speaking countries under the trade name Lysthenon, among others.

Use in animals
It is sometimes used in combination with pain medications and sedatives for euthanasia and immobilization of horses.

References

Chemical substances for emergency medicine Chlorides Choline esters Lethal injection components Muscle relaxants Neuromuscular blockers Nicotinic agonists Quaternary ammonium compounds Succinate esters World Health Organization essential medicines
Suxamethonium chloride
[ "Chemistry" ]
2,641
[ "Chlorides", "Inorganic compounds", "Chemical substances for emergency medicine", "Salts", "Chemicals in medicine" ]
175,722
https://en.wikipedia.org/wiki/Boiler
A boiler is a closed vessel in which fluid (generally water) is heated. The fluid does not necessarily boil. The heated or vaporized fluid exits the boiler for use in various processes or heating applications, including water heating, central heating, boiler-based power generation, cooking, and sanitation.

Heat sources
In a fossil fuel power plant using a steam cycle for power generation, the primary heat source will be combustion of coal, oil, or natural gas. In some cases byproduct fuel such as the carbon-monoxide-rich offgasses of a coke battery can be burned to heat a boiler; biofuels such as bagasse, where economically available, can also be used. In a nuclear power plant, boilers called steam generators are heated by the heat produced by nuclear fission. Where a large volume of hot gas is available from some process, a heat recovery steam generator or recovery boiler can use the heat to produce steam, with little or no extra fuel consumed; such a configuration is common in a combined cycle power plant where a gas turbine and a steam boiler are used. In all cases the combustion product waste gases are separate from the working fluid of the steam cycle, making these systems examples of external combustion engines.

Materials
The pressure vessel of a boiler is usually made of steel (or alloy steel), or historically of wrought iron. Stainless steel, especially of the austenitic types, is not used in wetted parts of boilers due to corrosion and stress corrosion cracking. However, ferritic stainless steel is often used in superheater sections that will not be exposed to boiling water, and electrically heated stainless steel shell boilers are allowed under the European "Pressure Equipment Directive" for production of steam for sterilizers and disinfectors.

In live steam models, copper or brass is often used because it is more easily fabricated in smaller-size boilers. Historically, copper was often used for fireboxes (particularly for steam locomotives) because of its better formability and higher thermal conductivity; however, in more recent times the high price of copper often makes this an uneconomic choice, and cheaper substitutes (such as steel) are used instead.

For much of the Victorian "age of steam", the only material used for boilermaking was the highest grade of wrought iron, with assembly by riveting. This iron was often obtained from specialist ironworks, such as those in the Cleator Moor (UK) area, noted for the high quality of their rolled plate, which was especially suitable for use in critical applications such as high-pressure boilers. In the 20th century, design practice moved towards the use of steel, with welded construction, which is stronger and cheaper, and can be fabricated more quickly and with less labour. Wrought iron boilers corrode far more slowly than their modern-day steel counterparts, and are less susceptible to localized pitting and stress-corrosion. That makes the longevity of older wrought-iron boilers far superior to that of welded steel boilers.

Cast iron may be used for the heating vessel of domestic water heaters. Although such heaters are usually termed "boilers" in some countries, their purpose is usually to produce hot water, not steam, and so they run at low pressure and try to avoid boiling. The brittleness of cast iron makes it impractical for high-pressure steam boilers.

Energy
The source of heat for a boiler is combustion of any of several fuels, such as wood, coal, oil, or natural gas.
Electric steam boilers use resistance- or immersion-type heating elements. Nuclear fission is also used as a heat source for generating steam, either directly (BWR) or, in most cases, in specialised heat exchangers called "steam generators" (PWR). Heat recovery steam generators (HRSGs) use the heat rejected from other processes such as gas turbines.

Boiler efficiency
There are two methods to measure boiler efficiency in the ASME performance test codes (PTC) for boilers, ASME PTC 4, and for HRSGs, ASME PTC 4.4, and in EN 12952-15 for water-tube boilers:
Input-output method (direct method)
Heat-loss method (indirect method)

Input-output method (or direct method)
The direct method of boiler efficiency testing is the more commonly used:
Boiler efficiency = power out / power in = Q × (Hg − Hf) / (q × GCV) × 100%
where
Q, rate of steam flow in kg/h
Hg, enthalpy of saturated steam in kcal/kg
Hf, enthalpy of feed water in kcal/kg
q, rate of fuel use in kg/h
GCV, gross calorific value in kcal/kg (e.g., petroleum coke 8,200 kcal/kg)

Heat-loss method (or indirect method)
To measure boiler efficiency by the indirect method, parameters like these are needed:
Ultimate analysis of fuel (hydrogen, sulfur, carbon, moisture content, ash content)
Percentage of O2 or CO2 in flue gas
Flue gas temperature at outlet
Ambient temperature in °C and humidity of air in kg/kg
GCV of fuel in kcal/kg
Ash percentage in combustible fuel
GCV of ash in kcal/kg
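The direct-method formula above translates into a few lines of Python. The sketch below evaluates it for made-up but plausible operating figures; only the petroleum coke GCV of 8,200 kcal/kg comes from the text, and the steam and feedwater enthalpies are illustrative round numbers.

def boiler_efficiency_direct(steam_kg_h, h_steam, h_feed, fuel_kg_h, gcv):
    """Direct-method boiler efficiency in percent.

    efficiency = Q * (Hg - Hf) / (q * GCV) * 100
    with Q, q in kg/h and Hg, Hf, GCV in kcal/kg.
    """
    return steam_kg_h * (h_steam - h_feed) / (fuel_kg_h * gcv) * 100.0

# Illustrative numbers: 10 t/h of saturated steam (~660 kcal/kg) raised
# from feed water at ~80 kcal/kg, burning 900 kg/h of petroleum coke
# (GCV 8,200 kcal/kg, the example value given above).
eff = boiler_efficiency_direct(10_000, 660.0, 80.0, 900, 8_200)
print(f"boiler efficiency ~ {eff:.1f} %")  # ~78.6 %

The direct method needs only these few plant measurements, which is why it is the more common test; the indirect method instead sums the individual losses from the parameter list above.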
Configurations
Boilers can be classified into the following configurations:

Pot boiler or Haycock boiler/Haystack boiler: a primitive "kettle" where a fire heats a partially filled water container from below. 18th-century Haycock boilers generally produced and stored large volumes of very low-pressure steam, often hardly above that of the atmosphere. These could burn wood or, most often, coal. Efficiency was very low.

Flued boiler: with one or two large flues, an early type or forerunner of the fire-tube boiler.

Fire-tube boiler: here, water partially fills a boiler barrel with a small volume left above to accommodate the steam (steam space). This is the type of boiler used in nearly all steam locomotives. The heat source is inside a furnace or firebox that has to be kept permanently surrounded by the water in order to maintain the temperature of the heating surface below the boiling point. The furnace can be situated at one end of a fire-tube which lengthens the path of the hot gases, thus augmenting the heating surface, which can be further increased by making the gases reverse direction through a second parallel tube or a bundle of multiple tubes (two-pass or return flue boiler); alternatively, the gases may be taken along the sides and then beneath the boiler through flues (3-pass boiler). In the case of a locomotive-type boiler, a boiler barrel extends from the firebox and the hot gases pass through a bundle of fire tubes inside the barrel, which greatly increases the heating surface compared to a single tube and further improves heat transfer. Fire-tube boilers usually have a comparatively low rate of steam production, but high steam storage capacity. Fire-tube boilers mostly burn solid fuels, but are readily adaptable to those of the liquid or gas variety. Fire-tube boilers may also be referred to as "scotch-marine" or "marine" type boilers.

Water-tube boiler: in this type, tubes filled with water are arranged inside a furnace in a number of possible configurations. Often the water tubes connect large drums, the lower ones containing water and the upper ones steam and water; in other cases, such as a mono-tube boiler, water is circulated by a pump through a succession of coils. This type generally gives high steam production rates, but less storage capacity than the above. Water-tube boilers can be designed to exploit any heat source and are generally preferred in high-pressure applications, since the high-pressure water/steam is contained within small-diameter pipes which can withstand the pressure with a thinner wall. These boilers are commonly constructed in place, roughly square in shape, and can be multiple stories tall.

Flash boiler: a flash boiler is a specialized type of water-tube boiler in which tubes are close together and water is pumped through them. A flash boiler differs from the type of mono-tube steam generator in which the tube is permanently filled with water. In a flash boiler, the tube is kept so hot that the water feed is quickly flashed into steam and superheated. Flash boilers had some use in automobiles in the 19th century, and this use continued into the early 20th century.

Fire-tube boiler with water-tube firebox: sometimes the two above types have been combined in the following manner: the firebox contains an assembly of water tubes, called thermic siphons. The gases then pass through a conventional firetube boiler. Water-tube fireboxes were installed in many Hungarian locomotives, but have met with little success in other countries.

Sectional boiler: in a cast iron sectional boiler, sometimes called a "pork chop boiler", the water is contained inside cast iron sections. These sections are assembled on site to create the finished boiler.
If feed water is then sent into the empty boiler, the small cascade of incoming water instantly boils on contact with the superheated metal shell and leads to a violent explosion that cannot be controlled even by safety steam valves. Draining of the boiler can also happen if a leak occurs in the steam supply lines that is larger than the make-up water supply could replace. The Hartford Loop was invented in 1919 by the Hartford Steam Boiler Inspection and Insurance Company as a method to help prevent this condition from occurring, and thereby reduce their insurance claims. Superheated steam boiler When water is boiled the result is saturated steam, also referred to as "wet steam." Saturated steam, while mostly consisting of water vapor, carries some unevaporated water in the form of droplets. Saturated steam is useful for many purposes, such as cooking, heating and sanitation, but is not desirable when steam is expected to convey energy to machinery, such as a ship's propulsion system or the "motion" of a steam locomotive. This is because unavoidable temperature and/or pressure loss that occurs as steam travels from the boiler to the machinery will cause some condensation, resulting in liquid water being carried into the machinery. The water entrained in the steam may damage turbine blades or in the case of a reciprocating steam engine, may cause serious mechanical damage due to hydrostatic lock. Superheated steam boilers evaporate the water and then further heat the steam in a superheater, causing the discharged steam temperature to be substantially above the boiling temperature at the boiler's operating pressure. As the resulting "dry steam" is much hotter than needed to stay in the vaporous state it will not contain any significant unevaporated water. Also, higher steam pressure will be possible than with saturated steam, enabling the steam to carry more energy. Although superheating adds more energy to the steam in the form of heat there is no effect on pressure, which is determined by the rate at which steam is drawn from the boiler and the pressure settings of the safety valves. The fuel consumption required to generate superheated steam is greater than that required to generate an equivalent volume of saturated steam. However, the overall energy efficiency of the steam plant (the combination of boiler, superheater, piping and machinery) generally will be improved enough to more than offset the increased fuel consumption. Superheater operation is similar to that of the coils on an air conditioning unit, although for a different purpose. The steam piping is directed through the flue gas path in the boiler furnace, an area in which the temperature is typically between . Some superheaters are radiant type, which as the name suggests, they absorb heat by radiation. Others are convection type, absorbing heat from a fluid. Some are a combination of the two types. Through either method, the extreme heat in the flue gas path will also heat the superheater steam piping and the steam within. The design of any superheated steam plant presents several engineering challenges due to the high working temperatures and pressures. One consideration is the introduction of feedwater to the boiler. The pump used to charge the boiler must be able to overcome the boiler's operating pressure, else water will not flow. As a superheated boiler is usually operated at high pressure, the corresponding feedwater pressure must be even higher, demanding a more robust pump design. Another consideration is safety. 
High pressure, superheated steam can be extremely dangerous if it unintentionally escapes. To give the reader some perspective, the steam plants used in many U.S. Navy destroyers built during World War II operated at high pressure and with substantial superheat. In the event of a major rupture of the system, an ever-present hazard in a warship during combat, the enormous energy release of escaping superheated steam, expanding to more than 1600 times its confined volume, would be equivalent to a cataclysmic explosion, whose effects would be exacerbated by the steam release occurring in a confined space, such as a ship's engine room. Also, small leaks that are not visible at the point of leakage could be lethal if an individual were to step into the escaping steam's path. Hence designers endeavor to give the steam-handling components of the system as much strength as possible to maintain integrity. Special methods of coupling steam pipes together are used to prevent leaks, with very high pressure systems employing welded joints to avoid the leakage problems of threaded or gasketed connections. Supercritical steam generator Supercritical steam generators are frequently used for the production of electric power. They operate at supercritical pressure. In contrast to a "subcritical boiler", a supercritical steam generator operates at such a high pressure (above the critical pressure of water, about 22 MPa) that the physical turbulence that characterizes boiling ceases to occur; the fluid is neither liquid nor gas but a supercritical fluid. There is no generation of steam bubbles within the water, because the pressure is above the critical pressure point at which steam bubbles can form. As the fluid expands through the turbine stages, its thermodynamic state drops below the critical point as it does work turning the turbine which turns the electrical generator, from which power is ultimately extracted. The fluid at that point may be a mix of steam and liquid droplets as it passes into the condenser. This results in slightly less fuel use and therefore less greenhouse gas production. The term "boiler" should not be used for a supercritical pressure steam generator, as no "boiling" occurs in this device. Accessories Boiler fittings and accessories Pressuretrols to control the steam pressure in the boiler. Boilers generally have two or three pressuretrols: a manual-reset pressuretrol, which functions as a safety device by setting the upper limit of steam pressure; the operating pressuretrol, which controls when the boiler fires to maintain pressure; and, for boilers equipped with a modulating burner, a modulating pressuretrol, which controls the amount of fire. Safety valve: it is used to relieve pressure and prevent possible explosion of a boiler. Water level indicators: they show the operator the level of fluid in the boiler; also known as a sight glass, water gauge or water column. Bottom blowdown valves: they provide a means for removing solid particulates that condense and lie on the bottom of a boiler. As the name implies, this valve is usually located directly on the bottom of the boiler, and is occasionally opened to use the pressure in the boiler to push these particulates out. Continuous blowdown valve: this allows a small quantity of water to escape continuously. Its purpose is to prevent the water in the boiler becoming saturated with dissolved salts. Saturation would lead to foaming and cause water droplets to be carried over with the steam – a condition known as priming. Blowdown is also often used to monitor the chemistry of the boiler water.
Trycock: a type of valve that is often used to manually check the liquid level in a tank; most commonly found on water boilers. Flash tank: high-pressure blowdown enters this vessel, where the steam can 'flash' safely and be used in a low-pressure system or be vented to atmosphere, while the ambient-pressure blowdown flows to drain. Automatic blowdown/continuous heat recovery system: this system allows the boiler to blow down only when makeup water is flowing to the boiler, thereby transferring the maximum amount of heat possible from the blowdown to the makeup water. No flash tank is generally needed, as the blowdown discharged is close to the temperature of the makeup water. Hand holes: steel plates installed in openings in the header to allow for inspection and installation of tubes and inspection of internal surfaces. Steam drum internals: a series of screens, scrubbers and cans (cyclone separators). Low-water cutoff: a mechanical means (usually a float switch) or an electrode with a safety switch that is used to turn off the burner or shut off fuel to the boiler, to prevent it from running once the water goes below a certain point. If a boiler is "dry-fired" (burned without water in it) it can suffer rupture or catastrophic failure. Surface blowdown line: it provides a means for removing foam or other lightweight non-condensable substances that tend to float on top of the water inside the boiler. Circulating pump: it is designed to circulate water back to the boiler after it has expelled some of its heat. Feedwater check valve or clack valve: a non-return stop valve in the feedwater line. This may be fitted to the side of the boiler, just below the water level, or to the top of the boiler. Top feed: in this design for feedwater injection, the water is fed to the top of the boiler. This can reduce boiler fatigue caused by thermal stress. By spraying the feedwater over a series of trays the water is quickly heated, and this can reduce limescale. Desuperheater tubes or bundles: a series of tubes or bundles of tubes in the water drum or the steam drum designed to cool superheated steam, in order to supply auxiliary equipment that does not need, or may be damaged by, dry steam. Chemical injection line: a connection to add chemicals for controlling feedwater pH. Steam accessories: main steam stop valve, steam traps, and the main steam stop/check valve, which is used on multiple-boiler installations. Combustion accessories: fuel oil system (fuel oil heaters), gas system, coal system. Other essential items: pressure gauges, feed pumps, fusible plug, insulation and lagging, inspector's test pressure gauge attachment, name plate, registration plate. Draught A fuel-heated boiler must provide air to oxidize its fuel. Early boilers provided this stream of air, or draught, through the natural action of convection in a chimney connected to the exhaust of the combustion chamber. Since the heated flue gas is less dense than the ambient air surrounding the boiler, the flue gas rises in the chimney, pulling denser, fresh air into the combustion chamber. Most modern boilers depend on mechanical draught rather than natural draught. This is because natural draught is subject to outside air conditions and the temperature of flue gases leaving the furnace, as well as the chimney height. All these factors make proper draught hard to attain and therefore make mechanical draught equipment much more reliable and economical.
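The strength of natural draught can be estimated from the density difference just described. Below is a minimal sketch, assuming the flue gas behaves as ideal air at an elevated temperature; all values are illustrative assumptions.

```python
# Stack effect: available draught pressure from a chimney of height H,
# taking both ambient air and flue gas as ideal gases at 1 atm.
g = 9.81           # m/s^2
R_air = 287.0      # J/(kg*K), specific gas constant of air
P = 101_325.0      # Pa, ambient pressure

H = 30.0           # m, chimney height (assumed)
T_ambient = 293.0  # K (assumed)
T_flue = 473.0     # K (assumed flue gas temperature)

rho_ambient = P / (R_air * T_ambient)  # ~1.20 kg/m^3
rho_flue = P / (R_air * T_flue)        # ~0.75 kg/m^3

draught = g * H * (rho_ambient - rho_flue)   # Pa
print(f"Natural draught: {draught:.0f} Pa")  # roughly 135 Pa for these values
```

Even a tall stack yields only a few hundred pascals, which is one reason mechanical draught fans displaced natural draught in modern practice.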
Types of draught can also be divided into induced draught, where exhaust gases are pulled out of the boiler; forced draught, where fresh air is pushed into the boiler; and balanced draught, where both effects are employed. Natural draught through the use of a chimney is a type of induced draught; mechanical draught can be induced, forced or balanced. There are two types of mechanical induced draught. The first is through use of a steam jet. The steam jet, oriented in the direction of flue gas flow, induces flue gases into the stack and allows for a greater flue gas velocity, increasing the overall draught in the furnace. This method was common on steam-driven locomotives, which could not have tall chimneys. The second method is by simply using an induced draught fan (ID fan), which removes flue gases from the furnace and forces the exhaust gas up the stack. Almost all induced draught furnaces operate with a slightly negative pressure. Mechanical forced draught is provided by means of a fan forcing air into the combustion chamber. Air is often passed through an air heater, which, as the name suggests, heats the air going into the furnace in order to increase the overall efficiency of the boiler. Dampers are used to control the quantity of air admitted to the furnace. Forced draught furnaces usually have a positive pressure. Balanced draught is obtained through use of both induced and forced draught. This is more common with larger boilers, where the flue gases have to travel a long distance through many boiler passes. The induced draught fan works in conjunction with the forced draught fan, allowing the furnace pressure to be maintained slightly below atmospheric. See also Babcock & Wilcox, boiler manufacturer Combustion Engineering, boiler manufacturer Boiler feed water deaerator Dealkalization of water Electric water boiler (for drinking water) Electrode boiler European Conference on Industrial Furnaces and Boilers, a series of conferences on furnaces and boilers Heat-only boiler station Heat pump Hot water reset Internally rifled boiler tubes (also known as Serve tubes) International Flame Research Foundation, a network of industrial flame experts Lancashire boiler List of boiler types Natural circulation boiler Outdoor wood-fired boiler Tube tool References Further reading American Society of Mechanical Engineers: ASME Boiler and Pressure Vessel Code, Section I. Updated every 3 years. Association of Water Technologies (AWT). Boilers Chemical equipment Plumbing Heating, ventilation, and air conditioning Industrial water treatment
Boiler
[ "Chemistry", "Engineering" ]
4,715
[ "Water treatment", "Chemical equipment", "Plumbing", "Industrial water treatment", "Construction", "nan", "Boilers", "Pressure vessels" ]
175,754
https://en.wikipedia.org/wiki/Beta%20Arietis
Beta Arietis (β Arietis, abbreviated Beta Ari, β Ari), officially named Sheratan, is a star system and the second-brightest star in the constellation of Aries, marking the ram's second horn. Nomenclature Beta Arietis is the star's Bayer designation. It also bears the Flamsteed designation 6 Arietis. The traditional name, Sheratan (or Sharatan, Sheratim), in full Al Sharatan, is from the Arabic الشرطان aš-šaraţān "the two signs", a reference to the star having marked the northern vernal equinox together with Gamma Arietis several thousand years ago. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Sheratan for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names. In Chinese astronomy, an asterism meaning Bond consists of β Arietis, γ Arietis and α Arietis. Consequently, the Chinese name for β Arietis itself is the First Star of Bond. Properties Beta Arietis has an apparent visual magnitude of 2.65. Its distance from Earth has been determined from dynamical parallax measurements. This is a spectroscopic binary star system consisting of a pair of stars orbiting around each other with a separation that cannot currently be resolved with a conventional telescope. However, the pair have been resolved using the Mark III Stellar Interferometer at the Mount Wilson Observatory. This allows the orbital elements to be computed, as well as the individual masses of the two stars. The stars complete their highly elliptical orbit every 107 days. The primary star has a stellar classification of A3 V, which means it is an A-type main-sequence star that is generating energy through the thermonuclear fusion of hydrogen in its core region. The NStars project gives the star a spectral type of kA4 hA5 mA5 Va under the revised MK spectral classification system. The secondary star is a G-type main-sequence star, with a stellar classification of G2V. It is about four magnitudes fainter than the primary; hence the energy output from the system is dominated by the primary star. In a few million years, as the primary evolves toward a red giant, significant mass transfer to the secondary component is expected. The primary has been classified as a rapid rotator, with a projected rotational velocity of 73 km/s, which provides a lower bound on the azimuthal rotational velocity along the equator. It may also be a mild Am star, a class of stars that show a peculiar spectrum with strong absorption lines from various elements and deficiencies in others. In β Arietis, these absorption lines are broadened because of the Doppler effect from the rotation, making analysis of the abundance patterns difficult. This system has been examined with the Spitzer Space Telescope for the presence of an excess of infrared emission, which would indicate a disk of dust. However, no significant excess was detected. Notes References External links GJ 80 Catalog Image Beta Arietis by Professor Jim Kaler. ARICNS entry The Constellations and Named Stars Aries (constellation) A-type main-sequence stars G-type main-sequence stars Spectroscopic binaries Sheratan Arietis, Beta Arietis, 06 011636 008903 0553 0080 Durchmusterung objects
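As background to the mass determination mentioned above, Kepler's third law in solar units relates the orbit to the total system mass. A minimal sketch follows; the 107-day period comes from this article, while the semi-major axis is a placeholder assumption for illustration only, not a measured value.

```python
# Kepler's third law in solar units: M1 + M2 = a^3 / P^2
# (a in AU, P in years, masses in solar masses).
P_years = 107.0 / 365.25   # orbital period from the article
a_au = 0.6                 # semi-major axis -- assumed placeholder value

total_mass = a_au**3 / P_years**2
print(f"Total system mass: {total_mass:.1f} solar masses")
```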
Beta Arietis
[ "Astronomy" ]
754
[ "Aries (constellation)", "Constellations" ]
175,756
https://en.wikipedia.org/wiki/Exakta
The Exakta (sometimes Exacta) was a camera produced by the Ihagee Kamerawerk in Dresden, Germany, founded as the Industrie und Handels-Gesellschaft mbH in 1912. The inspiration and design of both the VP Exakta and the Kine Exakta are the work of the Ihagee engineer Karl Nüchterlein (see Richard Hummel's Spiegelreflexkameras aus Dresden), who did not survive the Second World War. An Exakta VX was used by James Stewart's character, a professional photographer, to spy on his possibly murderous neighbor in Alfred Hitchcock's Rear Window. Characteristics Highlights of Exakta cameras include: First single-lens reflex camera (SLR) for 127 roll film (VP Exakta) came in 1933 First wind-on lever in 1934 First built-in flash socket, activated by the shutter, in 1935 First popular SLR for 35mm film came in 1936, the Kine Exakta Early Kine Exaktas had a fixed waist-level viewfinder, but later models, starting with the Exakta Varex, had an interchangeable waist- or eye-level finder. Examat and Travemat through-the-lens metering prisms were introduced in the mid 1960s. Most controls, including the shutter release and the film wind lever, are on the left-hand side, unlike many other cameras. The film is transported in the opposite direction to other 35mm SLRs. In classic Exaktas made between 1936 and 1969, two film canisters can be used, one containing unexposed film and a second into which the exposed film is wound. A sliding knife built into the bottom of the camera can be used to slice the film so that the canister containing the exposed film can be removed while preserving the unexposed film in the main canister. The knife was omitted in the Exakta VX500, one of the last "official" Exakta cameras. The shutter release on classic Exaktas is on the front of the camera, rather than the top. It is pressed with the left forefinger. Amongst others, Topcon would use the Exakta bayonet lens mount for a time. The front-mounted release is quite similar to the Praktica design (adapted from Ihagee's product), whose shutter release was located on the right-hand side of the camera-body front. Most later lenses produced for Exaktas (Ihagee did not make their own lenses), known either as "automatic" or "semi-automatic" lenses, included a button in an extension that would align over the camera body's shutter release when the lens was mounted. The diaphragm of these lenses remained fully open, providing a bright viewfinder image, until the button was depressed halfway, when the iris would be stopped down to the shooting aperture; pressed further, the lens button engaged the camera's shutter release button, tripping the shutter. There was a full line of specialized equipment available for these system cameras, including a microscope adaptor, extension bellows, stereo attachments, medical attachments and various specialized finder screens. Equipment is fully compatible between all models manufactured between 1936 and 1969. The spelling found on cameras has traditionally been Exakta, but some early Kine-Exaktas were marked Exacta specifically for marketing in France, Portugal and the U.S., perhaps for copyright reasons; and certainly a great number of American collectors refer to the whole range as the "Exacta." A related line of smaller, simpler cameras was the "Exa" line; these, too, existed in several variations. The Beseler Topcon line of 35mm cameras used the same lens mount as the Exakta.
In the early 1970s, the Exakta "RTL 1000" was introduced; it accepted the older models' lenses, but had its own range of viewfinders, which included a model with through-the-lens light metering. M42 lens mount variants of the RTL line of cameras also appeared under the Praktica name. After an economic collapse following Germany's reunification, the successor of the firm (Pentacon, which subsumed Ihagee) is now back in business. This company is not related to the Dutchman Johan Steenbergen, the founder of Ihagee, or with the Exakta, which was discontinued in the 1970s. See also History of the single-lens reflex camera Ihagee Kine Exakta Praktica Zeiss Ikon References Further reading Exakta Cameras 1933–1978, Aguila, Clément and Michel Rouah, 1989, Hove Photo Books, Hove, East Sussex Collecting and Using Classic SLRs, Matanle, Ivor, 1997, Thames and Hudson, New York, Exakta Collection 1933-1987, Clément Aguila & Michel Rouah - DDP Image Edition, France, 2003. Spiegelreflexkameras aus Dresden'', Richard Hummel, Spiegelreflexkameras aus Dresden. Edition Reintzsch Leipzig, 1995, or 3-89506-127-1 External links Ihagee & Exakta Products and History Andrzej Wrotniak's site on the small-format (135 or "35mm") cameras and accessories Exakta VP Sliding Pages 35mm Exakta Sliding Pages Start SLR Soviet Exakta Copy by Stephen Rothery Marc's Classic Cameras The Official Site of the Exakta Circle, founded in 1990 Defunct photography companies of Germany German brands Photography in East Germany Products introduced in 1912 Single-lens reflex cameras
Exakta
[ "Technology" ]
1,207
[ "System cameras", "Single-lens reflex cameras" ]
175,769
https://en.wikipedia.org/wiki/Relational%20calculus
The relational calculus consists of two calculi, the tuple relational calculus and the domain relational calculus, which are part of the relational model for databases and provide a declarative way to specify database queries. The raison d'être of relational calculus is the formalization of query optimization, which is finding more efficient ways to execute the same query in a database. The relational calculus is similar to the relational algebra, which is also part of the relational model: while the relational calculus is meant as a declarative language that prescribes no execution order on the subexpressions of a relational calculus expression, the relational algebra is meant as an imperative language: the sub-expressions of a relational algebraic expression are meant to be executed from left-to-right and inside-out following their nesting. Per Codd's theorem, the relational algebra and the domain-independent relational calculus are logically equivalent. Example A relational algebra expression might prescribe the following steps to retrieve the phone numbers and names of book stores that supply Some Sample Book: Join book stores and titles over the BookstoreID. Restrict the result of that join to tuples for the book Some Sample Book. Project the result of that restriction over StoreName and StorePhone. A relational calculus expression would formulate this query in the following descriptive or declarative manner: Get StoreName and StorePhone for book stores such that there exists a title BK with the same BookstoreID value and with a BookTitle value of Some Sample Book. Mathematical properties The relational algebra and the domain-independent relational calculus are logically equivalent: for any algebraic expression, there is an equivalent expression in the calculus, and vice versa. This result is known as Codd's theorem. Purpose The raison d'être of the relational calculus is the formalization of query optimization. Query optimization consists of determining, from a query, the most efficient manner (or manners) to execute it. Query optimization can be formalized as translating a relational calculus expression delivering an answer A into efficient relational algebraic expressions delivering the same answer A. See also Calculus of relations References Logical calculi Relational model Database management systems
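The three algebra steps in the bookstore example can be made concrete with a small relational pipeline. Below is a minimal sketch using pandas; the library choice, column names and sample rows are all assumptions for illustration and are not part of the relational model itself.

```python
import pandas as pd

# Hypothetical relations, mirroring the bookstore example above.
stores = pd.DataFrame({
    "BookstoreID": [1, 2],
    "StoreName": ["Alpha Books", "Beta Books"],
    "StorePhone": ["555-0100", "555-0101"],
})
titles = pd.DataFrame({
    "BookstoreID": [1, 2],
    "BookTitle": ["Some Sample Book", "Another Book"],
})

# The imperative, algebra-style pipeline: join, then restrict, then project.
joined = stores.merge(titles, on="BookstoreID")                 # join
restricted = joined[joined["BookTitle"] == "Some Sample Book"]  # restriction
result = restricted[["StoreName", "StorePhone"]]                # projection
print(result)
```

A calculus-style formulation would instead state only the membership condition on the result tuples and leave this ordering of steps to the optimizer.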
Relational calculus
[ "Mathematics" ]
435
[ "Mathematical logic", "Logical calculi" ]
175,835
https://en.wikipedia.org/wiki/Biot%20number
The Biot number (Bi) is a dimensionless quantity used in heat transfer calculations, named for the eighteenth-century French physicist Jean-Baptiste Biot (1774–1862). The Biot number is the ratio of the thermal resistance for conduction inside a body to the resistance for convection at the surface of the body. This ratio indicates whether the temperature inside a body varies significantly in space when the body is heated or cooled over time by a heat flux at its surface. In general, problems involving small Biot numbers (much smaller than 1) are analytically simple, as a result of nearly uniform temperature fields inside the body. Biot numbers of order one or greater indicate more difficult problems with nonuniform temperature fields inside the body. The Biot number appears in a number of heat transfer problems, including transient heat conduction and fin heat transfer calculations. Definition The Biot number is defined as $\mathrm{Bi} = \frac{h L_C}{k}$, where: $k$ is the thermal conductivity of the body [W/(m·K)], $h$ is a convective heat transfer coefficient [W/(m²·K)], and $L_C$ is a characteristic length [m] of the geometry considered. (The Biot number should not be confused with the Nusselt number, which employs the thermal conductivity of the fluid rather than that of the body.) The characteristic length in most relevant problems becomes the heat characteristic length, i.e. the ratio between the body volume and the heated (or cooled) surface of the body: $L_C = V_{\mathrm{body}} / A_Q$. Here, the subscript Q, for heat, is used to denote that the surface to be considered is only the portion of the total surface through which the heat passes. The physical significance of the Biot number can be understood by imagining the heat flow from a small hot metal sphere suddenly immersed in a pool to the surrounding fluid. The heat flow experiences two resistances: the first for conduction within the solid metal (which is influenced by both the size and composition of the sphere), and the second for convection at the surface of the sphere. If the thermal resistance of the fluid/sphere interface exceeds the thermal resistance offered by the interior of the metal sphere, the Biot number will be less than one. For systems where it is much less than one, the interior of the sphere may be presumed to have a uniform temperature, although this temperature may be changing with time as heat passes into the sphere from the surface. The equation describing this change in (relatively uniform) temperature inside the object is a simple exponential one, described by Newton's law of cooling. In contrast, the metal sphere may be large, so that the characteristic length is large and the Biot number is greater than one. Now, thermal gradients within the sphere become important, even though the sphere material is a good conductor. Equivalently, if the sphere is made of a poorly conducting (thermally insulating) material, such as wood or styrofoam, the interior resistance to heat flow will exceed that of convection at the fluid/sphere boundary, even for a much smaller sphere. In this case, again, the Biot number will be greater than one. Applications The value of the Biot number can indicate the applicability (or inapplicability) of certain methods of solving transient heat transfer problems. For example, a Biot number smaller than about 0.1 implies that heat conduction inside the body offers much lower thermal resistance than the heat convection at the surface, so that temperature gradients are negligible inside of the body (such bodies are sometimes labeled "thermally thin").
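As a quick numerical illustration of this criterion, the following minimal sketch computes the Biot number for a small metal sphere using $L_C = V/A$; all property values are illustrative assumptions.

```python
# Biot number check for a small metal sphere in a convective flow.
import math

h = 50.0    # convective heat transfer coefficient, W/(m^2*K)  (assumed)
k = 200.0   # thermal conductivity of the sphere material, W/(m*K)  (assumed)
r = 0.01    # sphere radius, m  (assumed)

volume = (4.0 / 3.0) * math.pi * r**3
area = 4.0 * math.pi * r**2
L_c = volume / area          # V/A reduces to r/3 for a sphere

Bi = h * L_c / k
print(f"Bi = {Bi:.4f}")      # ~0.0008 here, far below the 0.1 threshold
```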
In this situation, the simple lumped-capacitance model may be used to evaluate a body's transient temperature variation. The opposite is also true: a Biot number greater than about 0.1 indicates that thermal resistance within the body is not negligible, and more complex methods are needed to analyze heat transfer to or from the body (such bodies are sometimes called "thermally thick"). Heat conduction for finite Biot number When the Biot number is greater than 0.1 or so, the heat equation must be solved to determine the time-varying and spatially nonuniform temperature field within the body. Analytic methods for handling these problems, which may exist for simple geometric shapes and uniform material thermal conductivity, are described in the article on the heat equation. Examples of verified analytic solutions along with precise numerical values are available. Often such problems are too difficult to be done except numerically, with the use of a computer model of heat transfer. Heat conduction for Bi ≪ 1 As noted, a Biot number smaller than about 0.1 shows that the conduction resistance inside a body is much smaller than the resistance of heat convection at the surface, so that temperature gradients are negligible inside the body. In this case, the lumped-capacitance model of transient heat transfer can be used. (A Biot number less than 0.1 generally indicates that less than 3% error will be present when using the lumped-capacitance model.) The simplest type of lumped-capacity solution, for a step change in fluid temperature, shows that a body's temperature decays exponentially in time ("Newtonian" cooling or heating) because the internal energy of the body is directly proportional to the temperature of the body, and the difference between the body temperature and the fluid temperature is linearly proportional to the rate of heat transfer into or out of the body. Combining these relationships with the first law of thermodynamics leads to a simple first-order linear differential equation. The corresponding lumped-capacity solution can be written $\frac{T(t) - T_\infty}{T_i - T_\infty} = e^{-t/\tau}$, in which $T_i$ is the initial body temperature, $T_\infty$ is the fluid temperature, $\tau = \rho c_p V / (h A)$ is the thermal time constant of the body, $\rho$ is the mass density (kg/m³), and $c_p$ is the specific heat capacity (J/(kg·K)). The study of heat transfer in micro-encapsulated phase-change slurries is an application where the Biot number is useful. For the dispersed phase of the micro-encapsulated phase-change slurry (the micro-encapsulated phase-change material itself), the Biot number is calculated to be below 0.1, and so it can be assumed that thermal gradients within the dispersed phase are negligible. Mass transfer analogue An analogous version of the Biot number (usually called the "mass transfer Biot number", or $\mathrm{Bi}_m$) is also used in mass diffusion processes: $\mathrm{Bi}_m = \frac{h_m L_C}{D}$, where $h_m$ is the convective mass transfer coefficient (analogous to the $h$ of the heat transfer problem), $D$ is the mass diffusivity (analogous to the $k$ of the heat transfer problem), and $L_C$ is a characteristic length. See also Convection Fourier number Heat conduction References Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Heat conduction
Biot number
[ "Physics", "Chemistry" ]
1,373
[ "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Thermodynamics", "Heat conduction" ]
175,841
https://en.wikipedia.org/wiki/Bumper%20cars
Bumper cars or dodgems are the generic names for a type of flat amusement ride consisting of multiple small electrically powered cars which draw power from the floor or ceiling, and which are turned on and off remotely by an operator. They are also known as bumping cars, dodging cars and dashing cars. The first patent for them was filed in 1921. Design The metal floor is usually set up as a rectangular or oval track, and graphite is sprinkled on the floor to decrease friction. A rubber bumper surrounds each vehicle, and drivers either ram or dodge each other as they travel. The controls are usually an accelerator and a steering wheel. The cars can be made to go backwards by turning the steering wheel far enough in either direction, which is necessary in the frequent pile-ups that occur. Power source The cars are commonly powered by one of three methods. The oldest and most common method, the Over Head System (OHS), uses a conductive floor and ceiling with opposing power polarities. Contacts under the vehicle touch the floor while a pole-mounted contact shoe touches the ceiling, forming a complete circuit. A newer method, the Floor Pick-Up (FPU) system, uses alternating strips of metal across the floor separated by insulating spacers, and no ceiling grid. The strips carry the supply current, and the cars are large enough that the vehicle covers at least two strips at all times. An array of brushes under each car makes random contact with the strips, and the voltage polarity on each contact is arranged to always provide a correct and complete circuit to operate the vehicle. A third method is used on Quantum-class cruise ships, where bumper cars run on electric batteries. This avoids the conductive floor/ceiling of the traditional bumper car setup, allowing the SeaPlex venue to be convertible from a bumper-car ride to a multipurpose gym (basketball court). The disadvantage is that these ships' bumper cars take several hours to recharge. Bumping Although the idea of the ride is to bump other cars, safety-conscious (or at least litigation-conscious) owners sometimes put up signs reading "This way around" and "No (head on) bumping". Depending on the level of enforcement by operators, these rules are often ignored by bumper car riders, especially younger children and teenagers. History In the early 1920s, a patent was granted to Max Stoehrer and his son Harold for an "Amusement Apparatus" which became the basis for their Dodgem cars. They deliberately equipped their device with "novel instrumentalities to render their manipulation and control difficult and uncertain by the occupant-operator." They asserted that "in the hands of an unskilled operator," a "plurality of independently manipulated... cars" would "follow a promiscuous, irregular, and undefined path over the floor or other area, to not only produce various sensations during the travel of the vehicle but to collide with other cars as well as with portions of the platform provided for that purpose." During their heyday, from the late 1920s to the 1950s, the two major US bumper car brands were the Stoehrers' Dodgem and the Auto-Skooter of the brothers Joseph and Robert "Ray" Lusse. Lusse Brothers built the first fiberglass body in 1959, in part due to the survival of Chevrolet Corvette bodies over the previous six years. After getting permission from Chevrolet and subsequently buying actual Corvette chevrons from local Philadelphia dealers, they attached the chevrons to the nose of their product for 1959.
In the mid-1960s, Disneyland introduced hovercraft-based bumper cars called Flying Saucers, which worked on the same principle as an air hockey game; however, the ride was a mechanical failure and closed after a few years. Notable examples The largest bumper car floor currently operating in the United States is the Rue Le Dodge at Six Flags Great America in Gurnee, Illinois (renamed Rue Le Morgue during Fright Fest in the fall). A replica of the ride was built at California's Great America in Santa Clara; in 2005, however, a concrete island was added to the middle of the floor to promote one-way traffic, reducing the floor area. Six Flags Great Adventure's Autobahn is the largest bumper car floor overall, but it has not operated since 2008. See also Bumper boats Collector pole Commutator (electric) Electric vehicle Go-kart Witching Waves References External links Bumping Down Memory Lane: The Lusse Legacy Vehicles by purpose Articles containing video clips Collision Electric vehicles
Bumper cars
[ "Physics" ]
939
[ "Collision", "Mechanics" ]
175,859
https://en.wikipedia.org/wiki/Plasma%20display
A plasma display panel is a type of flat-panel display that uses small cells containing plasma: ionized gas that responds to electric fields. Plasma televisions were the first large (over 32 inches/81 cm diagonal) flat-panel displays to be released to the public. Until about 2007, plasma displays were commonly used in large televisions. By 2013, they had lost nearly all market share due to competition from low-cost liquid crystal displays (LCDs). Manufacturing of plasma displays for the United States retail market ended in 2014, and manufacturing for the Chinese market ended in 2016. Plasma displays are obsolete, having been superseded in most if not all aspects by OLED displays. Competing display technologies include cathode-ray tube (CRT), organic light-emitting diode (OLED), CRT projectors, AMLCD, digital light processing (DLP), SED-tv, LED display, field emission display (FED), and quantum dot display (QLED). History Early development Kálmán Tihanyi, a Hungarian engineer, described a proposed flat-panel plasma display system in a 1936 paper. The first practical plasma video display was co-invented in 1964 at the University of Illinois at Urbana–Champaign by Donald Bitzer, H. Gene Slottow, and graduate student Robert Willson for the PLATO computer system. The goal was to create a display that had inherent memory in order to reduce the cost of the terminals. The original neon orange monochrome Digivue display panels built by glass producer Owens-Illinois were very popular in the early 1970s because they were rugged and needed neither memory nor circuitry to refresh the images. A long period of sales decline occurred in the late 1970s because semiconductor memory made CRT displays cheaper than the US$2,500 PLATO plasma displays. Nevertheless, the plasma displays' relatively large screen size and 1 inch (25.4 mm) thickness made them suitable for high-profile placement in lobbies and stock exchanges. Burroughs Corporation, a maker of adding machines and computers, developed the Panaplex display in the early 1970s. The Panaplex display, generically referred to as a gas-discharge or gas-plasma display, uses the same technology as later plasma video displays, but began life as a seven-segment display for use in adding machines. They became popular for their bright orange luminous look and found nearly ubiquitous use throughout the late 1970s and into the 1990s in cash registers, calculators, pinball machines, aircraft avionics such as radios, navigational instruments, and stormscopes; test equipment such as frequency counters and multimeters; and generally anything that previously used nixie tube or numitron displays with a high digit count. These displays were eventually replaced by LEDs because of their low current draw and module flexibility, but are still found in some applications where their high brightness is desired, such as pinball machines and avionics. 1980s In 1983, IBM introduced an orange-on-black monochrome display (Model 3290 Information Panel), which was able to show up to four simultaneous IBM 3270 terminal sessions. By the end of the decade, orange monochrome plasma displays were used in a number of high-end AC-powered portable computers, such as the Ericsson Portable PC (the first use of such a display, in 1985), the Compaq Portable 386 (1987) and the IBM P75 (1990). Plasma displays had a better contrast ratio, wider viewing angle, and less motion blur than the LCDs that were available at the time, and were used until the introduction of active-matrix color LCD displays in 1992.
Due to heavy competition from monochrome LCDs used in laptops and the high costs of plasma display technology, in 1987 IBM planned to shut down its factory in Kingston, New York, the largest plasma plant in the world, in favor of manufacturing mainframe computers, which would have left development to Japanese companies. Dr. Larry F. Weber, a University of Illinois ECE PhD (in plasma display research) and staff scientist working at CERL (home of the PLATO System), co-founded Plasmaco with Stephen Globus and IBM plant manager James Kehoe, and bought the plant from IBM for US$50,000. Weber stayed in Urbana as CTO until 1990, then moved to upstate New York to work at Plasmaco. 1990s In 1992, Fujitsu introduced the world's first full-color plasma display. It was based on technology created at the University of Illinois at Urbana–Champaign and NHK Science & Technology Research Laboratories. In 1994, Weber demonstrated a color plasma display at an industry convention in San Jose. Panasonic Corporation began a joint development project with Plasmaco, which led in 1996 to the purchase of Plasmaco, its color AC technology, and its American factory for US$26 million. In 1995, Fujitsu introduced the first large-format plasma display panel; it had 852×480 resolution and was progressively scanned. Two years later, Philips introduced at CES and CeBIT the first large commercially available flat-panel TV, using the Fujitsu panels. Philips had plans to sell it for 70,000 French francs. It was released as the Philips 42PW9962. It was available at four Sears locations in the US for $14,999, including in-home installation. Pioneer and Fujitsu also began selling plasma televisions that year, and other manufacturers followed. By the year 2000, prices had dropped to $10,000. 2000s In the year 2000, the first 60-inch (152-cm) plasma display was developed by Plasmaco. Panasonic was also reported to have developed a process to make plasma displays using ordinary window glass instead of the much more expensive "high strain point" glass. High strain point glass is made similarly to conventional float glass, but it is more heat resistant, deforming only at higher temperatures. High strain point glass is normally necessary because plasma displays have to be baked during manufacture to dry the rare-earth phosphors after they are applied to the display. However, high strain point glass may be less scratch resistant. Until the early 2000s, plasma displays were the most popular choice for HDTV flat-panel displays, as they had many benefits over LCDs. Beyond plasma's deeper blacks, increased contrast, faster response time, greater color spectrum, and wider viewing angle, plasma displays were also much bigger than LCDs, and it was believed that LCDs were suited only to smaller-sized televisions. Plasma had overtaken rear-projection systems in 2005. However, improvements in LCD fabrication narrowed the technological gap. The increased size, lower weight, falling prices, and often lower electrical power consumption of LCDs made them competitive with plasma television sets. In 2006, LCD prices started to fall rapidly and their screen sizes increased, although plasma televisions maintained a slight edge in picture quality and a price advantage for sets at the critical 42" size and larger. By late 2006, several vendors were offering 42" LCDs, albeit at a premium price, encroaching upon plasma's only stronghold. More decisively, LCDs offered higher resolutions and true 1080p support, which made up for the price difference, while plasmas were stuck at 720p.
In late 2006, analysts noted that LCDs had overtaken plasmas, particularly in the larger screen sizes where plasma had previously gained market share. Another industry trend was the consolidation of plasma display manufacturers, with around 50 brands available but only five manufacturers. In the first quarter of 2008, a comparison of worldwide TV sales broke down to 22.1 million for direct-view CRT, 21.1 million for LCD, 2.8 million for plasma, and 0.1 million for rear projection. When the sales figures for the 2007 Christmas season were finally tallied, analysts were surprised to find that LCD had outsold not only plasma but also CRTs during the same period. This development drove competing large-screen systems from the market almost overnight. The February 2009 announcement that Pioneer Electronics was ending production of plasma screens was widely considered the tipping point in the technology's history as well. Screen sizes have increased since the introduction of plasma displays. The largest plasma video display in the world at the 2008 Consumer Electronics Show in Las Vegas, Nevada, was a 150-inch unit manufactured by Matsushita Electric Industrial (Panasonic). 2010s At the 2010 Consumer Electronics Show in Las Vegas, Panasonic introduced their 152" 2160p 3D plasma. In 2010, Panasonic shipped 19.1 million plasma TV panels, and global shipments of plasma TVs reached 18.2 million units. Since that time, shipments of plasma TVs have declined substantially. This decline has been attributed to competition from liquid crystal (LCD) televisions, whose prices have fallen more rapidly than those of plasma TVs. In late 2013, Panasonic announced that they would stop producing plasma TVs from March 2014 onwards. In 2014, LG and Samsung discontinued plasma TV production as well, effectively killing the technology, probably because of declining demand. Design A panel of a plasma display typically comprises millions of tiny compartments between two panels of glass. These compartments, or "bulbs" or "cells", hold a mixture of noble gases and a minuscule amount of another gas (e.g., mercury vapor). Just as in the fluorescent lamps over an office desk, when a high voltage is applied across the cell, the gas in the cells forms a plasma. As electricity (electrons) flows through the plasma, some of the electrons strike mercury particles, momentarily increasing the energy level of the atom until the excess energy is shed. Mercury sheds the energy as ultraviolet (UV) photons. The UV photons then strike phosphor that is painted on the inside of the cell. When the UV photon strikes a phosphor molecule, it momentarily raises the energy level of an outer-orbit electron in the phosphor molecule, moving the electron from a stable to an unstable state; the electron then sheds the excess energy as a photon at a lower energy level than UV light; the lower-energy photons are mostly in the infrared range, but about 40% are in the visible-light range. Thus the input energy is converted mostly to infrared light but also to visible light. The screen heats up considerably during operation. Depending on the phosphors used, different colors of visible light can be achieved. Each pixel in a plasma display is made up of three cells comprising the primary colors of visible light. Varying the voltage of the signals to the cells thus allows different perceived colors.
The long electrodes are stripes of electrically conducting material that also lie between the glass plates in front of and behind the cells. The "address electrodes" sit behind the cells, along the rear glass plate, and can be opaque. The transparent display electrodes are mounted in front of the cell, along the front glass plate. As can be seen in the illustration, the electrodes are covered by an insulating protective layer. A magnesium oxide layer may be present to protect the dielectric layer and to emit secondary electrons. Control circuitry charges the electrodes that cross paths at a cell, creating a voltage difference between front and back. Some of the atoms in the gas of a cell then lose electrons and become ionized, which creates an electrically conducting plasma of atoms, free electrons, and ions. The collisions of the flowing electrons in the plasma with the inert gas atoms lead to light emission; such light-emitting plasmas are known as glow discharges. In a monochrome plasma panel, the gas is mostly neon, and the color is the characteristic orange of a neon-filled lamp (or sign). Once a glow discharge has been initiated in a cell, it can be maintained by applying a low-level voltage between all the horizontal and vertical electrodes, even after the ionizing voltage is removed. To erase a cell, all voltage is removed from a pair of electrodes. This type of panel has inherent memory. A small amount of nitrogen is added to the neon to increase hysteresis and thus help with the memory effect. Plasma panels may be built without nitrogen gas, using xenon, neon, argon, and helium instead, with mercury being used in some early displays. In color panels, the back of each cell is coated with a phosphor. The ultraviolet photons emitted by the plasma excite these phosphors, which give off visible light with colors determined by the phosphor materials. This aspect is comparable to fluorescent lamps and to the neon signs that use colored phosphors. Every pixel is made up of three separate subpixel cells, each with a different colored phosphor. One subpixel has a red light phosphor, one subpixel has a green light phosphor and one subpixel has a blue light phosphor. These colors blend together to create the overall color of the pixel, the same as a triad of a shadow-mask CRT or color LCD. Plasma panels use pulse-width modulation (PWM) to control brightness: by varying the pulses of current flowing through the different cells thousands of times per second, the control system can increase or decrease the intensity of each subpixel color to create billions of different combinations of red, green and blue. In this way, the control system can produce most of the visible colors. Plasma displays use the same phosphors as CRTs, which accounts for the extremely accurate color reproduction when viewing television or computer video images (which use an RGB color system designed for CRT displays). To produce light, the cells need to be driven at a relatively high voltage (~300 volts) and the pressure of the gases inside the cell needs to be low (~500 torr). Plasma displays have a wide color gamut and can be produced in fairly large sizes. They had a very low luminance "dark-room" black level compared with the lighter grey of the unilluminated parts of an LCD screen. (As plasma panels are locally lit and do not require a backlight, blacks are blacker on plasmas and greyer on LCDs.) LED-backlit LCD televisions have been developed to reduce this distinction.
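One common way the PWM scheme just described is organized is to split each video frame into binary-weighted subfields, so that an 8-bit intensity maps to a pattern of on/off periods. Below is a minimal sketch of that idea; the plain 8-subfield binary weighting is a simplifying assumption, as real panels use more elaborate weightings to reduce motion artifacts.

```python
# Decompose an 8-bit subpixel intensity into binary-weighted subfields.
# Subfield i lasts 2**i time units; the cell is lit only in the subfields
# whose bits are set, so total lit time is proportional to the intensity.
SUBFIELD_WEIGHTS = [1, 2, 4, 8, 16, 32, 64, 128]

def subfield_pattern(intensity: int) -> list[bool]:
    """Return which of the 8 subfields the cell fires in."""
    assert 0 <= intensity <= 255
    return [bool(intensity & w) for w in SUBFIELD_WEIGHTS]

pattern = subfield_pattern(200)           # e.g. 200 = 128 + 64 + 8
lit_time = sum(w for w, on in zip(SUBFIELD_WEIGHTS, pattern) if on)
print(pattern, lit_time)                  # lit for 200 of 255 time units
```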
The display panel itself is quite thin, generally allowing the device's total thickness, including electronics, to remain small. Power consumption varies greatly with picture content, with bright scenes drawing significantly more power than darker ones; this is also true for CRTs as well as for modern LCDs in which LED backlight brightness is adjusted dynamically. The plasma that illuminates the screen can reach high temperatures. Typical power consumption is around 400 watts for a mid-sized screen. Most screens are set to "vivid" mode by default at the factory (which maximizes the brightness and raises the contrast so the image on the screen looks good under the extremely bright lights that are common in big-box stores); this draws at least twice the power (around 500–700 watts) of a "home" setting of less extreme brightness. The lifetime of the latest generation of plasma displays is estimated at 100,000 hours (11 years) of actual display time, or 27 years at 10 hours per day. This is the estimated time over which maximum picture brightness degrades to half the original value. Plasma screens are made out of glass, which may result in glare on the screen from nearby light sources. Plasma display panels cannot be economically manufactured in small screen sizes. Although a few companies have been able to make small plasma enhanced-definition televisions (EDTVs), even fewer have made 32-inch (81-cm) plasma HDTVs. With the trend toward large-screen television technology, the 32-inch (81-cm) screen size was rapidly disappearing by mid-2009. Though considered bulky and thick compared with their LCD counterparts, some sets, such as Panasonic's Z1 and Samsung's B860 series, are slim enough to be comparable to LCDs in this respect. Plasma displays are generally heavier than LCDs and may require more careful handling, such as being kept upright. Plasma displays use more electrical power, on average, than an LCD TV using an LED backlight. Older CCFL backlights for LCD panels used quite a bit more power, and older plasma TVs used quite a bit more power than recent models. Plasma displays also do not work as well at high altitudes, due to the pressure differential between the gases inside the screen and the air pressure at altitude, which may cause a buzzing noise. Manufacturers rate their screens to indicate the altitude parameters. For those who wish to listen to AM radio, or are amateur radio operators (hams) or shortwave listeners (SWLs), the radio frequency interference (RFI) from these devices can be irritating or disabling. In their heyday, plasma displays were less expensive for the buyer per square inch than LCDs, particularly when considering equivalent performance. Plasma displays have wider viewing angles than those of LCDs; images do not suffer from degradation at off-axis viewing angles as they do on LCDs. LCDs using IPS technology have the widest angles, but they do not equal the range of plasma, primarily due to "IPS glow", a generally whitish haze that appears due to the nature of the IPS pixel design. Plasma displays have less visible motion blur, thanks in large part to very high refresh rates and a faster response time, contributing to superior performance when displaying content with significant amounts of rapid motion, such as auto racing, hockey or baseball. Plasma displays have superior uniformity to LCD panel backlights, which nearly always produce uneven brightness levels, although this is not always noticeable.
High-end computer monitors have technologies to try to compensate for the uniformity problem. Contrast ratio Contrast ratio is the difference between the brightest and darkest parts of an image, measured in discrete steps, at any given moment. Generally, the higher the contrast ratio, the more realistic the image is (though the "realism" of an image depends on many factors including color accuracy, luminance linearity, and spatial linearity). Contrast ratios for plasma displays are often advertised as high as 5,000,000:1. On the surface, this is a significant advantage of plasma over most other current display technologies, a notable exception being organic light-emitting diode. Although there are no industry-wide guidelines for reporting contrast ratio, most manufacturers follow either the ANSI standard or perform a full-on-full-off test. The ANSI standard uses a checkered test pattern whereby the darkest blacks and the lightest whites are simultaneously measured, yielding the most accurate "real-world" ratings. In contrast, a full-on-full-off test measures the ratio using a pure black screen and a pure white screen, which gives higher values but does not represent a typical viewing scenario. Some displays, using many different technologies, have some "leakage" of light, through either optical or electronic means, from lit pixels to adjacent pixels, so that dark pixels that are near bright ones appear less dark than they do during a full-off display. Manufacturers can further artificially improve the reported contrast ratio by increasing the contrast and brightness settings to achieve the highest test values. However, a contrast ratio generated by this method is misleading, as content would be essentially unwatchable at such settings. Each cell on a plasma display must be precharged before it is lit, otherwise the cell would not respond quickly enough. Precharging normally increases power consumption, so energy-recovery mechanisms may be in place to avoid an increase in power consumption. This precharging means the cells cannot achieve a true black, whereas an LED-backlit LCD panel can actually turn off parts of the backlight, in "spots" or "patches" (this technique, however, does not prevent the large accumulated passive light of adjacent lamps, and the reflection media, from returning values from within the panel). Some manufacturers have reduced the precharge and the associated background glow to the point where black levels on modern plasmas are starting to become close to those of some high-end CRTs that Sony and Mitsubishi produced ten years before the comparable plasma displays. With an LCD, black pixels are generated by a light-polarization method; many panels are unable to completely block the underlying backlight. More recent LCD panels using LED illumination can automatically reduce the backlighting on darker scenes, though this method cannot be used in high-contrast scenes, leaving some light showing from black parts of an image with bright parts, such as (at the extreme) a solid black screen with one fine intense bright line. This is called a "halo" effect, which has been minimized on newer LED-backlit LCDs with local dimming. Edge-lit models cannot compete with this, as the light is reflected via a light guide to distribute the light behind the panel. Plasma displays are capable of producing deeper blacks than LCDs, allowing for a superior contrast ratio.
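The difference between the two measurement methods described above can be made concrete. Below is a minimal sketch using made-up luminance readings (cd/m²) purely for illustration.

```python
# ANSI contrast: mean white luminance over mean black luminance, measured
# simultaneously on a checkerboard. Full-on/full-off: one all-white frame
# against one all-black frame. All readings are invented for illustration.

ansi_whites = [180.0, 175.0, 182.0, 178.0]  # cd/m^2, white patches
ansi_blacks = [0.9, 1.1, 1.0, 0.8]          # cd/m^2, black patches (leakage)

ansi_contrast = (sum(ansi_whites) / len(ansi_whites)) / (
    sum(ansi_blacks) / len(ansi_blacks))

full_on = 200.0   # cd/m^2, all-white frame
full_off = 0.02   # cd/m^2, all-black frame (no adjacent lit pixels)
full_contrast = full_on / full_off

print(f"ANSI: {ansi_contrast:.0f}:1, full-on/full-off: {full_contrast:.0f}:1")
# The full-on/full-off figure is far larger, as the text explains.
```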
Earlier-generation displays (circa 2006 and prior) had phosphors that lost luminosity over time, resulting in gradual decline of absolute image brightness. Newer models have advertised lifespans exceeding 100,000 hours (11 years), far longer than older CRTs. Screen burn-in Image burn-in occurs on CRTs and plasma panels when the same picture is displayed for long periods. This causes the phosphors to overheat, losing some of their luminosity and producing a "shadow" image that is visible with the power off. Burn-in is especially a problem on plasma panels because they run hotter than CRTs. Early plasma televisions were plagued by burn-in, making it impossible to use video games or anything else that displayed static images. Plasma displays also exhibit another image-retention issue which is sometimes confused with screen burn-in damage. In this mode, when a group of pixels are run at high brightness (when displaying white, for example) for an extended period, a charge build-up in the pixel structure occurs and a ghost image can be seen. However, unlike burn-in, this charge build-up is transient and self-corrects after the image condition that caused the effect has been removed and a long enough period has passed (with the display either off or on). Plasma manufacturers have tried various ways of reducing burn-in, such as using gray pillarboxes, pixel orbiters and image-washing routines. Recent models have a pixel orbiter that moves the entire picture more slowly than is noticeable to the human eye, which reduces the effect of burn-in but does not prevent it. None to date have eliminated the problem, and all plasma manufacturers continue to exclude burn-in from their warranties. Screen resolution Fixed-pixel displays such as plasma TVs scale the video image of each incoming signal to the native resolution of the display panel. The most common native resolutions for plasma display panels are 852×480 (EDTV), 1366×768 and 1920×1080 (HDTV). As a result, picture quality varies depending on the performance of the video scaling processor and the upscaling and downscaling algorithms used by each display manufacturer. Early plasma televisions were enhanced-definition (ED), with a native resolution of 840×480 (discontinued) or 852×480, and down-scaled their incoming high-definition video signals to match their native display resolutions. The following ED resolutions were common prior to the introduction of HD displays, but have long been phased out in favor of HD displays, partly because the overall pixel count in ED displays is lower than the pixel count of SD PAL displays (852×480 vs 720×576, respectively): 840×480p, 852×480p. Early high-definition (HD) plasma displays had a resolution of 1024×1024 and were alternate lighting of surfaces (ALiS) panels made by Fujitsu and Hitachi. These were interlaced displays, with non-square pixels. Later HDTV plasma televisions usually have a resolution of 1024×768 (found on many 42-inch (107-cm) plasma screens), 1280×768 or 1366×768 (found on 50 in, 60 in, and 65 in plasma screens), or 1920×1080 (found on plasma screen sizes from 42 to 103 inches (107-262 cm)). These displays are usually progressive displays, with non-square pixels, and will up-scale and de-interlace their incoming standard-definition signals to match their native display resolutions. 1024×768 resolution requires that 720p content be downscaled in one direction and upscaled in the other.
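That last point about 1024×768 panels is easy to see numerically. Below is a minimal sketch computing the per-axis scaling factors for a 1280×720 (720p) signal on a few of the native panel resolutions mentioned above.

```python
# Per-axis scale factors when mapping a source image onto a fixed-pixel panel.
# A factor below 1.0 means downscaling; above 1.0 means upscaling.

def scale_factors(src, panel):
    (sw, sh), (pw, ph) = src, panel
    return pw / sw, ph / sh

source_720p = (1280, 720)
for panel in [(1024, 768), (1366, 768), (1920, 1080)]:
    fx, fy = scale_factors(source_720p, panel)
    print(f"{panel}: horizontal x{fx:.2f}, vertical x{fy:.2f}")
# (1024, 768): horizontal x0.80, vertical x1.07 -> down one axis, up the other
```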
Notable manufacturers Fujitsu (only produced panels) Chunghwa Picture Tubes (only produced panels) Formosa Plastics (only produced panels) Hitachi (produced panels) LG (produced panels) Panasonic Viera (produced panels) Pioneer (produced panels) Samsung (produced panels) Toshiba (produced panels) Environmental impact Plasma screens use significantly more energy than CRT and LCD screens. See also References External links "Plasma display panels: The colorful history of an Illinois technology" by Jamie Hutchinson, Electrical and Computer Engineering Alumni News, Winter 2002–2003 (via archive.org) NYTimes.com – Forget L.C.D.; Go for Plasma, Says Maker of Both, according to Panasonic Corporation Home Theater Geeks – 13: Plasma Geek Out (audio podcast) Display technology American inventions Hungarian inventions
Plasma display
[ "Engineering" ]
5,240
[ "Electronic engineering", "Display technology" ]
175,875
https://en.wikipedia.org/wiki/Critical%20mass
In nuclear engineering, a critical mass is the smallest amount of fissile material needed for a sustained nuclear chain reaction. The critical mass of a fissionable material depends upon its nuclear properties (specifically, its nuclear fission cross-section), density, shape, enrichment, purity, temperature, and surroundings. The concept is important in nuclear weapon design.

Point of criticality

When a nuclear chain reaction in a mass of fissile material is self-sustaining, the mass is said to be in a critical state, in which there is no increase or decrease in power, temperature, or neutron population. A numerical measure of criticality is the effective neutron multiplication factor k, the average number of neutrons released per fission event that go on to cause another fission event rather than being absorbed or leaving the material.

A subcritical mass is a mass that cannot sustain a fission chain reaction. A population of neutrons introduced to a subcritical assembly will decrease exponentially. This condition is known as subcriticality, k < 1.

A critical mass is a mass of fissile material that self-sustains a fission chain reaction. This condition is known as criticality, k = 1. A steady rate of spontaneous fission causes a proportionally steady level of neutron activity.

A supercritical mass is a mass in which, once fission has started, the reaction proceeds at an increasing rate. This condition is known as supercriticality, k > 1. The constant of proportionality increases as k increases. The material may settle into equilibrium (i.e. become critical again) at an elevated temperature/power level or destroy itself.

Due to spontaneous fission, a supercritical mass will undergo a chain reaction. For example, a spherical critical mass of pure uranium-235 (235U), with a mass of about 52 kg, would experience around 15 spontaneous fission events per second. The probability that one such event will cause a chain reaction depends on how much the mass exceeds the critical mass. If uranium-238 (238U) is present, the rate of spontaneous fission will be much higher. Fission can also be initiated by neutrons produced by cosmic rays.

Changing the point of criticality

The mass at which criticality occurs may be changed by modifying certain attributes such as fuel, shape, temperature, density and the installation of a neutron-reflective substance. These attributes have complex interactions and interdependencies. The following examples only outline the simplest ideal cases.

Varying the amount of fuel

It is possible for a fuel assembly to be critical at near zero power. If the perfect quantity of fuel were added to a slightly subcritical mass to create an "exactly critical mass", fission would be self-sustaining for only one neutron generation (fuel consumption then makes the assembly subcritical again). Similarly, if the perfect quantity of fuel were added to a slightly subcritical mass to create a barely supercritical mass, the temperature of the assembly would increase to an initial maximum (for example: 1 K above the ambient temperature) and then decrease back to the ambient temperature after a period of time, because fuel consumed during fission brings the assembly back to subcriticality once again.

Changing the shape

A mass may be exactly critical without being a perfect homogeneous sphere. More closely refining the shape toward a perfect sphere will make the mass supercritical. Conversely, changing the shape to a less perfect sphere will decrease its reactivity and make it subcritical.
Changing the temperature

A mass may be exactly critical at a particular temperature. Fission and absorption cross-sections increase as the relative neutron velocity decreases. As fuel temperature increases, neutrons of a given energy appear faster relative to the nuclei, and thus fission/absorption becomes less likely. This effect is related to Doppler broadening of the 238U resonances, but it is common to all fuels, absorbers, and configurations. Neglecting the very important resonances, the total neutron cross-section of every material exhibits an inverse relationship with relative neutron velocity. Hot fuel is always less reactive than cold fuel (over- or under-moderation in light-water reactors is a separate topic). Thermal expansion associated with a temperature increase also contributes a negative coefficient of reactivity, since fuel atoms move farther apart. A mass that is exactly critical at room temperature would be sub-critical in an environment anywhere above room temperature, due to thermal expansion alone.

Varying the density of the mass

The higher the density, the lower the critical mass. The density of a material at a constant temperature can be changed by varying the pressure or tension or by changing its crystal structure (see allotropes of plutonium). An ideal mass will become subcritical if allowed to expand; conversely, the same mass will become supercritical if compressed. Changing the temperature may also change the density; however, the effect on critical mass is then complicated by temperature effects (see "Changing the temperature") and by whether the material expands or contracts with increased temperature. Assuming the material expands with temperature (enriched uranium-235 at room temperature, for example), an exactly critical mass will become subcritical if warmed to a lower density or supercritical if cooled to a higher density. Such a material is said to have a negative temperature coefficient of reactivity, indicating that its reactivity decreases when its temperature increases. Using such a material as fuel means fission decreases as the fuel temperature increases.

Use of a neutron reflector

Surrounding a spherical critical mass with a neutron reflector further reduces the mass needed for criticality. A common material for a neutron reflector is beryllium metal. The reflector reduces the number of neutrons which escape the fissile material, resulting in increased reactivity.

Use of a tamper

In a bomb, a dense shell of material surrounding the fissile core contains, via inertia, the expanding fissioning material, which increases the efficiency. Such a shell is known as a tamper. A tamper also tends to act as a neutron reflector. Because a bomb relies on fast neutrons (not ones moderated by reflection with light elements, as in a reactor), the neutrons reflected by a tamper are slowed by their collisions with the tamper nuclei, and because it takes time for the reflected neutrons to return to the fissile core, they take rather longer to be absorbed by a fissile nucleus. But they do contribute to the reaction, and can decrease the critical mass by a factor of four. Also, if the tamper is (e.g. depleted) uranium, it can fission due to the high-energy neutrons generated by the primary explosion. This can greatly increase yield, especially if even more neutrons are generated by fusing hydrogen isotopes, in a so-called boosted configuration.

Critical size

The critical size is the minimum size of a nuclear reactor core or nuclear weapon that can be made for a specific geometrical arrangement and material composition.
The critical size must at least include enough fissionable material to reach critical mass. If the size of the reactor core is less than a certain minimum, too many fission neutrons escape through its surface and the chain reaction is not sustained.

Critical mass of a bare sphere

The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses of some actinides at normal density have been tabulated in declassified sources; most information on bare-sphere masses remains classified, since it is critical to nuclear weapons design. The critical mass for lower-grade uranium depends strongly on the grade, rising steeply as the enrichment falls from 45% through 19.75% to 15% 235U; in all of these cases, the use of a neutron reflector such as beryllium can substantially reduce the required amount.

The critical mass is inversely proportional to the square of the density: if the density is 1% greater and the mass 2% less, then the volume is 3% less and the diameter 1% less. The probability per centimetre travelled that a neutron hits a nucleus is proportional to the density, so 1% greater density means that the distance travelled before leaving the system is 1% less. This must be taken into consideration when attempting more precise estimates of critical masses of plutonium isotopes than the approximate values given above, because plutonium metal has a large number of different crystal phases which can have widely varying densities.

Not all neutrons contribute to the chain reaction; some escape and others undergo radiative capture. Let q denote the probability that a given neutron induces fission in a nucleus. Consider only prompt neutrons, and let ν denote the number of prompt neutrons generated in a nuclear fission. For example, ν ≈ 2.5 for uranium-235. Criticality then occurs when ν·q = 1. The dependence of this upon geometry, mass, and density appears through the factor q.

Given a total interaction cross section σ (typically measured in barns), the mean free path of a prompt neutron is ℓ = 1/(n σ), where n is the nuclear number density. Most interactions are scattering events, so a given neutron obeys a random walk until it either escapes from the medium or causes a fission reaction. So long as other loss mechanisms are not significant, the radius of a spherical critical mass is rather roughly given by the product of the mean free path and the square root of one plus the number of scattering events per fission event (call this s), since the net distance travelled in a random walk is proportional to the square root of the number of steps:

R_c ≈ ℓ √(s + 1) = √(s + 1) / (n σ)

Note again, however, that this is only a rough estimate. In terms of the total mass M, the nuclear mass m, the density ρ, and a fudge factor f which takes into account geometrical and other effects, criticality corresponds to

M_c = f · (4π/3) ρ R_c³ = f · (4π/3) · m³ (s + 1)^(3/2) / (ρ² σ³),

which clearly recovers the aforementioned result that critical mass depends inversely on the square of the density. Alternatively, one may restate this more succinctly in terms of the areal density of mass, Σ:

Σ = ρ R_c = f′ · m √(s + 1) / σ,

where the factor f has been rewritten as f′ to account for the fact that the two values may differ depending upon geometrical effects and how one defines Σ. For example, for a bare solid sphere of 239Pu, criticality is at 320 kg/m², regardless of density, and for 235U at 550 kg/m². In any case, criticality then depends upon a typical neutron "seeing" an amount of nuclei around it such that the areal density of nuclei exceeds a certain threshold.
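A minimal numerical sketch of two relations developed above: the generation-by-generation behaviour of a neutron population under the multiplication factor k introduced at the start of this article, and the inverse-square dependence of critical mass on density. All inputs (starting population, generation count) are arbitrary illustrative values, not figures from the text.

```python
# Sketch of two relations from the text; all inputs are illustrative.

def neutron_population(k, generations, n0=1000.0):
    """Neutron count per generation under multiplication factor k:
    decays for k < 1, holds steady for k = 1, grows for k > 1."""
    counts = [n0]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

for k in (0.9, 1.0, 1.1):
    final = neutron_population(k, 50)[-1]
    print(f"k = {k}: population after 50 generations = {final:.1f}")

# Critical mass scales as 1/density^2. Check the worked example from
# the text: density 1% higher -> mass 2% less, volume 3% less,
# diameter 1% less.
rho_ratio = 1.01
mass_ratio = rho_ratio ** -2           # M_c proportional to rho^-2
volume_ratio = mass_ratio / rho_ratio  # V = M / rho
diameter_ratio = volume_ratio ** (1.0 / 3.0)
print(f"mass x{mass_ratio:.3f}, volume x{volume_ratio:.3f}, "
      f"diameter x{diameter_ratio:.3f}")
# -> mass x0.980, volume x0.971, diameter x0.990
```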
This is applied in implosion-type nuclear weapons, where a spherical mass of fissile material that is substantially less than a critical mass is made supercritical by very rapidly increasing ρ (and thus Σ as well) (see below). Indeed, sophisticated nuclear weapons programs can make a functional device from less material than more primitive weapons programs require.

Aside from the math, there is a simple physical analog that helps explain this result. Consider diesel fumes belched from an exhaust pipe. Initially the fumes appear black; then, as they disperse, one gradually becomes able to see through them without any trouble. This is not because the total scattering cross section of all the soot particles has changed, but because the soot has dispersed. If we consider a transparent cube of length L on a side, filled with soot, then the optical depth of this medium is inversely proportional to the square of L, and therefore proportional to the areal density of soot particles: we can make it easier to see through the imaginary cube just by making the cube larger.

Several uncertainties contribute to the determination of a precise value for critical masses, including (1) detailed knowledge of fission cross sections and (2) calculation of geometric effects. The latter problem provided significant motivation for the development of the Monte Carlo method in computational physics by Nicholas Metropolis and Stanislaw Ulam. In fact, even for a homogeneous solid sphere, the exact calculation is by no means trivial. The calculation can also be performed by assuming a continuum approximation for the neutron transport, which reduces it to a diffusion problem; however, as the typical linear dimensions are not significantly larger than the mean free path, such an approximation is only marginally applicable.

Finally, note that for some idealized geometries the critical mass might formally be infinite, and other parameters are then used to describe criticality. For example, consider an infinite sheet of fissionable material. For any finite thickness, this corresponds to an infinite mass. However, criticality is only achieved once the thickness of this slab exceeds a critical value.

Criticality in nuclear weapon design

Until detonation is desired, a nuclear weapon must be kept subcritical. In the case of a uranium gun-type bomb, this can be achieved by keeping the fuel in a number of separate pieces, each below the critical size either because they are too small or unfavorably shaped. To produce detonation, the pieces of uranium are brought together rapidly. In Little Boy, this was achieved by firing a piece of uranium (a "doughnut") down a gun barrel onto another piece (a "spike"). This design is referred to as a gun-type fission weapon.

A theoretical 100% pure 239Pu weapon could also be constructed as a gun-type weapon, like the Manhattan Project's proposed Thin Man design. In reality, this is impractical because even "weapons grade" 239Pu is contaminated with a small amount of 240Pu, which has a strong propensity toward spontaneous fission. Because of this, a reasonably sized gun-type weapon would suffer a premature nuclear reaction (predetonation) before the masses of plutonium were in position for a full-fledged explosion to occur.
Instead, the plutonium is present as a subcritical sphere (or other shape), which may or may not be hollow. Detonation is produced by exploding a shaped charge surrounding the sphere, increasing the density (and collapsing the cavity, if present) to produce a prompt critical configuration. This is known as an implosion-type weapon.

Prompt criticality

To sustain a chain reaction, a fission event must release, on average, more than one free neutron of the desired energy level, and each of these must find other nuclei and cause them to fission. Most of the neutrons released from a fission event come immediately from that event, but a fraction of them come later, when the fission products decay, which may be anywhere from microseconds to minutes afterwards. This is fortunate for atomic power generation, for without this delay "going critical" would be an immediately catastrophic event, as it is in a nuclear bomb, where upwards of 80 generations of chain reaction occur in less than a microsecond, far too fast for a human, or even a machine, to react. Physicists recognize two significant points in the gradual increase of neutron flux: critical, where the chain reaction becomes self-sustaining thanks to the contributions of both kinds of neutron generation, and prompt critical, where the immediate "prompt" neutrons alone sustain the reaction without need for the decay neutrons. Nuclear power plants operate between these two points of reactivity, while the region above the prompt critical point is the domain of nuclear weapons, pulsed reactor designs such as TRIGA research reactors and the pulsed nuclear thermal rocket, and some nuclear power accidents, such as the 1961 US SL-1 accident and the 1986 Soviet Chernobyl disaster.

See also

Criticality (status)
Criticality accident
Nuclear criticality safety
Geometric and material buckling
Critical mass
[ "Physics", "Chemistry", "Mathematics" ]
3,195
[ "Nuclear fission", "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Size", "Nuclear technology", "Radioactivity", "Nuclear physics", "Wikipedia categories named after physical quantities", "Matter" ]
175,885
https://en.wikipedia.org/wiki/Robot%20control
Robotic control is the system that governs the movement of robots, encompassing both the mechanical aspects and the programmable systems that make control possible. Robots can be controlled by various means, including manual, wireless, semi-autonomous (a mix of fully automatic and wireless control), and fully autonomous (using artificial intelligence) operation.

Modern robots (2000–present)

Medical and surgical

In the medical field, robots are used to make precise movements that are difficult for humans. Robotic surgery involves the use of less-invasive surgical methods, which are "procedures performed through tiny incisions". Robots use the da Vinci surgical system, which combines robotic arms (which hold the surgical instruments) with a camera. The surgeon sits at a console and controls the robot remotely. The feed from the camera is projected on a monitor, allowing the surgeon to see the incisions. The system is built to mimic the movement of the surgeon's hands and can filter out slight hand tremors. But despite the visual feedback, there is no physical feedback: as the surgeon applies force at the console, he or she cannot feel how much pressure is being applied to the tissue.

Military

The earliest robots used in the military date back to the 19th century, when automatic weapons were on the rise due to developments in mass production. The first automated weapons were used in World War I, including radio-controlled, unmanned aerial vehicles (UAVs). Since then, the technology of ground and aerial robotic weapons has continued to develop, and such weapons have become part of modern warfare. In the transition phase of this development, the robots were semi-automatic, able to be controlled remotely by a human operator. Advancements in sensors and processors led to advances in the capabilities of military robots. Artificial intelligence (A.I.) technology began to develop in the mid-20th century; in the 21st century the technology transferred to warfare, and weapons that were once semi-autonomous are developing into lethal autonomous weapons systems, LAWS for short.

Impact

As weapons are developed to become fully autonomous, the line that separates an enemy combatant from a civilian becomes ambiguous. There is an ongoing debate over whether artificial intelligence can differentiate such enemies, and over what is morally and humanely right (consider, for example, a child unknowingly working for the enemy).

Space exploration

Space missions involve sending robots into space with the goal of discovering more of the unknown. The robots used in space exploration have been controlled semi-autonomously: they can maneuver themselves and are self-sustaining. To allow for data collection and controlled research, the robot is always in communication with scientists and engineers on Earth. For the National Aeronautics and Space Administration's (NASA) Curiosity rover, which is part of its Mars exploration program, communication between the rover and the operators is made possible by "an international network of antennas that…permits constant observation of spacecraft as the Earth rotates on its own axis".

Artificial intelligence

Artificial intelligence (AI) is used in robotic control to enable a robot to process and adapt to its surroundings. Such a robot can be programmed to perform a certain task, for instance, to walk up a hill.
The technology is relatively new and is being experimented with in several fields, such as the military.

Boston Dynamics' robots

Boston Dynamics' "Spot" is an autonomous robot that uses four sensors to map where it is relative to its surroundings. The navigational method is called simultaneous localization and mapping, or "SLAM" for short. Spot has several operating modes and, depending on the obstacles in front of it, can override its manual mode and perform actions autonomously. This is similar to other robots made by Boston Dynamics, like "Atlas", which uses similar methods of control. When Atlas is being controlled, the control software "doesn't explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot's body and how it interacts with the environment". Instead of inputting data into every single joint of the robot, the engineers programmed the robot as a whole, which makes it more capable of adapting to its environment.

See also

Synthetic Neural Modeling
Control theory
Cybernetics
Remote-control vehicle
Mobile robot navigation
Robot kinematics
Simultaneous localization and mapping
Robot locomotion
Motion planning
Robot learning
Vision Based Robot Control
Robot control
[ "Engineering" ]
964
[ "Robotics engineering", "Robot control" ]
175,899
https://en.wikipedia.org/wiki/Robot%20%28camera%29
Robot was a German imaging company known originally for clockwork cameras, and later for surveillance (Traffipax) and bank security cameras. Originally created in 1934 as a brand of Otto Berning, it became part of the Jenoptik group of optical companies in 1999, and specializes in traffic surveillance today. The motorized amateur cameras powered by clockwork (spring) motors were first made in 1934, and production ended with a special limited-edition collector's model, the "Star Classic", in 1996.

The Robot film cameras used 35 mm film, mostly in the square 24 × 24 mm image format, but many used 18 × 24 mm (half-frame) and 24 × 36 mm (standard Leica format), and non-standard formats such as 6 × 24 mm (Recorder 6), 12 × 24 mm (Recorder 12) and 16 × 16 mm (Robot SC).

Camera models

Robot I

Around 1930 Heinz Kilfitt, a trained watchmaker, designed a new 35 mm film compact camera using a 24 × 24 mm frame format (instead of the Leica 24 × 36 mm or cine 18 × 24 mm formats). The 24 × 24 mm square frame provided many advantages, including allowing over 50 exposures per standard roll of Leica film instead of 36 (a rough check of this arithmetic appears at the end of this section). Kodak and Agfa rejected the design, and it was sold to Hans Berning, who set up the Otto Berning firm. Otto Berning was granted its first Robot patent in 1934; a US patent was granted in 1936.

The camera was originally intended to come in two versions: the Robot I, without a motor, and the Robot II, with a spring motor. Its release was delayed, and already the first camera, the "Robot I", included the hallmark spring motor. The first production cameras had a stainless steel body, a spring drive that could shoot 4 frames per second, and a rotary shutter with speeds from 1 to 1/500th of a second. The camera used proprietary "Type K" cartridges, not the now-standard 35 mm cartridges introduced in the same year by Kodak's Dr. August Nagel Kamerawerk for the Retina. The camera does not have a rangefinder, as it was designed for use mostly with short-focal-length lenses (e.g. 40 mm) with great depth of field.

The Robot I was quite small, the body measuring 108 mm (4¼ inches) long, 63 mm (2½ inches) high, and 32 mm (1¼ inches) deep. A very sharp zone-focusing f/2.8, 3.25 cm Zeiss Tessar lens added 12.5 mm (½ inch) to the camera depth. It was about the size of the much later Olympus Stylus, although it weighed about 567 grams (20 ounces), approximately the weight of a modern SLR. The die-cast zinc and stamped stainless steel body was crammed with clockwork. A spring motor on the top plate provided the driving force for a rotary behind-the-lens shutter and a sprocket film drive.

The film was loaded into cassettes in a darkroom or changing bag. The cassettes appear to be based on the Agfa Memo cassette design, the now-standard Kodak 35 mm cassette not yet being popular in Germany. In place of the velvet light trap on modern cassettes, the Robot cassette used spring pressure and felt pads to close the film passage. When the camera back was shut, the pads were compressed, opening the passage so the film could travel freely from one cassette to another.

The rotary shutter and the film drive are like those used in cine cameras. When the shutter release is pressed, a light-blocking shield lifts and the shutter disc rotates a full turn, exposing the film through its open sector; when the pressure is released, the light-blocking shield returns to its position behind the lens, and the spring motor advances the film and recocks the shutter. This is almost instantaneous. With practice a photographer could take 4 or 5 pictures a second.
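As referenced above, a quick check of the 50-exposures claim. The 2 mm inter-frame gap and the film length implied by a 36-exposure Leica roll are illustrative assumptions, not figures from the text:

```python
# Rough check: exposures per roll for 24x36 vs 24x24 frames.
# Assumes a 2 mm inter-frame gap (an illustrative assumption).

leica_pitch = 36 + 2                 # mm of film per 24x36 frame
robot_pitch = 24 + 2                 # mm of film per 24x24 frame
usable_length = 36 * leica_pitch     # length of a 36-exposure roll

print(f"usable film length: {usable_length} mm")
print(f"24x24 exposures on the same roll: {usable_length // robot_pitch}")
# -> 1368 mm of film yields 52 square frames, consistent with the
#    "over 50 exposures per standard roll" claim.
```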
Each winding of the spring motor was good for about 25 pictures, half a roll of film. Shutter speed was determined by spring tension and mechanical delay, since the exposure sector was fixed. The Robot I had an exposure range of 1 to 1/500 of a second, and provision for time exposures.

The camera had other features not specifically related to action photography. The small optical viewfinder could be rotated 90 degrees to permit pictures to be taken in one direction while the photographer was facing in another. When the viewfinder was rotated, the scene was viewed through a deep purple filter similar to those used by cinematographers to judge the black-and-white contrast of an image. The camera had a built-in deep yellow filter which could be positioned behind the lens.

Robot II

In 1938 Berning introduced the Robot II, a slightly larger camera with some significant improvements but still using the basic mechanism. Among the standard lenses were a 3 cm Zeiss Tessar and a 3.75 cm Zeiss Tessar in f/2.8 and f/3.5 variations, an f/2.0, 40 mm Zeiss Biotar and an f/4, 7.5 cm Zeiss Sonnar. The film cassette system was redesigned, and the 1951 IIa accepted a standard 35 mm cassette. The special Robot type-N cassettes continued to be used for take-up. A small Bakelite box was sold to allow colour film to be rewound into the original cassettes, as required by film processing companies. The camera was synchronized for flash. The swinging viewfinder was retained, but was now operated by a lever rather than by moving the entire housing. Both the deep purple and yellow filters were eliminated in the redesign. Some versions were available with a double-wind motor which could expose 50 frames on one winding.

Civilian versions of the Robot were discontinued at the outset of the Second World War, but the camera was used for bomb damage assessment by the Luftwaffe, mounted in the tail of Ju 87 (Stuka) dive bombers. This was an electrically driven camera using large cassettes holding possibly as many as 300 24 × 24 mm images. Unlike the central Leica 250GG camera in the Ju 87, which was switched on automatically when the dive brakes were applied, the Robot camera had to be switched on manually. In the stress of the automatic pull-out, when it was not uncommon for the pilot to black out from the g levels, switching on the bomb damage assessment camera was frequently forgotten.

Robot Star and Junior

In the 1950s Robot introduced the Robot Star. Film could now be rewound into the feed cassette in the camera, as in other 35 mm cameras. Robot then introduced the "Junior", an economy model with the quality and almost all the features of the "Star" but without the angle finder or the rewind mechanism.

In the late 1950s the company, now called Robot-Berning, redesigned the Robot Star and created the Vollautomat Star II. The length stayed the same but the height increased by 12.5 mm (half an inch). The new, higher top housing dispensed with the right-angle finder and instead included an Albada finder with frames for the factory-fitted 38/40 mm and 75 mm lenses. The drive and shutter were improved as well. By 1960 the hallmark stamped steel body had been replaced by heavier die castings. The camera became, with slight changes, the Robot Star 25 and Star 50. The Robot Star 25 could expose 25 frames on a single winding, and the double-motor Robot Star 50 could expose 50 frames. Since most Robot cameras by then were sold for industrial use, where the camera was fixed in position, Robot also introduced versions without a finder, and even without rewind.
Although most production dates from the 1950s–1960s era, essentially the same camera continued to be manufactured into the late 1990s.

Robot Royal

Robot-Berning also produced enlarged versions of the Robot, the Robot Royal 18, 24 and 36, with a built-in rangefinder and an autoburst mode of operation capable of shooting 6 frames per second. The camera was about the size of a Leica M3 and weighed 907 g (almost 2 pounds). It was equipped with a Schneider Xenar 45 mm f/2.8 lens. The Robot Royal 36 took a standard 35 mm still picture but was identical to the Royal 24 in all other regards. These cameras retained the behind-the-lens rotary shutter with speeds from 1/2 to 1/500 s.

The Robot Royal II was a larger viewfinder camera with no rangefinder and no burst mode; it was essentially a stripped-down Robot Royal III. The Robot Royal III had a mainspring that, when wound, allowed the camera to take 4 to 5 pictures in succession, a built-in rangefinder, and eight interchangeable bayonet-mount lenses. There were two versions: the Robot Royal 36, producing 36 24 × 36 mm images on a roll of 135 film, and the Robot Royal 24, producing 50 24 × 24 mm images on 135 film.

A version for instrumentation (and traffic) use was also created on the basis of the Royal design: the Recorder. These cameras were like the Royal but without viewfinder or rangefinder. They had interfaces to motors and detachable backs to support bulk film cassettes. A special parallel series of the Royal was also available that included these features. While the Royal had only limited market success, the Recorder was well accepted. It became the centerpiece of the company's portable document capture, traffic control and security solutions, and continues to be the standard Robot camera for instrumentation applications.

Military and government models

During the Second World War, specially adapted models were made for the German Luftwaffe. During the Cold War, Robots had a large following in the espionage business. The small camera could be concealed in a briefcase or a handbag, the lens poking through a decorative hole, and activated repeatedly by a cable release concealed in the handle. The company was well aware of this market and produced a variety of accessories which made the camera even more suitable for covert photography.

Sequence photography

While the Robots were capable of sequence photography, the shutter that made this possible placed some constraints upon taking lenses and shutter speeds. To reach speeds as high as 1/500 of a second, the inertia of the thin vulcanite shutter disc had to be kept to a minimum, requiring a small-diameter disc with a minimal sector opening (a small numerical sketch below illustrates the geometry). The screw-in lens mount was 26 mm in diameter, and the clear lens opening was only 20 mm. In contrast, Leica's mount, at 39 mm, was half again as large. Further, to permit lens interchangeability, the shutter was mounted behind the lens, so the disc interrupted the expanding light cone. This placed some limits on lens design. While the 75 mm Sonnar could be used with the aperture set to f/22, the Tele-Xenar suffered from some shutter-disc vignetting unless opened further. The maximum focal length for general photographic use that could be fitted with acceptable vignetting was 75 mm, although telephotos up to 600 mm were offered. A 150 mm Tele-Xenar was available for long-distance action photography, but it produced a vignetted circular image on the 24 × 24 mm frame. The lack of a rangefinder on the Robot and Robot Star required zone focusing of these long lenses: every shot had to be estimated or pre-measured.
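As referenced above, a small sketch of the rotary-shutter geometry: with a fixed open sector, the exposure time is the sector's fraction of a full turn multiplied by the rotation period, so short exposures demand a fast-spinning, and therefore light, low-inertia disc. The 30° sector angle is an illustrative assumption, not a figure from the text.

```python
# Rotary shutter: exposure = (sector_angle / 360) * rotation_period,
# so the required disc speed rises with the shutter speed.
# The sector angle below is an illustrative assumption.

def revs_per_second(exposure_s, sector_deg=30.0):
    """Disc revolutions per second needed for a given exposure time
    with a fixed open sector of sector_deg degrees."""
    period = exposure_s * 360.0 / sector_deg  # time for one full turn
    return 1.0 / period

for denominator in (60, 125, 250, 500):
    rps = revs_per_second(1.0 / denominator)
    print(f"1/{denominator} s -> {rps:.1f} rev/s")
# -> at 1/500 s the disc must make ~41.7 rev/s, which is why its
#    rotational inertia had to be kept minimal.
```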
All of the mechanical movement made for a noisy camera, although not as noisy as some modern motor drives. For an extra fee, Robot-Berning supplied silenced versions with nylon gears.

Within their limits the Robots did an excellent job of sequence photography. The standard 38 mm f/2.8 Xenar lenses were extremely sharp, even by today's standards, and zone focusing worked well on rapid action with short-focal-length lenses. The reliable motor drive was as fast as, if not faster than, later electric drives, and there were no batteries to run down. Flash could be used at any speed. The square frame was big enough, with modern films, for A4 (210 × 297 mm) or greater enlargements, and 50 pictures could be taken on a standard 36-exposure roll. The cameras, especially the later ones built to industrial standards, will take much abuse and still keep functioning.

External links

Heinz Kilfitt – Biographical Notes – German fan site
Robot (camera)
[ "Technology" ]
2,511
[ "Recording devices", "Cameras" ]
175,924
https://en.wikipedia.org/wiki/Persecution
Persecution is the systematic mistreatment of an individual or group by another individual or group. The most common forms are religious persecution, racism, and political persecution, though there is naturally some overlap between these terms. The inflicting of suffering, harassment, imprisonment, internment, fear or pain are all factors that may establish persecution, but not all suffering will necessarily establish persecution. The threshold of severity has been a topic of much debate.

International law

As part of the Nuremberg Principles, crimes against humanity are part of international law. Principle VI of the Nuremberg Principles enumerates the crimes punishable under international law, among them crimes against humanity. Telford Taylor, who was Counsel for the Prosecution at the Nuremberg Trials, wrote that "[at] the Nuremberg war crimes trials, the tribunals rebuffed several efforts by the prosecution to bring such 'domestic' atrocities within the scope of international law as 'crimes against humanity'". Several subsequent international treaties incorporate this principle, but some have dropped the restriction "in connection with any crime against peace or any war crime" that appears in the Nuremberg Principles.

The Rome Statute of the International Criminal Court, which is binding on 111 states, defines crimes against humanity in Article 7.1. The article criminalizes certain acts "committed as part of a widespread or systematic attack directed against any civilian population, with knowledge of the attack". Among these is persecution against any identifiable group on political, racial, national, ethnic, cultural, religious, or gender grounds.

Religious

Religious persecution is the systematic mistreatment of an individual or group due to their religious affiliation. Theorists of secularization (who presume a general decline of religiosity) are not the only ones inclined to assume that religious persecution is a thing of the past; with the rise of fundamentalism and religiously related terrorism, this assumption has become even more controversial. Indeed, in many countries of the world today, religious persecution is a human rights problem.

Atheists

Atheists have experienced persecution throughout their history. Persecution may refer to unwarranted arrest, imprisonment, beating, torture, or execution. It may also refer to the confiscation or destruction of property.

Baháʼís

The persecution of Baháʼís refers to the religious persecution of Baháʼís in various countries, especially in Iran, which has the seventh-largest Baháʼí population in the world, with just over 251,100 as of 2010. The Baháʼí Faith originated in Iran, and it represents the largest religious minority in that country.

Buddhists

The persecution of Buddhists has been a widespread phenomenon throughout the history of Buddhism, and it continues today. As early as the 3rd century AD, Buddhists were persecuted by Kirder, the Zoroastrian high priest of the Sasanian Empire. Anti-Buddhist sentiment in Imperial China between the 5th and 10th centuries led to the Four Buddhist Persecutions in China, of which the Great Anti-Buddhist Persecution of 845 was probably the most severe. Buddhism managed to survive in China, but it was greatly weakened. During the Northern Expedition, in 1926 in Guangxi, the Kuomintang Muslim general Bai Chongxi led his troops on a campaign to destroy Buddhist temples and smash idols, turning the temples into schools and Kuomintang party headquarters. During the Kuomintang pacification of Qinghai, the Muslim general Ma Bufang and his army wiped out many Tibetan Buddhists in northeastern and eastern Qinghai, and destroyed Tibetan Buddhist temples.
The Muslim invasion of the Indian subcontinent was the first great iconoclastic invasion of the region. According to William Johnston, hundreds of Buddhist monasteries and shrines were destroyed, Buddhist texts were burnt by the Muslim armies, and monks and nuns were killed on the Indo-Gangetic Plain during the 12th and 13th centuries. The Buddhist university of Nalanda was mistaken for a fort because of its walled campus; according to Minhaj-i-Siraj, the Buddhist monks who were slaughtered there were mistaken for Brahmins. The walled town of the Odantapuri monastery was also destroyed. Sumpa, basing his account on that of Śākyaśribhadra, who was at Magadha in 1200, states that the Buddhist university complexes of Odantapuri and Vikramshila were also destroyed and the monks massacred. Muslim forces attacked the north-western regions of the Indian subcontinent many times, and many places were destroyed and renamed. For example, Odantapuri's monasteries were destroyed in 1197 by Muhammad bin Bakhtiyar Khilji and the town was renamed. Likewise, Vikramashila was destroyed by the forces of Muhammad bin Bakhtiyar Khilji around 1200. The sacred Mahabodhi Temple was almost completely destroyed by the Muslim invaders. Many Buddhist monks fled to Nepal, Tibet, and South India to avoid the consequences of war. The Tibetan pilgrim Chöjepal (1179–1264), who arrived in India in 1234, had to flee advancing Muslim troops multiple times, as they were sacking Buddhist sites.

In Japan, the haibutsu kishaku during the Meiji Restoration (starting in 1868) was triggered by the official policy of separation of Shinto and Buddhism (shinbutsu bunri). This policy caused great destruction to Buddhism in Japan: the destruction of Buddhist temples, images and texts took place on a large scale all over the country, and Buddhist monks were forced to return to secular life.

During the 2012 Ramu violence in Bangladesh, a 25,000-strong Muslim mob set fire to at least five Buddhist temples and dozens of homes throughout the town and the surrounding villages after seeing a picture of an allegedly desecrated Quran, which they claimed had been posted on Facebook by Uttam Barua, a local Buddhist man.

Christians

The persecution of Christians is religious persecution that Christians may be subjected to as a consequence of professing their faith, both historically and in the modern era. Early Christians were persecuted for their faith at the hands both of the Jews, from whose religion Christianity arose, and of the Roman Empire, which controlled much of the land across which early Christianity was distributed. Early in the fourth century, the religion was legalized by the Edict of Milan, and it eventually became the state church of the Roman Empire. Christian missionaries, as well as the people they converted to Christianity, have been the target of persecution, many times to the point of being martyred for their faith. There is also a history of individual Christian denominations suffering persecution at the hands of other Christians under the charge of heresy, particularly during the 16th-century Protestant Reformation as well as throughout the Middle Ages, when various Christian groups deemed heretical were persecuted by the Papacy. In the 20th century, Christians were persecuted by various groups, and by atheistic states such as the USSR and North Korea. During the Second World War, members of many Christian churches were persecuted in Germany for resisting Nazi ideology.
In more recent times, the Christian missionary organization Open Doors (UK) estimates that 100 million Christians face persecution, particularly in Muslim-dominated countries such as Pakistan and Saudi Arabia. According to the International Society for Human Rights, up to 80% of all acts of persecution are directed against people of the Christian faith.

Church of Jesus Christ of Latter-day Saints (Mormonism)

The Missouri extermination order forced Mormons to move to Illinois. This followed Sidney Rigdon's July 4th Oration, which declared that Mormons would defend their lives and property; the speech was viewed critically by the state government. Missouri state militia troops slaughtered Mormons in what is now known as the Haun's Mill massacre. The Mormons' forcible expulsion from the state caused the deaths of over a hundred people from exposure, starvation, and resulting illnesses. The founder of the church, Joseph Smith, was killed in Carthage, Illinois by a mob of about 200 men, almost all of whom were members of the Illinois state militia, including some who had been assigned to guard him. The Mormons suffered tarring and feathering, the repeated seizure of their lands and possessions, mob attacks, and false imprisonments, and the US sent an army to Utah to deal with the "Mormon problem" in the Utah War, during which a group of Mormons led by John D. Lee massacred settlers in the Mountain Meadows Massacre.

Jehovah's Witnesses

Throughout the history of Jehovah's Witnesses, their beliefs, doctrines and practices have engendered controversy and opposition from local governments, communities, and mainstream Christian groups.

Copts

The persecution of Copts is a historical and ongoing issue in Egypt directed against Coptic Orthodox Christianity and its followers. It is also a prominent example of the poor status of Christians in the Middle East, despite the religion being native to the region. Copts are the Christians of Egypt, usually Oriental Orthodox, who currently make up around 10% of the population of Egypt, the largest religious minority of that country. Copts have cited instances of persecution throughout their history, and Human Rights Watch has noted "growing religious intolerance" and sectarian violence against Coptic Christians in recent years, as well as a failure by the Egyptian government to investigate properly and prosecute those responsible. The Muslim conquest of Egypt took place in AD 639, when Egypt was part of the Byzantine Empire. Despite the political upheaval, Egypt remained mainly Christian, but the Copts lost their majority status after the 14th century, as a result of intermittent persecution and the destruction of Christian churches there, accompanied by heavy taxes for those who refused to convert. From the Muslim conquest of Egypt onwards, the Coptic Christians were persecuted by different Muslim regimes, such as the Umayyad Caliphate, Abbasid Caliphate, Fatimid Caliphate, Mamluk Sultanate, and Ottoman Empire; the persecution of Coptic Christians included the closing and demolishing of churches and forced conversion to Islam. Since 2011, hundreds of Egyptian Copts have been killed in sectarian clashes, and many homes, churches and businesses have been destroyed. In just one province (Minya), 77 cases of sectarian attacks on Copts between 2011 and 2016 have been documented by the Egyptian Initiative for Personal Rights. The abduction and disappearance of Coptic Christian women and girls also remains a serious ongoing problem.
Dogons

For almost 1,000 years, the Dogon people, an ancient tribe of Mali, faced religious and ethnic persecution through jihads by dominant Muslim communities. These jihadic expeditions were meant to force the Dogon to abandon their traditional religious beliefs for Islam. Such jihads caused the Dogon to abandon their original villages and move up to the cliffs of Bandiagara for better defense and to escape persecution, often building their dwellings in little nooks and crannies. In the early era of French colonialism in Mali, the French authorities appointed Muslim relatives of El Hadj Umar Tall as chiefs of Bandiagara, despite the fact that the area had been Dogon for centuries. In 1864, Tidiani Tall, nephew and successor of the 19th-century Senegambian jihadist and Muslim leader El Hadj Umar Tall, chose Bandiagara as the capital of the Toucouleur Empire, thereby exacerbating the inter-religious and inter-ethnic conflict. In recent years, the Dogon have accused the Fulani of supporting and sheltering Islamic terrorist groups, such as al-Qaeda, in Dogon country, leading to the creation of the Dogon militia Dan Na Ambassagou in 2016, whose aim is to defend the Dogon from systematic attacks. That resulted in the Ogossagou massacre of Fulani in March 2019, and a Fula retaliation in the Sobane Da massacre in June of that year. In the wake of the Ogossagou massacre, the President of Mali, Ibrahim Boubacar Keïta, and his government ordered the dissolution of Dan Na Ambassagou, which they held partly responsible for the attacks. The Dogon militia denied any involvement in the massacre and rejected calls to disband.

Druze

Historically, the relationship between the Druze and Muslims has been characterized by intense persecution. The Druze faith is often classified as a branch of Isma'ilism; even though the faith originally developed out of Ismaili Islam, most Druze do not identify as Muslims, and they do not accept the five pillars of Islam. The Druze have frequently experienced persecution by different Muslim regimes, such as the Shia Fatimid Caliphate, the Mamluks, the Sunni Ottoman Empire, and the Egypt Eyalet. The persecution of the Druze included massacres, the demolition of Druze prayer houses and holy places, and forced conversion to Islam. In the Druze narrative, these were no ordinary killings; they were meant to eradicate the whole community. Most recently, the Syrian Civil War, which began in 2011, saw persecution of the Druze at the hands of Islamic extremists. Ibn Taymiyya, a prominent Muslim scholar and muhaddith, dismissed the Druze as non-Muslims, and his fatwa held that Druzes "are not at the level of ′Ahl al-Kitāb (People of the Book) nor mushrikin (polytheists). Rather, they are from the most deviant kuffār (Infidel) ... Their women can be taken as slaves and their property can be seized ... they are to be killed whenever they are found and cursed as they described ... It is obligatory to kill their scholars and religious figures so that they do not misguide others", which in that setting would have legitimized violence against them as apostates. The Ottomans often relied on Ibn Taymiyya's religious ruling to justify their persecution of the Druze.

Falun Gong

Falun Gong was introduced to the general public by Li Hongzhi in Changchun, China, in 1992. For the next few years, Falun Gong was the fastest-growing qigong practice in Chinese history and, by 1999, there were millions of practitioners.
Following seven years of widespread popularity, on July 20, 1999, the government of the People's Republic of China began a nationwide persecution campaign against Falun Gong practitioners, except in the special administrative regions of Hong Kong and Macau. In late 1999, legislation was created to outlaw "heterodox religions" and applied retroactively to Falun Gong. Amnesty International states that the persecution is "politically motivated", with "legislation being used retroactively to convict people on politically driven charges, and new regulations introduced to further restrict fundamental freedoms".

Hindus

The persecution of Hindus refers to the religious persecution that Hindus may undergo as a consequence of professing their faith, both historically and in the current era. Hindus were brutally persecuted during the historical Islamic rule of the Indian subcontinent and during Portuguese rule of Goa. Even in modern times, Hindus in Pakistan and Bangladesh have suffered persecution. Most recently, thousands of Hindus from Sindh province in Pakistan have been fleeing to India, voicing fear for their safety. After the Partition of India in 1947, Pakistan (excluding what is now Bangladesh) had 8.8 million Hindus in 1951, constituting 1.58% of the country's population. Today, the Hindu minority amounts to 1.7 percent of Pakistan's population.

The Bangladesh Liberation War (1971) resulted in one of the largest genocides of the 20th century. While estimates place the number of casualties at around 3,000,000, it is reasonably certain that Hindus bore a disproportionate brunt of the Pakistan Army's onslaught against the Bengali population of what was East Pakistan. An article in Time magazine dated 2 August 1971 stated, "The Hindus, who account for three-fourths of the refugees and a majority of the dead, have borne the brunt of the Muslim military hatred." Senator Edward Kennedy wrote, in a report that was part of United States Senate Committee on Foreign Relations testimony dated 1 November 1971, "Hardest hit have been members of the Hindu community who have been robbed of their lands and shops, systematically slaughtered, mass rape and in some places, painted with yellow patches marked 'H'. All of this has been officially sanctioned, ordered and implemented under martial law from Islamabad". In the same report, Senator Kennedy reported that 80% of the refugees in India were Hindus, and according to numerous international relief agencies such as UNESCO and the World Health Organization, the number of East Pakistani refugees in India at their peak was close to 10 million. In a syndicated column, "The Pakistani Slaughter That Nixon Ignored", Pulitzer Prize–winning journalist Sydney Schanberg wrote about his return to liberated Bangladesh in 1972: "Other reminders were the yellow 'H's the Pakistanis had painted on the homes of Hindus, particular targets of the Muslim army" (by "Muslim army" meaning the Pakistan Army, which had targeted Bengali Muslims as well) (Newsday, 29 April 1994).

In Bangladesh, on 28 February 2013, the International Crimes Tribunal sentenced Delwar Hossain Sayeedi, the Vice President of the Jamaat-e-Islami, to death for war crimes committed during the 1971 Bangladesh Liberation War. Following the sentence, activists of Jamaat-e-Islami and its student wing Islami Chhatra Shibir attacked Hindus in different parts of the country. Hindu properties were looted, Hindu houses were burnt to ashes, and Hindu temples were desecrated and set on fire.
The violence included the looting of Hindu properties and businesses, the burning of Hindu homes, the rape of Hindu women, and the desecration and destruction of, according to community leaders, more than 50 Hindu temples; 1,500 Hindu homes were destroyed in 20 districts. While the government has held the Jamaat-e-Islami responsible for the attacks on the minorities, the Jamaat-e-Islami leadership has denied any involvement. Minority leaders have protested the attacks and appealed for justice. The Supreme Court of Bangladesh has directed law enforcement to start a suo motu investigation into the attacks, and the US Ambassador to Bangladesh expressed concern about Jamaat's attacks on the Bengali Hindu community.

Jews

The persecution of Jews is a recurring phenomenon throughout Jewish history. It has occurred on numerous occasions in widely different geographic locations. It may include pogroms, looting and the demolition of private and public Jewish property (e.g., Kristallnacht), unwarranted arrest, imprisonment, torture, killing, or even mass execution (in World War II alone, approximately six million people were deliberately killed because they were Jewish). Jews have been expelled from their hometowns and countries, hoping to find safe havens in other polities. In recent times, anti-Semitism has often manifested as anti-Zionism, a prejudice against the Jewish movement for self-determination and the right of the Jewish people to a homeland in the State of Israel. Anti-Zionism can include threats to destroy the State of Israel (or otherwise eliminate its Jewish character), unfounded and inaccurate characterizations of Israel's power in the world, and language or actions that hold Israel to a different standard than other countries.

Muslims

The persecution of Muslims has been a recurring phenomenon throughout the history of Islam. Persecution may refer to unwarranted arrest, imprisonment, beatings, torture, or execution. It may also refer to the confiscation or destruction of property, or incitement to hatred against Muslims. Persecution can extend beyond those who perceive themselves to be Muslims to include those who are perceived by others as Muslims, or Muslims who are considered non-Muslims by fellow Muslims. The Ahmadiyya regard themselves as Muslims, but are seen by many other Muslims as non-Muslims and "heretics". In 1984, the Government of Pakistan, under General Zia-ul-Haq, passed Ordinance XX, which banned proselytizing by Ahmadis and also banned Ahmadis from referring to themselves as Muslims. According to this ordinance, any Ahmadi who refers to himself or herself as a Muslim by words, either spoken or written, or by visible representation, directly or indirectly, or who makes the call to prayer as other Muslims do, is punishable by imprisonment of up to 3 years. Because of these difficulties, Mirza Tahir Ahmad migrated to London.

Pagans

The persecution of Pagans refers to historical and ongoing acts of religious intolerance, violence, and oppression against followers of pagan or polytheistic religions. This persecution has been carried out by various religious and political groups, including Christians, Muslims, and governments, throughout history. The rise of Christianity as a state religion in the Late Roman Empire led to the persecution of Pagans, who were seen as a threat to the new faith, and the persecution of pagans continued in post-Roman Europe, Arabia, and North Africa.
The destruction and conversion of pagan temples into churches, mosques, or other structures were common practices during the Christianization of the Roman Empire and later the spread of Islam in the Middle East and North Africa. This was done to eradicate paganism and assert the dominance of Christianity and Islam. During the Age of Discovery, many Europeans considered aspects of Native American, African tribal, Polynesian, and Aboriginal Australian religions to be pagan, which contributed to genocide and forced conversions. Some notable examples are the persecution of pagans in the late Roman Empire, the Christianisation of the Germanic peoples, the Islamization of the Sudan region, the persecution of pagans under Theodosius I, the persecution of pagans under Constantius II, the Scramble for Africa, the colonization of Australia, and the colonization of the Americas.

Modern Pagans, who practice various forms of paganism, are a religious minority in every country where they exist. They have been subject to religious discrimination and/or religious persecution. The largest modern Pagan communities are in North America and the United Kingdom, and the issue of discrimination receives the most attention in those locations. Although the persecution of Pagans has decreased in recent centuries, it still exists in some parts of the world. The community of Pagans and Wiccans continues to face Christian persecution, particularly in the United States, where they are frequently subjected to negative stereotypes and misconceptions, such as those perpetuated during the Satanic Panic.

Philosophers

Philosophers throughout the history of philosophy have been brought before courts and tribunals for various offenses, often as a result of their philosophical activity, and some have even been put to death. The most famous example of a philosopher being put on trial is the case of Socrates, who was tried for, amongst other charges, corrupting the youth and impiety. Others include:

Giordano Bruno, a pantheist philosopher who was burned at the stake by the Roman Inquisition for his heretical religious views, his cosmological views, or both;
Tommaso Campanella, confined to a convent for his heretical views, namely an opposition to the authority of Aristotle, and later imprisoned in a castle for 27 years, during which he wrote his most famous works, including The City of the Sun;
Baruch Spinoza, a Jewish philosopher who, at age 23, was put in cherem (similar to excommunication) by Jewish religious authorities for heresies such as his controversial ideas regarding the authenticity of the Hebrew Bible, which formed the foundations of modern biblical criticism, and the pantheistic nature of the Divine. Prior to that, he had been attacked on the steps of the community synagogue by a knife-wielding assailant shouting "Heretic!", and later his books were added to the Catholic Church's Index of Forbidden Books.

Serers

The persecution of the Serer people of Senegal, the Gambia and Mauritania is multifaceted, and it includes both religious and ethnic elements. Religious and ethnic persecution of the Serer people dates back to the 11th century, when King War Jabi usurped the throne of Tekrur (part of present-day Senegal) in 1030 and, by 1035, introduced Sharia law and forced his subjects to submit to Islam. With the assistance of his son (Leb), their Almoravid allies and other African ethnic groups who had embraced Islam, the Muslim coalition army launched jihads against the Serer people of Tekrur who refused to abandon the Serer religion in favour of Islam.
The number of Serer deaths is unknown, but the defeat triggered the exodus of the Serers of Tekrur to the south, where they were granted asylum by the lamanes. Persecution of the Serer people continued from the medieval era to the 19th century, resulting in the Battle of Fandane-Thiouthioune. From the 20th to the 21st centuries, persecution of the Serers has been less obvious; nevertheless, they remain the object of scorn and prejudice. Sikhs The 1984 anti-Sikh riots, or the 1984 Sikh Massacre, was a series of pogroms directed against Sikhs in India by anti-Sikh mobs, in response to the assassination of Indira Gandhi on 31 October 1984 by two of her Sikh bodyguards, who acted in retaliation for her authorisation of the military operation Operation Blue Star. There were more than 8,000 deaths, including 3,000 in Delhi. In June 1984, during Operation Blue Star, Indira Gandhi had ordered the Indian Army to attack the Golden Temple and eliminate any insurgents, as it had been occupied by Sikh separatists who were stockpiling weapons. Later operations by Indian paramilitary forces were initiated to clear the separatists from the countryside of Punjab state. The Indian government reported 2,700 deaths in the ensuing chaos. In the aftermath of the riots, the Indian government reported that 20,000 had fled the city; however, the People's Union for Civil Liberties reported "at least" 1,000 displaced persons. The most affected regions were the Sikh neighbourhoods in Delhi. The Central Bureau of Investigation, the main Indian investigating agency, is of the opinion that the acts of violence were organized with support from the Delhi police officials of the time and the central government headed by Indira Gandhi's son, Rajiv Gandhi. Rajiv Gandhi was sworn in as Prime Minister after his mother's death and, when asked about the riots, said "when a big tree falls, the earth shakes", thus attempting to justify the communal strife. There are allegations that the government destroyed evidence and shielded the guilty. The Asian Age front-page story called the government actions "the Mother of all Cover-ups". There are allegations that the violence was led and often perpetrated by Indian National Congress activists and sympathisers during the riots. The chief weapon used by the mobs, kerosene, was supplied by a group of Indian National Congress Party leaders who owned filling stations. Yazidis The persecution of Yazidis has been ongoing since at least the 10th century. The Yazidi religion is regarded as devil worship by Islamists. Yazidis have been persecuted by Muslim Kurdish tribes since the 10th century, and by the Ottoman Empire from the 17th to the 20th centuries. Since the 2014 Sinjar massacre of thousands of Yazidis by the Islamic State of Iraq and the Levant, Yazidis have continued to face violence from the Turkish Armed Forces and its ally the Syrian National Army, as well as discrimination from the Kurdistan Regional Government. According to Yazidi tradition, based on oral accounts and folk songs, an estimated 74 genocides against the Yazidis have been carried out in the past 800 years. Zoroastrians Persecution of Zoroastrians is the religious persecution inflicted upon the followers of the Zoroastrian faith. The persecution of Zoroastrians has occurred throughout the religion's history. The discrimination and harassment began in the form of sparse violence and forced conversions. Muslims are recorded to have destroyed fire temples. Zoroastrians living under Muslim rule were required to pay a tax called jizya. 
Zoroastrian places of worship were desecrated, fire temples were destroyed, and mosques were built in their place. Many libraries were burned and much of their cultural heritage was lost. Gradually, an increasing number of laws were passed which regulated Zoroastrian behavior and limited their ability to participate in society. Over time, the persecution of Zoroastrians became more common and widespread, and the number of believers declined significantly under coercion. Most were forced to convert due to the systematic abuse and discrimination inflicted upon them by followers of Islam. Once a Zoroastrian family was forced to convert to Islam, the children were sent to an Islamic school to learn Arabic and study the teachings of Islam; as a result, some of these people lost their Zoroastrian faith. However, under the Samanids, who were Zoroastrian converts to Islam, the Persian language flourished. On occasion, the Zoroastrian clergy assisted Muslims in attacks against those whom they deemed Zoroastrian heretics. A Zoroastrian astrologer named Mulla Gushtasp predicted the fall of the Zand dynasty to the Qajar army in Kerman. Because of Gushtasp's forecast, the Zoroastrians of Kerman were spared by the conquering army of Agha Mohammad Khan Qajar. Despite this favorable incident, the Zoroastrians during the Qajar dynasty remained in agony and their population continued to decline. Even during the rule of Agha Mohammad Khan, the founder of the dynasty, many Zoroastrians were killed and some were taken as captives to Azerbaijan. Zoroastrians regard the Qajar period as one of their worst; during the Qajar dynasty, religious persecution of the Zoroastrians was rampant. Due to increasing contacts with influential Parsi philanthropists such as Maneckji Limji Hataria, many Zoroastrians left Iran for India. There, they formed the second major Indian Zoroastrian community, known as the Iranis. Ethnic Ethnic persecution refers to perceived persecution based on ethnicity; its meaning is parallel to that of racism (persecution based on race). The Rwandan genocide remains an atrocity that the indigenous Hutu and Tutsi peoples still regard as unforgivable. The Japanese occupation of China caused the death of millions of people, mostly peasants who were murdered after the Doolittle Raid early in World War II. Assyrians Due to their Christian faith and ethnicity, the Assyrians have been persecuted since their adoption of Christianity. During the reign of Yazdegerd I, Christians in Persia were viewed with suspicion as potential Roman subversives, resulting in persecutions, while at the same time the Persian authorities promoted Nestorian Christianity as a buffer between the Churches of Rome and Persia. Persecutions and attempts to impose Zoroastrianism continued during the reign of Yazdegerd II. During the eras of Mongol rule under Genghis Khan and Timur, there was indiscriminate slaughter of tens of thousands of Assyrians and the destruction of the Assyrian population of northwestern, central, and northern Iran. More recent persecutions since the 19th century include the Massacres of Badr Khan, the Massacres of Diyarbakır (1895), the Adana massacre, the Assyrian genocide, the Simele massacre, and the al-Anfal campaign. Hazara people The Hazara people of central Afghanistan have been persecuted by Afghan rulers at various times in history. 
Since the tragedy of 9/11, Sunni Muslim terrorists have been attacking the Hazara community in the southwestern Pakistani town of Quetta, home to some 500,000 Hazara who fled persecution in neighbouring Afghanistan. Some 2,400 men, women and children have been killed or wounded, with Lashkar-e-Jhangvi claiming responsibility for most of the attacks against the community. Consequently, many thousands have fled the country seeking asylum in Australia. Roma Antiziganism is hostility, prejudice, discrimination or racism directed against the Romani people as an ethnic group, or against people who are perceived as being of Romani heritage. The Porajmos was the planned and attempted effort during World War II by the government of Nazi Germany and its allies, often described as a genocide, to exterminate the Romani (Gypsy) people of Europe. Under the rule of Adolf Hitler, a supplementary decree to the Nuremberg Laws was issued on 26 November 1935, defining Gypsies as "enemies of the race-based state", the same category as Jews. Thus, the fate of Roma in Europe in some ways paralleled that of the Jews. Historians estimate that 220,000 to 500,000 Romani were killed by the Nazis and their collaborators, or more than 25% of the slightly less than 1 million Roma in Europe at the time. Ian Hancock puts the death toll as high as 1.5 million. Rohingyas The UN human rights chief condemned Myanmar's apparent "systematic attack" on the Rohingya minority, warning that "ethnic cleansing" seemed to be underway. Ethnic Rohingya Muslims who fled from security forces in Myanmar's Rakhine State have described killings, shelling, and arson in their villages that have all the hallmarks of a campaign of "ethnic cleansing", Human Rights Watch said. "Rohingya refugees have harrowing accounts of fleeing Burmese army attacks and watching their villages be destroyed," said Meenakshi Ganguly, South Asia director. "Lawful operations against armed groups do not involve burning the local population out of their homes." Sri Lankan Tamils Widespread attacks on Sri Lankan Tamils came in the form of island-wide ethnic riots, including the 1958 anti-Tamil pogrom and the Black July riots. Further persecution through murders, targeted rape and kidnapping occurred. While the majority of Tamils had previously demanded a separate state, by 1983 armed struggle against Sinhalese extremists began to rise, culminating in the formation of the Liberation Tigers of Tamil Eelam. Uyghurs Uyghurs and other Turkic peoples in modern-day Xinjiang (called East Turkestan by independence activists) declared two short-lived independent East Turkestan Republics in the 20th century. In late 1949, the region, along with the rest of China, came under the control of the People's Republic of China. Uyghur activist groups have said that anger towards the Chinese government has been fueled by years of state-sponsored oppression and discrimination. In 2017, China began a large-scale crackdown in the Xinjiang region, which it justified as a counterterrorism campaign following sporadic terrorist attacks in Xinjiang. Scholars estimate that the Chinese government detained over one million Uyghurs in internment camps (also called re-education camps) in order to indoctrinate them away from religion and Sinicize them (assimilate them into Chinese culture). 
Critics of the policy have described it as the Sinicization of Xinjiang and have called it an ethnocide or a cultural genocide, while some governments, activists, independent NGOs, human rights organizations, academics, government officials, and the East Turkistan Government-in-Exile have called it a genocide. Based on genetics People with albinism Persecution on the basis of albinism is frequently rooted in the belief that albinos are inferior to persons with a higher concentration of melanin in their skin. As a result, albinos have been persecuted, killed and dismembered, and the graves of albinistic people have been dug up and desecrated. In some areas, such people have also been ostracized and even killed because they are presumed to bring bad luck. Haiti also has a long history of treating albinistic people as accursed, with the highest incidence under the influence of François "Papa Doc" Duvalier. People with autism People with autism spectrum disorders have commonly been victims of persecution, both throughout history and in the present era. In Cameroon, children with autism are commonly accused of witchcraft and singled out for torture and even death. Additionally, it is speculated that many of the disabled children murdered during Action T4 in Nazi Germany may have been autistic, making autistic people among the first victims of the Holocaust. LGBT A number of countries, especially those in the Western world, have passed measures to alleviate discrimination against sexual minorities, including laws against anti-gay hate crimes and workplace discrimination. Some countries have also legalized same-sex marriages or civil unions in order to grant same-sex couples the same protections and benefits as those granted to opposite-sex couples. In 2011, the United Nations passed its first resolution recognizing LGBT rights, and in 2015 same-sex marriage was legalized in all states of the United States. See also Persecution of Christians Christian martyrs Defamation Discrimination Latter-day Saint martyrs Lawfare Oppression Persecutory delusion Right to asylum Social defeat Social exclusion Notes References Sources External links Language alternatives to creating and being persecutors Abuse Crimes against humanity by type Majority–minority relations Racism
Persecution
[ "Biology" ]
7,460
[ "Abuse", "Behavior", "Aggression", "Human behavior" ]
175,959
https://en.wikipedia.org/wiki/Mains%20electricity
Mains electricity, also known as utility power, grid power, domestic power or wall power, or, in some parts of Canada, hydro, is a general-purpose alternating-current (AC) electric power supply. It is the form of electrical power that is delivered to homes and businesses through the electrical grid in many parts of the world. People use this electricity to power everyday items (such as domestic appliances, televisions and lamps) by plugging them into a wall outlet. The voltage and frequency of electric power differ between regions. In much of the world, a nominal voltage of 230 volts and a frequency of 50 Hz are used. In North America, the most common combination is 120 V and a frequency of 60 Hz. Other combinations exist, for example, 230 V at 60 Hz. Travellers' portable appliances may be inoperative or damaged by foreign electrical supplies. Non-interchangeable plugs and sockets in different regions provide some protection from accidental use of appliances with incompatible voltage and frequency requirements. Terminology In the US, mains electric power is referred to by several names, including "utility power", "household power", "household electricity", "house current", "powerline", "domestic power", "wall power", "line power", "wall current", "AC power", "city power", "street power", and "120 (one twenty)". In the UK, mains electric power is generally referred to as "the mains". More than half of the power in Canada is hydroelectricity, and mains electricity is often referred to as "hydro" in some regions of the country. This is also reflected in the names of current and historical electricity utilities such as Hydro-Québec, BC Hydro, Manitoba Hydro, Hydro One (Ontario), and Newfoundland and Labrador Hydro. Power systems Worldwide, many different mains power systems are found for the operation of household and light commercial electrical appliances and lighting. The different systems are primarily characterized by: Voltage Frequency Plugs and sockets (receptacles or outlets) Earthing system (grounding) Protection against overcurrent damage (e.g., due to short circuit), electric shock, and fire hazards Parameter tolerances. All these parameters vary among regions. The voltages are generally in the range 100–240 V (always expressed as root-mean-square voltage). The two commonly used frequencies are 50 Hz and 60 Hz. Single-phase or three-phase power is most commonly used today, although two-phase systems were used early in the 20th century. Foreign enclaves, such as large industrial plants or overseas military bases, may have a different standard voltage or frequency from the surrounding areas. Some city areas may use standards different from those of the surrounding countryside (e.g. in Libya). Regions in an effective state of anarchy may have no central electrical authority, with electric power provided by incompatible private sources. Many other combinations of voltage and utility frequency were formerly used, with frequencies between 25 Hz and 133 Hz and voltages from 100 V to 250 V. Direct current (DC) has been displaced by alternating current (AC) in public power systems, but DC was used in some city areas until the end of the 20th century. The modern combinations of 230 V/50 Hz and 120 V/60 Hz, listed in IEC 60038, did not apply in the first few decades of the 20th century and are still not universal. 
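As a rough illustration of the compatibility problem for travellers' appliances described above, the following Python sketch checks an appliance's rated voltage range and frequency against a regional supply. This is a minimal sketch, not standards data: the regional figures follow combinations mentioned in the text, while the appliance rating and the simplified pass/fail rule are illustrative assumptions only.

```python
# Illustrative supply combinations (nominal volts, hertz); values follow
# the regional figures given in the text. Real tolerance bands are
# ignored here for simplicity -- an assumption made for this example.
SUPPLIES = {
    "Europe (typical)": (230.0, 50),
    "North America": (120.0, 60),
    "Japan (east)": (100.0, 50),
}

def is_compatible(supply_v, supply_hz, rated_min_v, rated_max_v, rated_hz):
    """True if the appliance's rated range covers this supply."""
    return rated_min_v <= supply_v <= rated_max_v and supply_hz in rated_hz

# A hypothetical "universal" adapter rated 100-240 V, 50/60 Hz:
for region, (volts, hz) in SUPPLIES.items():
    ok = is_compatible(volts, hz, 100.0, 240.0, {50, 60})
    print(f"{region}: {'compatible' if ok else 'incompatible'}")
```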
Industrial plants with three-phase power will have different, higher voltages installed for large equipment (and different sockets and plugs), but the common voltages listed here would still be found for lighting and portable equipment. Common uses of electricity Electricity is used for lighting, heating, cooling, electric motors and electronic equipment. The US Energy Information Administration (EIA) has published a breakdown of U.S. residential sector electricity consumption by major end use in 2021. Its categories include entertainment equipment (televisions, set-top boxes, home theatre systems, DVD players, and video game consoles); computing equipment (desktop and laptop computers, monitors, and networking equipment); cooking (not including water heating); and miscellaneous uses (small electric devices, heating elements, exterior lights, outdoor grills, pool and spa heaters, backup electricity generators, and motors not listed above, but not electric vehicle charging). Electronic appliances such as computers or television sets typically use an AC-to-DC converter or AC adapter to power the device. This is often capable of operation over a wide range of voltages and with both common power frequencies. Other AC applications usually have much more restricted input ranges. Building wiring Portable appliances use single-phase electric power, with two or three wired contacts at each outlet. Two wires (neutral and live/active/hot) carry current to operate the device. A third wire, not always present, connects conductive parts of the appliance case to earth ground. This protects users from electric shock if live internal parts accidentally contact the case. In northern and central Europe, residential electrical supply is commonly 400 V three-phase electric power, which gives 230 V between any single phase and neutral; house wiring may be a mix of three-phase and single-phase circuits, but three-phase residential use is rare in the UK. High-power appliances such as kitchen stoves, water heaters and heavy power tools like log splitters may be supplied from the 400 V three-phase power supply. Small portable electrical equipment is connected to the power supply through flexible cables terminated in a plug, which is inserted into a fixed receptacle (socket). Larger household electrical equipment and industrial equipment may be permanently wired to the fixed wiring of the building. For example, in North American homes a window-mounted self-contained air conditioner unit would be connected to a wall plug, whereas the central air conditioning for a whole home would be permanently wired. Larger plug and socket combinations are used for industrial equipment carrying larger currents, higher voltages, or three-phase electric power. Circuit breakers and fuses are used to detect short circuits between the line and neutral or ground wires, or the drawing of more current than the wires are rated to handle (overload protection), in order to prevent overheating and possible fire. These protective devices are usually mounted in a central panel—most commonly a distribution board or consumer unit—in a building, but some wiring systems also provide a protection device at the socket or within the plug. Residual-current devices, also known as ground-fault circuit interrupters and appliance leakage current interrupters, are used to detect ground faults—flow of current in other than the neutral and line wires (such as through the ground wire or a person). When a ground fault is detected, the device quickly cuts off the circuit. 
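To make the overload-protection idea concrete, here is a minimal sketch computing the current a simple resistive load draws (I = P / V) and comparing it with a circuit rating. The 16 A rating and the appliance powers are hypothetical values chosen for illustration, not figures from any standard.

```python
def load_current(power_watts, supply_volts):
    """Current drawn by an ideal resistive load: I = P / V."""
    return power_watts / supply_volts

BREAKER_RATING_AMPS = 16.0   # hypothetical circuit protection rating

# A 2.3 kW heater and a 4.6 kW load on a 230 V circuit:
for watts in (2300.0, 4600.0):
    amps = load_current(watts, 230.0)
    verdict = "OK" if amps <= BREAKER_RATING_AMPS else "overload: protection should trip"
    print(f"{watts:.0f} W load draws {amps:.1f} A -> {verdict}")
```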
Voltage levels Most of the world population (Europe, Africa, Asia, Australia, New Zealand, and much of South America) uses a supply that is within 6% of 230 V. In the United Kingdom the nominal supply voltage is 230 V +10%/−6% to accommodate the fact that most transformers are in fact still set to 240 V. The 230 V standard has become so widespread that 230 V equipment can be used in most parts of the world with the aid of an adapter or a change of the equipment's plug to the standard for the specific country. The United States and Canada use a supply voltage of 120 volts ± 6%. Japan, Taiwan, Saudi Arabia, North America, Central America and some parts of northern South America use a voltage between 100 V and 127 V. However, most households in Japan are equipped with split-phase electric power, as in the United States, which can supply 200 V by using the two opposite phases together. Brazil is unusual in having both 127 V and 220 V systems at 60 Hz and also in permitting interchangeable plugs and sockets. Saudi Arabia and Mexico have mixed voltage systems; in residential and light commercial buildings both countries use 127 volts, with 220 volts at 60 Hz in commercial and industrial applications. The Saudi government approved plans in August 2010 to transition the country to a fully 230/400-volt 60 Hz system. Measuring voltage A distinction should be made between the voltage at the point of supply (nominal voltage at the point of interconnection between the electrical utility and the user) and the voltage rating of the equipment (utilization or load voltage). Typically the utilization voltage is 3% to 5% lower than the nominal system voltage; for example, a nominal 208 V supply system will be connected to motors with "200 V" on their nameplates. This allows for the voltage drop between equipment and supply. Voltages in this article are the nominal supply voltages, and equipment used on these systems will carry slightly lower nameplate voltages. Power distribution system voltage is nearly sinusoidal in nature. Voltages are expressed as root mean square (RMS) voltage. Voltage tolerances are for steady-state operation. Momentary heavy loads, or switching operations in the power distribution network, may cause short-term deviations out of the tolerance band, and storms and other unusual conditions may cause even larger transient variations. In general, power supplies derived from large networks with many sources are more stable than those supplied to an isolated community with perhaps only a single generator. Choice of voltage The choice of supply voltage is due more to historical reasons than to optimization of the electric power distribution system—once a voltage is in use and equipment using this voltage is widespread, changing voltage is a drastic and expensive measure. A 230 V distribution system will use less conductor material than a 120 V system to deliver a given amount of power because the current, and consequently the resistive loss, is lower. While large heating appliances can use smaller conductors at 230 V for the same output rating, few household appliances use anything like the full capacity of the outlet to which they are connected. Minimum wire size for hand-held or portable equipment is usually restricted by the mechanical strength of the conductors. Many areas, such as the US, which use (nominally) 120 V, make use of three-wire, split-phase 240 V systems to supply large appliances. 
In this system a 240 V supply has a centre-tapped neutral to give two 120 V supplies, which can also supply 240 V to loads connected between the two line wires. Three-phase systems can be connected to give various combinations of voltage, suitable for use by different classes of equipment. Where both single-phase and three-phase loads are served by an electrical system, the system may be labelled with both voltages, such as 120/208 or 230/400 V, to show the line-to-neutral voltage and the line-to-line voltage. Large loads are connected for the higher voltage. Other three-phase voltages, up to 830 volts, are occasionally used for special-purpose systems such as oil well pumps. Large industrial motors (say, more than 250 hp or 150 kW) may operate on medium voltage. On 60 Hz systems a standard for medium-voltage equipment is 2,400/4,160 V, whereas 3,300 V is the common standard for 50 Hz systems. Standardization Until 1987, mains voltage in large parts of Europe, including Germany, Austria and Switzerland, was 220 V, while the UK used 240 V. Standard IEC 60038:1983 defined the new standard European voltage to be 230 V. From 1987 onwards, a step-wise shift towards 230 V was implemented. From 2009 on, the voltage is permitted to be 230 V ±10%. No change in voltage was required by either the Central European or the UK system, as both 220 V and 240 V fall within the 230 V tolerance bands (230 V ±6%). Usually a voltage of 230 V ±3% is maintained. Some areas of the UK still have 250 volts for legacy reasons, but these also fall within the 10% tolerance band of 230 volts. In practice, this allowed countries to continue supplying the same voltage (220 or 240 V), at least until existing supply transformers are replaced. Equipment (with the exception of filament bulbs) used in these countries is designed to accept any voltage within the specified range. In 2000, Australia converted to 230 V as the nominal standard with a tolerance of +10%/−6%, superseding the old 240 V standard, AS 2926-1987. The tolerance was increased in 2022 to ±10% with the release of AS IEC 60038:2022. The utilization voltage available at an appliance may be below this range, due to voltage drops within the customer installation. As in the UK, 240 V is within the allowable limits and "240 volt" is a synonym for mains in Australian and British English. In the United States and Canada, national standards specify that the nominal voltage at the source should be 120 V and allow a range of 114 V to 126 V (RMS) (−5% to +5%). Historically, 110 V, 115 V and 117 V have been used at different times and places in North America. Mains power is sometimes spoken of as 110 V; however, 120 V is the nominal voltage. In Japan, the electrical power supply to households is at 100 and 200 V. Eastern and northern parts of Honshū (including Tokyo) and Hokkaidō have a frequency of 50 Hz, whereas western Honshū (including Nagoya, Osaka, and Hiroshima), Shikoku, Kyūshū and Okinawa operate at 60 Hz. The boundary between the two regions contains four back-to-back high-voltage direct-current (HVDC) substations which interconnect the power between the two grid systems; these are Shin Shinano, Sakuma Dam, Minami-Fukumitsu, and the Higashi-Shimizu Frequency Converter. To accommodate the difference, frequency-sensitive appliances marketed in Japan can often be switched between the two frequencies. History The world's first public electricity supply was a water-wheel-driven system constructed in the small English town of Godalming in 1881. 
It was an alternating current (AC) system using a Siemens alternator supplying power for both street lights and consumers at two voltages: 250 V for arc lamps and 40 V for incandescent lamps. The world's first large-scale central plant—Thomas Edison's steam-powered station at Holborn Viaduct in London—started operation in January 1882, providing direct current (DC) at 110 V. The Holborn Viaduct station was used as a proof of concept for the construction of the much larger Pearl Street Station in New York, the world's first permanent commercial central power plant. The Pearl Street Station also provided DC at 110 V, considered to be a "safe" voltage for consumers, beginning 4 September 1882. AC systems started appearing in the US in the mid-1880s, using higher distribution voltage stepped down via transformers to the same 110 V customer utilization voltage that Edison used. In 1883, Edison patented a three-wire distribution system to allow DC generation plants to serve a wider radius of customers and save on copper costs. By connecting two groups of 110 V lamps in series, more load could be served by the same size of conductors run with 220 V between them; a neutral conductor carried any imbalance of current between the two sub-circuits. AC circuits adopted the same form during the war of the currents, allowing lamps to be run at around 110 V and major appliances to be connected to 220 V. Nominal voltages gradually crept upward to 112 V and 115 V, or even 117 V. After World War II the standard voltage in the U.S. became 117 V, but many areas lagged behind even into the 1960s. In 1954, the American National Standards Institute (ANSI) published C84.1, "American National Standard for Electric Power Systems and Equipment – Voltage Ratings (60 Hertz)". This standard established a 120 volt nominal system and two ranges for service voltage and utilization voltage variations. Today, virtually all American homes and businesses have access to 120 and 240 V at 60 Hz. Both voltages are available on the three wires (two "hot" legs of opposite phase and one "neutral" leg). In 1899, the Berliner Elektrizitäts-Werke (BEW), a Berlin electrical utility, decided to greatly increase its distribution capacity by switching to 220 V nominal distribution, taking advantage of the higher voltage capability of newly developed metal filament lamps. The company was able to offset the cost of converting customers' equipment with the resulting savings in the cost of distribution conductors. This became the model for electrical distribution in Germany and the rest of Europe, and the 220 V system became common. North American practice remained with voltages near 110 V for lamps. In the first decade after the introduction of alternating current in the US (from the early 1880s to about 1893), a variety of different frequencies were used, with each electric provider setting its own, so that no single one prevailed; the most common frequency was 133⅓ Hz. The rotation speed of induction generators and motors, the efficiency of transformers, and the flickering of carbon arc lamps all played a role in frequency setting. Around 1893 the Westinghouse Electric Company in the United States and AEG in Germany decided to standardize their generation equipment on 60 Hz and 50 Hz respectively, eventually leading to most of the world being supplied at one of these two frequencies. Today most 60 Hz systems deliver nominal 120/240 V, and most 50 Hz systems nominally 230 V. 
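The three-wire principle used in both Edison's DC system and modern split-phase AC can be illustrated numerically. The sketch below assumes ideal 120 V legs and purely resistive loads (the wattages are hypothetical); it shows that the neutral carries only the imbalance between the two legs.

```python
LEG_VOLTS = 120.0   # each half of the split-phase supply (idealized)

def leg_current(power_watts):
    """Current on one leg for a resistive load: I = P / V."""
    return power_watts / LEG_VOLTS

i_a = leg_current(1200.0)   # 10 A of lighting/appliances on leg A
i_b = leg_current(600.0)    #  5 A on leg B

print(f"Line-to-line voltage: {2 * LEG_VOLTS:.0f} V")          # 240 V
print(f"Neutral current (imbalance): {abs(i_a - i_b):.1f} A")  # 5 A
```

With the two legs perfectly balanced, the neutral would carry no current at all, which is what allowed the three-wire scheme to save conductor material.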
Significant exceptions to these pairings are Brazil, which has a synchronized 60 Hz grid with both 127 V and 220 V as standard voltages in different regions, and Japan, which has two frequencies: 50 Hz for East Japan and 60 Hz for West Japan. Voltage regulation To maintain the voltage at the customer's service within the acceptable range, electrical distribution utilities use regulating equipment at electrical substations or along the distribution line. At a substation, the step-down transformer will have an automatic on-load tap changer, allowing the ratio between transmission voltage and distribution voltage to be adjusted in steps. For long (several kilometres) rural distribution circuits, automatic voltage regulators may be mounted on poles of the distribution line. These are autotransformers, again with on-load tap changers to adjust the ratio depending on the observed voltage changes. At each customer's service, the step-down transformer has up to five taps to allow some range of adjustment, usually ±5% of the nominal voltage. Since these taps are not automatically controlled, they are used only to adjust the long-term average voltage at the service and not to regulate the voltage seen by the utility customer. Power quality The stability of the voltage and frequency supplied to customers varies among countries and regions. "Power quality" is a term describing the degree of deviation from the nominal supply voltage and frequency. Short-term surges and drop-outs affect sensitive electronic equipment such as computers and flat-panel displays. Longer-term power outages, brownouts and blackouts and low reliability of supply generally increase costs to customers, who may have to invest in uninterruptible power supplies or stand-by generator sets to provide power when the utility supply is unavailable or unusable. Erratic power supply may be a severe economic handicap to businesses and public services which rely on electrical machinery, illumination, climate control and computers. Even the best quality power system may have breakdowns or require servicing. As such, companies, governments and other organizations sometimes have backup generators at sensitive facilities, to ensure that power will be available even in the event of a power outage or blackout. Power quality can also be affected by distortions of the current or voltage waveform in the form of harmonics of the fundamental (supply) frequency, or by non-harmonic (inter)modulation distortion such as that caused by electromagnetic interference. Whereas intermodulation distortion typically arises from such external interference, harmonic distortion is usually caused by conditions of the load or generator. In multi-phase power, phase-shift distortions caused by imbalanced loads can occur. See also Electricity meter Mains electricity by country Maximum demand indicator References Electric power
Mains electricity
[ "Physics", "Engineering" ]
4,069
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
175,974
https://en.wikipedia.org/wiki/Octatonic%20scale
An octatonic scale is any eight-note musical scale. However, the term most often refers to the ancohemitonic symmetric scale composed of alternating whole and half steps. In classical theory (in contrast to jazz theory), this symmetrical scale is commonly called the octatonic scale (or the octatonic collection), although there are a total of 43 enharmonically inequivalent, transpositionally inequivalent eight-note sets. The earliest systematic treatment of the octatonic scale was in Edmond de Polignac's unpublished treatise "Étude sur les successions alternantes de tons et demi-tons (Et sur la gamme dite majeure-mineure)" (Study of the Succession of Alternating Whole Tones and Semitones (and of the so-called Major-Minor Scale)) from c. 1879, which preceded Vito Frazzi's Scale alternate per pianoforte of 1930 by 50 years. Nomenclature In Saint Petersburg at the turn of the 20th century, this scale had become so familiar in the circle of composers around Nikolai Rimsky-Korsakov that it was referred to as the Korsakovian scale (Корсаковская гамма). As early as 1911, the Russian theorist Boleslav Yavorsky described this collection of pitches as the diminished mode (уменьшённый лад), because of the stable way the diminished fifth functions in it. In more recent Russian theory, the term octatonic is not used; instead, this scale is placed among the other symmetrical modes (11 in total) under its historical name Rimsky-Korsakov scale, or Rimsky-Korsakov mode. In jazz theory, it is called the diminished scale or symmetric diminished scale because it can be conceived as a combination of two interlocking diminished seventh chords, just as the augmented scale can be conceived as a combination of two interlocking augmented triads. The two modes are sometimes referred to as the half-step/whole-step diminished scale and the whole-step/half-step diminished scale. Because it was associated in the early 20th century with the Dutch composer Willem Pijper, in the Netherlands it is called the Pijper scale. Construction The twelve tones of the chromatic scale are covered by three disjoint diminished seventh chords. The notes from two such seventh chords in combination form an octatonic collection. Because there are three ways to select two chords from three, there are three octatonic scales in the twelve-tone system. Each octatonic scale has exactly two modes: the first begins its ascent with a whole step, while the second begins its ascent with a half step (semitone). These modes are sometimes referred to as the whole-step/half-step diminished scale and the half-step/whole-step diminished scale, respectively. Each of the three distinct scales can form differently named scales with the same sequence of tones by starting at a different point in the scale. With alternative starting points listed below in square brackets, and the return to the tonic in parentheses, the three are, ascending by semitones: C♯, D♯, [E], F♯, [G], A, [A♯], B♯, (C♯); D, E, [F], G, [G♯], A♯, [B], C♯, (D); and E♭, F, [G♭], A♭, [A], B, [C], D, (E♭). The scale may also be represented as a series of semitone steps, either starting with a whole tone (as above): 2-1-2-1-2-1-2-1, or starting with a semitone: 1-2-1-2-1-2-1-2, or labeled as set class 8-28. With one more scale tone than described by the western diatonic scale, it is not possible to perfectly notate music of the octatonic scale using any conventional key signature without the use of accidentals. 
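The construction just described is easy to verify computationally. The following minimal Python sketch is an illustration, not part of any cited source; it assumes the common convention of numbering pitch classes 0-11 with C = 0, and confirms that pairing any two of the three diminished seventh chords yields the same three collections as the alternating step patterns.

```python
def dim7(root):
    """Diminished seventh chord: four pitch classes in stacked minor thirds."""
    return frozenset((root + 3 * k) % 12 for k in range(4))

def octatonic(start, first_step):
    """Octatonic scale built by alternating steps of first_step and
    (3 - first_step) semitones, i.e. 2-1-2-1-... or 1-2-1-2-..."""
    notes, pc = [], start % 12
    for i in range(8):
        notes.append(pc)
        pc = (pc + (first_step if i % 2 == 0 else 3 - first_step)) % 12
    return frozenset(notes)

# The three disjoint dim7 chords cover the chromatic scale; the union of
# any two of them is an octatonic collection, so exactly three exist:
collections = {dim7(a) | dim7(b) for a, b in [(0, 1), (1, 2), (0, 2)]}
assert len(collections) == 3 and all(len(c) == 8 for c in collections)

# Every starting note, in either mode, reproduces one of those three:
assert all(octatonic(s, f) in collections for s in range(12) for f in (1, 2))
print(sorted(sorted(c) for c in collections))
```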
Across all conventional key signatures, at least two of the octatonic notes must share the same position on the staff, although the precise combination of accidentals and naturals varies. There are usually several equally succinct combinations of key signature and accidentals, and different composers have chosen to notate their music differently, sometimes ignoring the niceties of notation conventions designed to facilitate diatonic tonality. Properties Symmetry The three octatonic collections are transpositionally and inversionally symmetric—that is, they are related by a variety of transposition and inversion operations: They are each closed under transpositions by 3, 6, or 9 semitones. A transposition by 1, 4, 7, or 10 semitones will transform the C♯ scale into the D scale, the D scale into the E♭ scale, and the E♭ scale into the C♯ scale. Conversely, transpositions by 2, 5, 8, or 11 semitones act in the reverse way: the D scale goes to the C♯ scale, the E♭ scale to the D scale, and the C♯ scale to the E♭ scale. Thus, the set of transpositions acts on the set of diminished collections as the integers modulo 3. If the transposition is congruent to 0 mod 3, the pitch collection is unchanged, and transpositions by 1 semitone and by 2 semitones are inverse to one another. The E♭ and C♯ collections can be swapped by inversions around E♭, F♯, A, or C (the tones common to both scales). Similarly, the C♯ and D collections can be swapped by inversions around E, G, B♭/A♯, or D♭/C♯, and the D and E♭ collections by inversions around D, F, A♭, or B. All other transformations do not change the classes (e.g. reflecting the E♭ collection around E gives the E♭ collection once again). This means that the inversions do not act as a simple cyclic group on the set of diminished scales. Subsets Among the collection's remarkable features is that it is the only collection that can be disassembled into four transpositionally related pitch pairs in six different ways, each of which features a different interval class. For example: semitone: (C, C♯), (D♯, E), (F♯, G), (A, B♭); whole step: (C♯, D♯), (E, F♯), (G, A), (B♭, C); minor third: (C, E♭), (F♯, A), (C♯, E), (G, B♭); major third: (C, E), (F♯, B♭), (E♭, G), (A, C♯); perfect fourth: (C♯, F♯), (B♭, E♭), (G, C), (E, A); tritone: (C, F♯), (E♭, A), (C♯, G), (E, B♭). Another remarkable feature of the diminished scale is that it contains the first four notes of four different minor scales separated by minor thirds. For example: C, D, E♭, F and (enharmonically) F♯, G♯, A, B; also E♭, F, G♭, A♭, and A, B, C, D. The scale "allows familiar harmonic and linear configurations such as triads and modal tetrachords to be juxtaposed unusually but within a rational framework", though the relation of the diatonic scale to the melodic and harmonic surface is thus generally oblique. History Early examples Joseph Schillinger suggests that the scale had already been formulated by Persian traditional music in the 7th century AD, where it was called "Zar ef Kend", meaning "string of pearls", the idea being that the two different sizes of intervals were like two different sizes of pearls. Octatonic scales first occurred in Western music as byproducts of a series of minor-third transpositions. While Nikolai Rimsky-Korsakov claimed in his autobiography My Musical Life that he was conscious of the octatonic collection "as a cohesive frame of reference", instances can be found in music of previous centuries. Eytan Agmon locates one in Domenico Scarlatti's Sonata K. 319. 
In that passage, according to Richard Taruskin, "its descending whole-step/half-step bass progression is complete and continuous". Taruskin also cites bars from J. S. Bach's English Suite No. 3 as octatonic. Honoré Langlé's 1797 harmony treatise contains a sequential progression with a descending octatonic bass, supporting harmonies that use all and only the notes of an octatonic scale. 19th century In 1800, Beethoven composed his Piano Sonata No. 11 in B♭, Op. 22. The slow movement of this work contains a passage of what was, for its time, highly dissonant harmony. In a 2005 lecture, pianist András Schiff described the harmony of this passage as "really extraordinary". The chord progressions at the beginning of the second and third bars of this passage are octatonic. Octatonic scales can be found in Chopin's Mazurka, Op. 50, No. 3 and in several Liszt piano works: in the closing measures of the third Étude de Concert, "Un Sospiro", for example, where (mm. 66–70) the bass contains a complete falling octatonic scale from D-flat to D-flat; in the opening piano cadenzas of Totentanz, in the lower notes between the alternating hands; and in the First Mephisto Waltz, in which a short cadenza (m. 525) makes use of it by harmonizing it with a B-flat diminished seventh chord. Later in the 19th century, the notes in the chords of the coronation bells from the opening scene of Modest Mussorgsky's opera Boris Godunov, which consist of "two dominant seventh chords with roots a tritone apart" according to Taruskin, are entirely derived from an octatonic scale. Taruskin continues: "Thanks to the reinforcement the lesson has received in some equally famous pieces like Scheherazade, the progression is often thought of as being peculiarly Russian." Tchaikovsky was also influenced by the harmonic and coloristic potential of octatonicism. As Mark DeVoto points out, the cascading arpeggios played on the celesta in the "Sugar Plum Fairy" from The Nutcracker ballet are made up of dominant seventh chords a minor third apart. "Hagen's Watch", one of the darkest and most sinister scenes in Richard Wagner's opera Götterdämmerung, features chromatic harmonies using eleven of the twelve chromatic notes, within which the eight notes of the octatonic scale may be found in bars 9–10. Late 19th and 20th century The scale is also found in the music of Claude Debussy and Maurice Ravel. Melodic phrases that move by alternating tones and semitones frequently appear in the works of both these composers. Allen Forte identifies a five-note segment in the cor anglais melody heard near the start of Debussy's "Nuages" from his orchestral suite Nocturnes as octatonic. Mark DeVoto describes "Nuages" as "arguably [Debussy's] boldest single leap into the musical unknown. 'Nuages' defines a kind of tonality never heard before, based on the centricity of a diminished tonic triad (B-D-F natural)." According to Stephen Walsh, the cor anglais theme "hangs in the texture like some motionless object, always the same and always at the same pitch". There is a particularly striking and effective use of the octatonic scale in the opening bars of Liszt's late piece Bagatelle sans tonalité from 1885. The scale was extensively used by Rimsky-Korsakov's student Igor Stravinsky, particularly in his Russian-period works, from Petrushka (1911) and The Rite of Spring (1913) up to the Symphonies of Wind Instruments (1920). 
Passages using this scale are unmistakable as early as the Scherzo fantastique and Fireworks (both from 1908), and The Firebird (1910). It also appears in later works by Stravinsky, such as the Symphony of Psalms (1930), the Symphony in Three Movements (1945), most of the neoclassical works from the Octet (1923) to Agon (1957), and even some of the later serial compositions such as the Canticum Sacrum (1955) and Threni (1958). In fact, "few if any composers have been known to employ relations available to the collection as extensively or in as varied a manner as Stravinsky". The second movement of Stravinsky's Octet for wind instruments opens with what Stephen Walsh calls "a broad melody completely in the octatonic scale". Jonathan Cross describes a highly rhythmic passage in the first movement of the Symphony in Three Movements as "gloriously octatonic, not an unfamiliar situation in jazz, where this mode is known as the 'diminished scale', but Stravinsky of course knew it from Rimsky. The 'rumba' passage... alternates chords of E-flat7 and C7, over and over, distantly recalling the coronation scene from Mussorgsky's Boris Godunov. In celebrating America, the émigré looked back once again to Russia." Van den Toorn catalogues many other octatonic moments in Stravinsky's music. The scale may also be found in the music of Alexander Scriabin and Béla Bartók. In Bartók's Bagatelles, Fourth Quartet, Cantata Profana, and Improvisations, the octatonic is used with the diatonic, whole-tone, and other "abstract pitch formations", all "entwined... in a very complex mixture". Mikrokosmos Nos. 99, 101, and 109 are octatonic pieces, as is No. 33 of the 44 Duos for Two Violins. "In each piece, changes of motive and phrase correspond to changes from one of the three octatonic scales to another, and one can easily select a single central and referential form of 8-28 in the context of each complete piece." However, even his larger pieces also feature "sections that are intelligible as 'octatonic music'". Olivier Messiaen made frequent use of the octatonic scale throughout his career as a composer; indeed, in his seven modes of limited transposition, the octatonic scale is Mode 2. Peter Hill writes in detail about "La Colombe" (The Dove), the first of a set of Preludes for piano that Messiaen completed in 1929, at the age of 20. Hill speaks of a characteristic "merging of tonality (E major) with the octatonic mode" in this short piece. Other twentieth-century composers who used octatonic collections include Samuel Barber, Ernest Bloch, Benjamin Britten, Julian Cochran, George Crumb, Irving Fine, Ross Lee Finney, Alberto Ginastera, John Harbison, Jacques Hétu, Aram Khachaturian, Witold Lutosławski, Darius Milhaud, Henri Dutilleux, Robert Morris, Carl Orff, Jean Papineau-Couture, Krzysztof Penderecki, Francis Poulenc, Sergei Prokofiev, Alexander Scriabin, Dmitri Shostakovich, Toru Takemitsu, Joan Tower, Robert Xavier Rodriguez, John Williams and Frank Zappa. Other composers include Willem Pijper, who may have inferred the collection from Stravinsky's The Rite of Spring, which he greatly admired, and who composed at least one piece—his Piano Sonatina No. 2—entirely in the octatonic system. In the 1920s, Heinrich Schenker criticized the use of the octatonic scale, specifically in Stravinsky's Concerto for Piano and Wind Instruments, for the oblique relation between the diatonic scale and the harmonic and melodic surface. 
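The transpositional symmetry described under Properties above can also be checked mechanically. The following minimal Python sketch (using the same assumed pitch-class numbering, C = 0) verifies that each collection is closed under transposition by 3, 6, and 9 semitones, and that every other transposition permutes the three collections among themselves.

```python
def transpose(pcs, n):
    """Transpose a pitch-class set upward by n semitones (mod 12)."""
    return frozenset((p + n) % 12 for p in pcs)

# The three octatonic collections, named as in the text above:
COLLECTIONS = {
    "C#": frozenset({1, 3, 4, 6, 7, 9, 10, 0}),
    "D":  frozenset({2, 4, 5, 7, 8, 10, 11, 1}),
    "Eb": frozenset({3, 5, 6, 8, 9, 11, 0, 2}),
}

for name, coll in COLLECTIONS.items():
    for n in range(12):
        image = transpose(coll, n)
        if n % 3 == 0:
            # Transpositions by 0, 3, 6, 9 leave the collection unchanged.
            assert image == coll
        else:
            # All other transpositions yield one of the other two.
            assert image in COLLECTIONS.values() and image != coll
print("transposition symmetry verified")
```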
Jazz Both the half-whole diminished scale and its partner mode, the whole-half diminished (with a tone rather than a semitone beginning the pattern), are commonly used in jazz improvisation, frequently under different names. The whole-half diminished scale is commonly used in conjunction with diminished harmony (e.g., the Edim7 chord), while the half-whole scale is used in dominant harmony (e.g., with an F7♭9 chord). Examples of octatonic jazz include Jaco Pastorius' composition "Opus Pocus", from the album Jaco Pastorius, and Herbie Hancock's piano solo on "Freedom Jazz Dance", from the album Miles Smiles (1967). Rock and pop Jonny Greenwood of the English rock group Radiohead uses the octatonic scale extensively, for example in the song "Just" and in his soundtrack for the film The Power of the Dog. He said: "It's a slightly more grownup version of the pentatonic scales that we're all taught to do with xylophones and glockenspiels when you're a kid. It's not a major scale or a minor scale; it's something else. But all the notes work together and make a certain color that is its own thing." The scale is used in progressive heavy metal music, such as that by Dream Theater and Opeth, both of which strive for a dissonant and tonally ambiguous sound in their music. Examples include the instrumental break in Dream Theater's Octavarium and Opeth's Deliverance. Earlier examples of the scale's use in progressive rock include King Crimson's Red and Emerson, Lake & Palmer's The Barbarian. Progressive keyboardist Derek Sherinian is also closely associated with the octatonic scale, which can be found in most of his works, both solo and as part of a band. Examples include Planet X's Desert Girl and Sons of Apollo's King of Delusion. The dissonances associated with the scale when used in conjunction with conventional tonality form an integral part of his signature sound, which has influenced many keyboardists of the 21st century. Harmonic implications Petrushka chord The Petrushka chord is a recurring polytonal device used in Igor Stravinsky's ballet Petrushka and in later music. In the Petrushka chord, two major triads, C major and F♯ major – a tritone apart – clash "horribly with each other" when sounded together, creating a dissonant chord. The six-note chord is contained within an octatonic scale. French sixth and Mystic chord While used functionally as a pre-dominant chord in the classical period, late romantic composers saw the French sixth as a dissonant and unstable chord. The chord can be built from the first, fourth, sixth and eighth degrees of the half-step/whole-step octatonic scale, and it is invariant under transposition by a tritone, a property that somewhat contributed to its popularity. The octatonic collection contains two distinct French sixth chords a minor third apart, and since they share no notes, the scale can be thought of as the union of those two chords. For example, two French sixths based on G and E contain all the notes of an octatonic scale between them. The octatonic scale is used very frequently for melodic material above a French sixth chord throughout the 19th and 20th centuries, particularly in Russia, in the music of Rimsky-Korsakov, Mussorgsky, Scriabin and Stravinsky, but also outside Russia in the works of Debussy and Ravel. Examples include Rimsky-Korsakov's Scheherazade, Scriabin's Five Preludes, Op. 74, Debussy's Nuages and Ravel's Scarbo. All these works are full of non-functional French sixths, and the octatonic scale is almost always the mode of choice. 
By adding a major sixth above the root, from within the scale, and a major second, from outside the scale, the new chord is the Mystic chord found in some of Scriabin's late works. While the Mystic chord is no longer transpositionally invariant, Scriabin teases the tritone symmetry of the French sixth in his music by alternating transpositions of the Mystic chord a tritone apart, implying the notes of an octatonic scale. Bitonality In Béla Bartók's piano piece "Diminished Fifth" from Mikrokosmos, octatonic collections form the basis of the pitch content. In mm. 1–11, all eight pitch classes from the E♭ diminished scale appear. In mm. 1–4, the pitch classes A, B, C, and D appear in the right hand, and the pitch classes E♭, F, G♭, and A♭ are in the left hand. The collection in the right hand outlines the first four notes of an A minor scale, and the collection in the left hand outlines the first four notes of an E♭ minor scale. In mm. 5–11, the left and right hands switch—the A minor tetrachord appears in the left hand, and the E♭ minor tetrachord appears in the right hand. From this, one can see that Bartók has partitioned the octatonic collection into two (symmetrical) four-note segments of natural minor scales a tritone apart. Paul Wilson argues against viewing this as bitonality, since "the larger octatonic collection embraces and supports both supposed tonalities". Bartók also utilizes the two other octatonic collections, so that all three possible octatonic collections are found throughout this piece (D♭, D, and E♭). In mm. 12–18, all eight pitch classes from the D♭ octatonic collection are present. The E♭ octatonic collection from mm. 1–11 is related to this D♭ octatonic collection by the transposition operations T1, T4, T7, and T10. In mm. 26–29, all eight pitch classes from the D octatonic collection appear. This collection is related to the E♭ octatonic collection from mm. 1–11 by the following transposition operations: T2, T5, T8, T11. Other relevant features of the piece include the groups of three notes taken from the whole-half diminished scale in mm. 12–18. In these measures, the right hand features D♭, E♭, and G♭, the tetrachord without the 3rd (F♭). The left hand has the same tetrachord transposed down a tritone (G, A, C). In m. 16, both hands transpose down three semitones, to B♭, C, E♭ and E, G♭, A respectively. Later on, in m. 20, the right hand moves on to the A minor tetrachord and the left back to the E♭ minor tetrachord. After repeating the structure of mm. 12–19 in mm. 29–34, the piece ends with the treble part returning to the A minor tetrachord and the bass part returning to the E♭ minor tetrachord. Alpha chord The alpha chord (α chord) collection is "a vertically organized statement of the octatonic scale as two diminished seventh chords", such as: C♯–E–G–B♭–C–E♭–F♯–A. One of the most important subsets of the alpha collection is the alpha chord (Forte number: 4-17, pitch class prime form (0347)), such as E–G–C–E♭; using the theorist Ernő Lendvai's terminology, the C alpha chord may be considered a mistuned major chord, or major/minor in first inversion (in this case, C major/minor). The number of semitones in the interval array of the alpha chord corresponds to the Fibonacci sequence. Beta chord The beta chord (β chord) is a five-note chord, formed from the first five notes of the alpha chord (integers: 0,3,6,9,11; notes: C♯, E, G, B♭, C). The beta chord can also occur in its reduced form, that is, limited to the characteristic tones (C♯, E, G, C and C♯, G, C). Forte number: 5-31B. The beta chord may be created from a diminished seventh chord by adding a diminished octave. 
It may also be created from a major chord by adding the sharpened root (in solfège: in C, di is C♯), giving C, E, G, C♯, or from a diminished triad by adding the root's major seventh (called a diminished major seventh chord, or C♯dimMaj7). When the diminished octave is inverted, it becomes a minor ninth, creating a C7(♭9) chord, a sound commonly heard in the V chord during an authentic cadence in a minor key. Gamma chord The gamma chord (γ chord) is 0,3,6,8,11 (Forte number 5-32A). It is the beta chord with one interval diminished: C♯, E, G, A, C. It may be considered a major-minor minor seventh chord on A: A, C♯, C, E, G. See also: Elektra chord. This is also commonly known as the Hendrix chord, or in jazz music as a dominant 7♯9 chord; the notes in this case create an A7♯9. Hungarian major and Romanian major The Hungarian major scale and the Romanian major scale are both heptatonic subsets of the octatonic scale with one scale degree removed: the Hungarian major scale omits the ♭2 degree, while the Romanian major scale omits the ♭3 degree. See also 15 equal temperament has a ten-note analogue Complexe sonore Alpha scale Beta scale Delta scale Gamma scale List of pieces which use the octatonic scale References Sources Kholopov, Yuri (1982). "Modal harmony. Modality as a type of harmonic structure". Art of Music. General Questions of Music Theory and Aesthetics: 16–31. Orig. title: Модальная гармония: Модальность как тип гармонической структуры // Музыкальное искусство. Общие вопросы теории и эстетики музыки. Tashkent: Издательство литературы и искусства им. Г. Гуляма. Further reading Baur, Steven (1999). "Ravel's 'Russian' Period: Octatonicism in His Early Works, 1893–1908". Journal of the American Musicological Society 52, no. 1. Berger, Arthur (1963). "Problems of Pitch Organization in Stravinsky". Perspectives of New Music 2, no. 1 (Fall–Winter): 11–42. Gillespie, Robert (2015). "Herbie Hancock: Freedom Jazz Dance Transcription". (Accessed 1 October 2015). Tymoczko, Dmitri (2002). "Stravinsky and the Octatonic: A Reconsideration". Music Theory Spectrum 24, no. 1 (Spring): 68–102. Wollner, Fritz (1924). "7 Mysteries of Stravinsky in Progression". German international school of music study. Musical symmetry Post-tonal music theory Persian music
Octatonic scale
[ "Physics" ]
5,742
[ "Symmetry", "Musical symmetry" ]
175,996
https://en.wikipedia.org/wiki/Gray%20goo
Gray goo (also spelled grey goo) is a hypothetical global catastrophic scenario involving molecular nanotechnology in which out-of-control self-replicating machines consume all biomass (and perhaps also everything else) on Earth while building many more of themselves, a scenario that has been called ecophagy. The original idea assumed machines were designed to have this capability, while popularizations have assumed that machines might somehow gain this capability by accident. Self-replicating machines of the macroscopic variety were originally described by mathematician John von Neumann, and are sometimes referred to as von Neumann machines or clanking replicators. The term gray goo was coined by nanotechnology pioneer K. Eric Drexler in his 1986 book Engines of Creation. In 2004, he stated, "I wish I had never used the term 'gray goo'." Engines of Creation mentions "gray goo" as a thought experiment in two paragraphs and a note, while the popularized idea of gray goo was first publicized in a mass-circulation magazine, Omni, in November 1986. Definition The term was first used by molecular nanotechnology pioneer K. Eric Drexler in Engines of Creation (1986). In Chapter 4, Engines of Abundance, Drexler illustrates both exponential growth and inherent limits (not gray goo) by describing "dry" nanomachines that can function only if given special raw materials. According to Drexler, the term was popularized by an article in the science fiction magazine Omni, which also popularized the term "nanotechnology" in the same issue. Drexler says arms control is a far greater issue than gray goo "nanobugs". Drexler describes gray goo in Chapter 11 of Engines of Creation, noting that the geometric growth made possible by self-replication is inherently limited by the availability of suitable raw materials. Drexler used the term "gray goo" not to indicate color or texture, but to emphasize the difference between "superiority" in terms of human values and "superiority" in terms of competitive success. Bill Joy, one of the founders of Sun Microsystems, discussed some of the problems with pursuing this technology in his now-famous 2000 article in Wired magazine, titled "Why the Future Doesn't Need Us". In direct response to Joy's concerns, the first quantitative technical analysis of the ecophagy scenario was published in 2000 by nanomedicine pioneer Robert Freitas. Risks and precautions Drexler more recently conceded that there is no need to build anything that even resembles a potential runaway replicator, which would avoid the problem entirely. In a paper in the journal Nanotechnology, he argues that self-replicating machines are needlessly complex and inefficient. His 1992 technical book on advanced nanotechnologies, Nanosystems: Molecular Machinery, Manufacturing, and Computation, describes manufacturing systems that are desktop-scale factories with specialized machines in fixed locations and conveyor belts to move parts from place to place. None of these measures, however, would prevent a party from creating a weaponized gray goo, were such a thing possible. King Charles III (then Prince of Wales) called upon the British Royal Society to investigate the "enormous environmental and social risks" of nanotechnology in a planned report, leading to much media commentary on gray goo. The Royal Society's report on nanoscience was released on 29 July 2004, and declared the possibility of self-replicating machines to lie too far in the future to be of concern to regulators. 
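Drexler's point that geometric (exponential) replication is bounded by the supply of raw materials can be illustrated with a toy calculation. The Python sketch below is purely illustrative, with arbitrary made-up parameters rather than any physical estimate: a replicator population doubles each cycle until the available feedstock runs out.

```python
# Toy model only: hypothetical numbers, not a physical simulation.
feedstock = 1_000_000.0   # arbitrary units of suitable raw material
mass_per_replicator = 1.0
population = 1.0

cycle = 0
while feedstock >= mass_per_replicator and cycle < 60:
    # Each replicator tries to build one copy; growth stops when the
    # remaining feedstock can no longer supply the raw material.
    buildable = min(population, feedstock / mass_per_replicator)
    population += buildable
    feedstock -= buildable * mass_per_replicator
    cycle += 1

print(f"Growth halted after {cycle} doubling cycles "
      f"with a population of about {population:.0f}")
```

Under these assumptions, growth is explosive for roughly twenty doublings and then stops abruptly, which is the inherent-limit behaviour the chapter describes.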
More recent analysis in the paper titled Safe Exponential Manufacturing, published by the Institute of Physics (co-written by Chris Phoenix, Director of Research of the Center for Responsible Nanotechnology, and Eric Drexler), shows that the danger of gray goo is far less likely than originally thought. However, other long-term major risks to society and the environment from nanotechnology have been identified. Drexler has made a somewhat public effort to retract his gray goo hypothesis, in an effort to focus the debate on more realistic threats associated with knowledge-enabled nanoterrorism and other misuses. In Safe Exponential Manufacturing, which was published in a 2004 issue of Nanotechnology, it was suggested that creating manufacturing systems with the ability to self-replicate using their own energy sources would not be needed. The Foresight Institute also recommended embedding controls in the molecular machines. These controls would be able to prevent anyone from purposely abusing nanotechnology, and therefore avoid the gray goo scenario. Ethics and chaos Gray goo is a useful construct for considering low-probability, high-impact outcomes from emerging technologies. Thus, it is a useful tool in the ethics of technology. Daniel A. Vallero applied it as a worst-case scenario thought experiment for technologists contemplating possible risks from advancing a technology. This requires that a decision tree or event tree include even extremely low-probability events if such events may have an extremely negative and irreversible consequence, i.e. an application of the precautionary principle. Dianne Irving admonishes that "any error in science will have a rippling effect". Vallero adapted this reference to chaos theory to emerging technologies, wherein slight perturbations of initial conditions can lead to unforeseen and profoundly negative downstream effects, for which the technologist and the new technology's proponents must be held accountable. In popular culture Grey goo is the basis for "Benderama", an episode of the animated science fiction sitcom Futurama. In this episode, Bender creates smaller copies of himself to accomplish mundane tasks, which quickly spirals out of control as those copies begin replicating themselves, eventually reaching a stage where the copies are small enough to manipulate matter at the subatomic level. The Horizon video game series is set in the post-apocalyptic aftermath of a gray goo scenario, in which a self-replicating swarm of insectoid robots ends up devouring the Earth's biosphere, rendering all life on the planet extinct. Humanity is reduced to living in scattered, primitive tribes after the species was revived via an automated terraforming system over the course of centuries. See also References Further reading Lynn Margulis and Dorion Sagan – What Is Life? (1995). Simon & Schuster. Bill Bryson – A Short History of Nearly Everything (2003) Green Goo – Life in the Era of Humane Genocide by Nick Szabo Green Goo: Nanotechnology Comes Alive!
Green Goo: The New Nanothreat from Wired External links Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations Safe exponential manufacturing Paper critical of "grey goo," summarized in the article Nanotechnology pioneer slays "grey goo" myths Online edition of the Royal Society's report Nanoscience and nanotechnologies: opportunities and uncertainties UK Government & Royal Society commission on Nanotechnology and Nanoscience Nanotechnology: Drexler and Smalley make the case for and against 'molecular assemblers' (Richard Smalley argues that laws of chemistry imply it will be impossible to ever create "self-replicating nanobots" whose abilities to assemble molecules are significantly different from those of biological self-replicators. Some pro-nanobot responses to Smalley's argument can be found at Debate About Assemblers — Smalley Rebuttal, The Drexler-Smalley debate on molecular assembly, Of Chemistry, Nanobots, and Policy, and Is the Revolution Real?) Nanotechnology and the Grey Goo Problem, BBC Artificial life Doomsday scenarios 1980s neologisms Nanotechnology Multi-robot systems Self-replicating machines Thought experiments in ethics
Gray goo
[ "Physics", "Materials_science", "Technology", "Engineering", "Biology" ]
1,577
[ "Machines", "Self-replicating machines", "Materials science", "Self-replication", "Physical systems", "Nanotechnology" ]
176,052
https://en.wikipedia.org/wiki/Molecular%20evolution
Molecular evolution describes how inherited DNA and/or RNA change over evolutionary time, and the consequences of this for proteins and other components of cells and organisms. Molecular evolution is the basis of phylogenetic approaches to describing the tree of life. Molecular evolution overlaps with population genetics, especially on shorter timescales. Topics in molecular evolution include the origins of new genes, the genetic nature of complex traits, the genetic basis of adaptation and speciation, the evolution of development, and patterns and processes underlying genomic changes during evolution. History The history of molecular evolution starts in the early 20th century with comparative biochemistry, and the use of "fingerprinting" methods such as immune assays, gel electrophoresis, and paper chromatography in the 1950s to explore homologous proteins. The advent of protein sequencing allowed molecular biologists to create phylogenies based on sequence comparison, and to use the differences between homologous sequences as a molecular clock to estimate the time since the most recent common ancestor. The surprisingly large amount of molecular divergence within and between species inspired the neutral theory of molecular evolution in the late 1960s. Neutral theory also provided a theoretical basis for the molecular clock, although this is not needed for the clock's validity. After the 1970s, nucleic acid sequencing allowed molecular evolution to reach beyond proteins to highly conserved ribosomal RNA sequences, the foundation of a reconceptualization of the early history of life. The Society for Molecular Biology and Evolution was founded in 1982. Molecular phylogenetics Molecular phylogenetics uses DNA, RNA, or protein sequences to resolve questions in systematics, i.e. about their correct scientific classification from the point of view of evolutionary history. The result of a molecular phylogenetic analysis is expressed in a phylogenetic tree. Phylogenetic inference is conducted using data from DNA sequencing. The sequences are aligned to identify which sites are homologous. A substitution model describes which patterns of change are expected to be common or rare. Sophisticated computational inference is then used to generate one or more plausible trees. Some phylogenetic methods account for variation among sites and among tree branches. Different genes, e.g. hemoglobin vs. cytochrome c, generally evolve at different rates. These rates are relatively constant over time (e.g., hemoglobin does not evolve at the same rate as cytochrome c, but hemoglobins from humans, mice, etc. do have comparable rates of evolution), although rapid evolution along one branch can indicate increased directional selection on that branch. Purifying selection causes functionally important regions to evolve more slowly, and amino acid substitutions involving similar amino acids occur more often than dissimilar substitutions. Gene family evolution Gene duplication can produce multiple homologous proteins (paralogs) within the same species. Phylogenetic analysis of proteins has revealed how proteins evolve and change their structure and function over time. For example, ribonucleotide reductase (RNR) has evolved a multitude of structural and functional variants. Class I RNRs use a ferritin subunit and differ by the metal they use as a cofactor. In class II RNRs, the thiyl radical is generated using an adenosylcobalamin cofactor, and these enzymes do not require additional subunits (as opposed to class I enzymes, which do).
In class III RNRs, the thiyl radical is generated using S-adenosylmethionine bound to a [4Fe-4S] cluster. That is, within a single family of proteins, numerous structural and functional mechanisms can evolve. In a proof-of-concept study, Bhattacharya and colleagues converted myoglobin, a non-enzymatic oxygen storage protein, into a highly efficient Kemp eliminase using only three mutations. This demonstrates that only a few mutations are needed to radically change the function of a protein. Directed evolution is the attempt to engineer proteins using methods inspired by molecular evolution. Molecular evolution at one site Change at one locus begins with a new mutation, which might become fixed due to some combination of natural selection, genetic drift, and gene conversion. Mutation Mutations are permanent, transmissible changes to the genetic material (DNA or RNA) of a cell or virus. Mutations result from errors in DNA replication during cell division and from exposure to radiation, chemicals, other environmental stressors, viruses, or transposable elements. When point mutations to just one base pair of the DNA fall within a region coding for a protein, they are characterized by whether they are synonymous (do not change the amino acid sequence) or non-synonymous. Other types of mutations modify larger segments of DNA and can cause duplications, insertions, deletions, inversions, and translocations. The distribution of rates for diverse kinds of mutations is called the "mutation spectrum". Mutations of different types occur at widely varying rates. Point mutation rates for most organisms are very low, roughly 10−9 to 10−8 per site per generation, though some viruses have higher mutation rates on the order of 10−6 per site per generation. Transitions (A ↔ G or C ↔ T) are more common than transversions (purine (adenine or guanine) ↔ pyrimidine (cytosine or thymine, or in RNA, uracil)). Perhaps the most common type of mutation in humans is a change in the length of a short tandem repeat (e.g., the CAG repeats underlying various disease-associated mutations). Such STR mutations may occur at rates on the order of 10−3 per generation. Different frequencies of different types of mutations can play an important role in evolution via bias in the introduction of variation (arrival bias), contributing to parallelism, trends, and differences in the navigability of adaptive landscapes. Mutation bias makes systematic or predictable contributions to parallel evolution. Since the 1960s, genomic GC content has been thought to reflect mutational tendencies. Mutational biases also contribute to codon usage bias. Although such hypotheses are often associated with neutrality, recent theoretical and empirical results have established that mutational tendencies can influence both neutral and adaptive evolution via bias in the introduction of variation (arrival bias). Selection Selection can occur when an allele confers greater fitness, i.e. greater ability to survive or reproduce, on the average individual that carries it. A selectionist approach emphasizes, for example, that biases in codon usage are due at least in part to the ability of even weak selection to shape molecular evolution. Selection can also operate at the gene level at the expense of organismal fitness, resulting in intragenomic conflict. This is because there can be a selective advantage for selfish genetic elements in spite of a host cost.
Examples of such selfish elements include transposable elements, meiotic drivers, and selfish mitochondria. Selection can be detected using the Ka/Ks ratio or the McDonald–Kreitman test. Rapid adaptive evolution is often found for genes involved in intragenomic conflict, sexually antagonistic coevolution, and the immune system. Genetic drift Genetic drift is the change of allele frequencies from one generation to the next due to stochastic effects of random sampling in finite populations. These effects can accumulate until a mutation becomes fixed in a population. For neutral mutations, the rate of fixation per generation is equal to the mutation rate per replication. A relatively constant mutation rate thus produces a constant rate of change per generation (the molecular clock). Slightly deleterious mutations, with a selection coefficient smaller in magnitude than 1 divided by the effective population size, can also fix. Many genomic features have been ascribed to accumulation of nearly neutral detrimental mutations as a result of small effective population sizes. With a smaller effective population size, a larger variety of mutations will behave as if they are neutral due to the inefficiency of selection. Gene conversion Gene conversion occurs during recombination, when nucleotide damage is repaired using a homologous genomic region as a template. It can be a biased process, i.e. one allele may have a higher probability of being the donor than the other in a gene conversion event. In particular, GC-biased gene conversion tends to increase the GC content of genomes, particularly in regions with higher recombination rates. There is also evidence for GC bias in the mismatch repair process. It is thought that this may be an adaptation to the high rate of methyl-cytosine deamination, which can lead to C→T transitions. The dynamics of biased gene conversion resemble those of natural selection, in that a favored allele will tend to increase exponentially in frequency when rare. Genome architecture Genome size Genome size is influenced by the amount of repetitive DNA as well as the number of genes in an organism. Some organisms, such as most bacteria, Drosophila, and Arabidopsis, have particularly compact genomes with little repetitive content or non-coding DNA. Other organisms, like mammals or maize, have large amounts of repetitive DNA, long introns, and substantial spacing between genes. The C-value paradox refers to the lack of correlation between organism 'complexity' and genome size. Explanations for the so-called paradox are two-fold. First, repetitive genetic elements can comprise large portions of the genome for many organisms, thereby inflating the DNA content of the haploid genome. Repetitive genetic elements are often descended from transposable elements. Secondly, the number of genes is not necessarily indicative of the number of developmental stages or tissue types in an organism. An organism with few developmental stages or tissue types may have large numbers of genes that influence non-developmental phenotypes, inflating gene content relative to developmental gene families. Neutral explanations for genome size suggest that when population sizes are small, many mutations become nearly neutral. Hence, in small populations repetitive content and other 'junk' DNA can accumulate without placing the organism at a competitive disadvantage. There is little evidence to suggest that genome size is under strong widespread selection in multicellular eukaryotes.
Genome size, independent of gene content, correlates poorly with most physiological traits, and many eukaryotes, including mammals, harbor very large amounts of repetitive DNA. However, birds likely have experienced strong selection for reduced genome size, in response to the changing energetic needs of flight. Birds, unlike humans, produce nucleated red blood cells, and larger nuclei lead to lower levels of oxygen transport. Bird metabolism is far higher than that of mammals, due largely to flight, and oxygen needs are high. Hence, most birds have small, compact genomes with few repetitive elements. Indirect evidence suggests that the non-avian theropod dinosaur ancestors of modern birds also had reduced genome sizes, consistent with endothermy and the high energetic needs for running speed. Many bacteria have also experienced selection for small genome size, as time of replication and energy consumption are so tightly correlated with fitness. Chromosome number and organization The ant Myrmecia pilosula has only a single pair of chromosomes, whereas the adder's-tongue fern Ophioglossum reticulatum has up to 1260 chromosomes. The number of chromosomes in an organism's genome does not necessarily correlate with the amount of DNA in its genome. The genome-wide amount of recombination is directly controlled by the number of chromosomes, with one crossover per chromosome or per chromosome arm, depending on the species. Changes in chromosome number can play a key role in speciation, as differing chromosome numbers can serve as a barrier to reproduction in hybrids. Human chromosome 2 was created from a fusion of two ancestral chromosomes that remained separate in the chimpanzee lineage, and it still contains central telomeres as well as a vestigial second centromere. Polyploidy, especially allopolyploidy, which occurs often in plants, can also result in reproductive incompatibilities with parental species. Agrodiatus blue butterflies have diverse chromosome numbers ranging from n=10 to n=134 and additionally have one of the highest rates of speciation identified to date. Ciliate genomes house each gene on an individual chromosome. Organelles In addition to the nuclear genome, endosymbiont organelles contain their own genetic material. Mitochondrial and chloroplast DNA varies across taxa, but membrane-bound proteins, especially electron transport chain constituents, are most often encoded in the organelle. Chloroplasts and mitochondria are maternally inherited in most species, as the organelles must pass through the egg. In a rare departure, some species of mussels are known to inherit mitochondria from father to son. Origins of new genes New genes arise from several different genetic mechanisms, including gene duplication, de novo gene birth, retrotransposition, chimeric gene formation, recruitment of non-coding sequence into an existing gene, and gene truncation. Gene duplication initially leads to redundancy. However, duplicated gene sequences can mutate to develop new functions, or specialize so that the new gene performs a subset of the original ancestral functions. Retrotransposition duplicates genes by copying mRNA to DNA and inserting it into the genome. Retrogenes generally insert into new genomic locations, lack introns, and sometimes develop new expression patterns and functions. Chimeric genes form when duplication, deletion, or incomplete retrotransposition combine portions of two different coding sequences to produce a novel gene sequence. Chimeras often cause regulatory changes and can shuffle protein domains to produce novel adaptive functions.
De novo gene birth can give rise to protein-coding genes and non-coding genes from previously non-functional DNA. For instance, Levine and colleagues reported the origin of five new genes in the D. melanogaster genome. Similar de novo origin of genes has also been shown in other organisms such as yeast, rice, and humans. De novo genes may evolve from spurious transcripts that are already expressed at low levels. Constructive neutral evolution Constructive neutral evolution (CNE) explains how complex systems can emerge and spread through a population by neutral transitions, via the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities. Journals and societies The Society for Molecular Biology and Evolution publishes the journals "Molecular Biology and Evolution" and "Genome Biology and Evolution" and holds an annual international meeting. Other journals dedicated to molecular evolution include Journal of Molecular Evolution and Molecular Phylogenetics and Evolution. Research in molecular evolution is also published in journals of genetics, molecular biology, genomics, systematics, and evolutionary biology. See also Evolution E. coli long-term evolution experiment Evolutionary physiology Genomic organization Genome evolution Heterotachy History of molecular evolution Horizontal gene transfer Human evolution Molecular clock Molecular paleontology Nearly neutral theory of molecular evolution Neutral theory of molecular evolution Nucleotide diversity Phylogenetic comparative methods Phylogenetics Population genetics Selection References Further reading Kimura, M. (1968). "Evolutionary rate at the molecular level". Nature. Molecular evolution
Molecular evolution
[ "Chemistry", "Biology" ]
3,076
[ "Evolutionary processes", "Molecular evolution", "Molecular biology" ]
176,053
https://en.wikipedia.org/wiki/Angle%20of%20view%20%28photography%29
In photography, angle of view (AOV) describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view. It is important to distinguish the angle of view from the angle of coverage, which describes the angle range that a lens can image on a given image sensor or film location (the image plane). In other words, the angle of coverage is determined by the lens and the image plane, while the angle of view (AOV) depends not only on these but also on the film or image sensor size. The image circle (giving the angle of coverage) produced by a lens on a given image plane is typically large enough to completely cover a film or sensor at the plane, possibly including some vignetting toward the edge. If the angle of coverage of the lens does not fill the sensor, the image circle will be visible, typically with strong vignetting toward the edge, and the effective angle of view will be limited to the angle of coverage. As mentioned above, a camera's angle of view depends not only on the lens, but also on the image sensor or film. Digital sensors are usually smaller than 35 mm film, and this causes the lens to have a narrower angle of view than with 35 mm film, by a constant factor for each sensor (called the crop factor). In everyday digital cameras, the crop factor can range from around 1, called full frame (professional digital SLRs where the sensor size is similar to 35 mm film), to 1.6 (consumer SLR), to 2 (Micro Four Thirds ILC), and to 6 (most compact cameras). So, a standard 50 mm lens for 35 mm film photography acts like a 50 mm standard "film" lens on a professional digital SLR (with crop factor = 1) and would act closer to an 80 mm lens (= 1.6 × 50 mm) on many mid-market DSLRs (with crop factor = 1.6). Similarly, to obtain the 40-degree angle of view that a standard 50 mm lens gives on a 35 mm film camera, a digital SLR with crop factor 1.6 needs a roughly 31 mm (= 50 mm / 1.6) lens. Calculating a camera's angle of view For lenses projecting rectilinear (non-spatially-distorted) images of distant objects, the effective focal length and the image format dimensions completely define the angle of view. Calculations for lenses producing non-rectilinear images are much more complex and, in the end, not very useful in most practical applications. (In the case of a lens with distortion, e.g., a fisheye lens, a longer lens with distortion can have a wider angle of view than a shorter lens with low distortion.) Angle of view may be measured horizontally (from the left to right edge of the frame), vertically (from the top to bottom of the frame), or diagonally (from one corner of the frame to its opposite corner). For a lens projecting a rectilinear image (focused at infinity, see derivation), the angle of view (α) can be calculated from the chosen dimension (d) and effective focal length (f) as follows: α = 2 arctan(d / 2f). (Here f is the distance of the lens with respect to the image plane; for a thick lens, it is the distance of the rear principal plane of the lens with respect to the image plane.) The dimension d represents the size of the film (or sensor) in the direction measured (see below: sensor effects). For example, for 35 mm film, which is 36 mm wide and 24 mm high, d = 36 mm would be used to obtain the horizontal angle of view and d = 24 mm for the vertical angle. Because this is a trigonometric function, the angle of view does not vary quite linearly with the reciprocal of the focal length. However, except for wide-angle lenses, it is reasonable to approximate α ≈ d/f radians, or α ≈ 180d/(πf) degrees.
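As a concrete check of this formula, here is a minimal Python sketch (an illustration added for this discussion, not part of the original article) that evaluates α = 2 arctan(d / 2f) for the three measurement directions of the full-frame 36 mm × 24 mm format:

import math

def angle_of_view(d_mm: float, f_mm: float) -> float:
    """Angle of view in degrees for a rectilinear lens focused at infinity.
    d_mm is the frame dimension measured (horizontal, vertical, or diagonal);
    f_mm is the effective focal length."""
    return math.degrees(2 * math.atan(d_mm / (2 * f_mm)))

# Full-frame 36 mm x 24 mm format with a 50 mm lens:
for label, d in (("horizontal", 36.0), ("vertical", 24.0), ("diagonal", 43.3)):
    print(label, round(angle_of_view(d, 50.0), 1))
# horizontal 39.6, vertical 27.0, diagonal 46.8 (degrees)

These values match the worked example given below.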
The effective focal length is nearly equal to the stated focal length of the lens (F), except in macro photography, where the lens-to-object distance is comparable to the focal length. In this case, the absolute transverse magnification factor (m) must be taken into account: f = F · (1 + m). (In photography, the magnification is usually defined to be positive, despite the inverted image.) For example, with a magnification ratio of 1:2, we find f = 1.5 · F, and thus the angle of view is reduced by 33% compared to focusing on a distant object with the same lens. Angle of view can also be determined using FOV tables or paper or software lens calculators. Example Consider a 35 mm camera with a lens having a focal length of F = 50 mm. The dimensions of the 35 mm image format are 24 mm (vertical) × 36 mm (horizontal), giving a diagonal of about 43.3 mm. At infinity focus, f = F, and the angles of view are: horizontally, α ≈ 39.6°; vertically, α ≈ 27.0°; diagonally, α ≈ 46.8°. Derivation of the angle-of-view formula Consider a rectilinear lens in a camera used to photograph an object at a distance S1, and forming an image that just barely fits in the dimension, d, of the frame (the film or image sensor). Treat the lens as if it were a pinhole at distance S2 from the image plane (technically, the center of perspective of a rectilinear lens is at the center of its entrance pupil, where the chief rays meet). Now α/2 is the angle between the optical axis of the lens and the ray joining its optical center to the edge of the film. Here α is defined to be the angle of view, since it is the angle enclosing the largest object whose image can fit on the film. We want to find the relationship between: the angle α; the "opposite" side of the right triangle, d/2 (half the film-format dimension); and the "adjacent" side, S2 (distance from the lens to the image plane). Using basic trigonometry, we find: tan(α/2) = d / (2 S2), which we can solve for α, giving: α = 2 arctan(d / 2 S2). To project a sharp image of distant objects, S2 needs to be equal to the focal length, F, which is attained by setting the lens for infinity focus. Then the angle of view is given by: α = 2 arctan(d / 2f), where f = F. Note that the angle of view varies slightly when the focus is not at infinity (see breathing (lens)), given by α = 2 arctan(d (S1 − F) / (2 S1 F)), a rearrangement of the lens equation. Macro photography For macro photography, we cannot neglect the difference between S2 and F. From the lens formula, 1/S1 + 1/S2 = 1/F. The absolute transverse magnification (the absolute ratio of the image height to the object height) can be expressed as m = S2/S1; we can substitute for S1 and with some algebra find: S2 = F · (1 + m). Defining f = S2 as the "effective focal length", we get the formula presented above: α = 2 arctan(d / 2f), where f = F · (1 + m). A second effect which comes into play in macro photography is lens asymmetry (an asymmetric lens is a lens where the aperture appears to have different dimensions when viewed from the front and from the back). The lens asymmetry causes an offset between the nodal plane and pupil positions. The effect can be quantified using the ratio (P) between apparent exit pupil diameter and entrance pupil diameter. The full formula for angle of view now becomes: α = 2 arctan(d / (2F · (1 + m/P))). Measuring a camera's field of view In the optical instrumentation industry the term field of view (FOV) is most often used, though the measurements are still expressed as angles. Optical tests are commonly used for measuring the FOV of UV, visible, and infrared (wavelengths about 0.1–20 μm in the electromagnetic spectrum) sensors and cameras.
The purpose of this test is to measure the horizontal and vertical FOV of a lens and sensor used in an imaging system, when the lens focal length or sensor size is not known (that is, when the calculation above is not immediately applicable). Although this is one typical method that the optics industry uses to measure the FOV, there exist many other possible methods. UV/visible light from an integrating sphere (and/or another source such as a black body) is focused onto a square test target at the focal plane of a collimator, such that a virtual image of the test target will be seen infinitely far away by the camera under test. The camera under test senses a real image of the virtual image of the target, and the sensed image is displayed on a monitor, where it can be measured. Dimensions of the full image display and of the portion of the image that is the target are determined by inspection (measurements are typically in pixels, but can just as well be inches or cm). D = dimension of full image; d = dimension of image of target. The collimator's distant virtual image of the target subtends a certain angle, referred to as the angular extent of the target, that depends on the collimator focal length and the target size. Assuming the sensed image includes the whole target, the angle seen by the camera, its FOV, is this angular extent of the target times the ratio of full image size to target image size. The target's angular extent is: α = 2 arctan(L / 2f_c), where L is the dimension of the target and f_c is the focal length of the collimator. The total field of view is then approximately: FOV = α · (D/d), or more precisely, if the imaging system is rectilinear: FOV = 2 arctan(L·D / (2 f_c · d)). This calculation could be a horizontal or a vertical FOV, depending on how the target and image are measured. Lens types and effects Focal length Lenses are often referred to by terms that express their angle of view: Fisheye lenses: typical focal lengths are between 8 mm and 10 mm for circular images, and 15–16 mm for full-frame images; angles of view reach 180° and beyond. A circular fisheye lens (as opposed to a full-frame fisheye) is an example of a lens where the angle of coverage is less than the angle of view. The image projected onto the film is circular because the diameter of the image projected is narrower than that needed to cover the widest portion of the film. Ultra wide-angle lenses: rectilinear lenses with focal lengths shorter than 24 mm in 35 mm film format; here 14 mm gives 114° and 24 mm gives 84°. Wide-angle lenses (24–35 mm in 35 mm film format) cover between 84° and 64°. Normal, or standard, lenses (36–60 mm in 35 mm film format) cover between 62° and 40°. Long-focus lenses (any lens with a focal length greater than the diagonal of the film or sensor used) generally have an angle of view of 35° or less. Since photographers usually only encounter the telephoto lens sub-type, they are referred to in common photographic parlance as: "Medium telephoto", a focal length of 85 mm to 250 mm in 35 mm film format, covering between 30° and 10°; "Super telephoto" (over 300 mm in 35 mm film format), generally covering from 8° down to less than 1°. Zoom lenses are a special case wherein the focal length, and hence angle of view, of the lens can be altered mechanically without removing the lens from the camera. Characteristics For a given camera–subject distance, longer lenses magnify the subject more.
For a given subject magnification (and thus different camera–subject distances), longer lenses appear to compress distance; wider lenses appear to expand the distance between objects. Another result of using a wide-angle lens is a greater apparent perspective distortion when the camera is not aligned perpendicularly to the subject: parallel lines converge at the same rate as with a normal lens, but converge more due to the wider total field. For example, buildings appear to be falling backwards much more severely when the camera is pointed upward from ground level than they would if photographed with a normal lens at the same distance from the subject, because more of the subject building is visible in the wide-angle shot. Because different lenses generally require a different camera–subject distance to preserve the size of a subject, changing the angle of view can indirectly distort perspective, changing the apparent relative size of the subject and foreground. If the subject image size remains the same, then at any given aperture all lenses, wide-angle and long lenses alike, will give the same depth of field. Examples An example of how lens choice affects angle of view. Common lens angles of view This table shows the diagonal, horizontal, and vertical angles of view, in degrees, for lenses producing rectilinear images, when used with the 36 mm × 24 mm format (that is, 135 film or full-frame 35 mm digital, using width 36 mm, height 24 mm, and diagonal 43.3 mm for d in the formula above). Digital compact cameras sometimes state the focal lengths of their lenses in 35 mm equivalents, which can be used in this table. For comparison, the human visual system perceives an angle of view of about 140° by 80°. Sensor size effects ("crop factor") As noted above, a camera's angle of view depends not only on the lens, but also on the sensor used. Digital sensors are usually smaller than 35 mm film, causing the lens to behave as a longer focal length lens would behave, and to have a narrower angle of view than with 35 mm film, by a constant factor for each sensor (called the crop factor). In everyday digital cameras, the crop factor can range from around 1 (professional digital SLRs), to 1.6 (mid-market SLRs), to around 3 to 6 for compact cameras. So a standard 50 mm lens for 35 mm photography acts like a 50 mm standard "film" lens even on a professional digital SLR, but would act closer to a 75 mm (1.5 × 50 mm, Nikon) or 80 mm (1.6 × 50 mm, Canon) lens on many mid-market DSLRs, and the 40-degree angle of view of a standard 50 mm lens on a film camera is equivalent to that of a 28–35 mm lens on many digital SLRs. The table below shows the horizontal, vertical and diagonal angles of view, in degrees, when used with the 22.2 mm × 14.8 mm format (that is, Canon's DSLR APS-C frame size) with a diagonal of 26.7 mm. Cinematography and video gaming Modifying the angle of view over time (known as zooming) is a frequently used cinematic technique, often combined with camera movement to produce a "dolly zoom" effect, made famous by the film Vertigo. Using a wide angle of view can exaggerate the camera's perceived speed, and is a common technique in tracking shots, phantom rides, and racing video games. See also Field of view in video games.
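To make the crop-factor arithmetic of the sensor-size section concrete, the following small Python sketch (an added illustration, not from the original article) converts a real focal length to its 35 mm equivalent and compares horizontal angles of view:

import math

def equivalent_focal_length(f_mm: float, crop_factor: float) -> float:
    # 35 mm-equivalent focal length: the real focal length scaled by the crop factor.
    return f_mm * crop_factor

def horizontal_aov(f_mm: float, sensor_width_mm: float) -> float:
    # Rectilinear lens at infinity focus: alpha = 2*arctan(d / 2f).
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * f_mm)))

print(equivalent_focal_length(50, 1.6))    # 80.0 -> a 50 mm lens "acts like" 80 mm
print(round(horizontal_aov(50, 22.2), 1))  # ~25.0 deg on a 22.2 mm-wide APS-C sensor
print(round(horizontal_aov(50, 36.0), 1))  # ~39.6 deg on 36 mm-wide full frame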
See also 35 mm equivalent focal length Camera angle Camera coverage Camera operator Cinematic techniques Field of view Filmmaking Multiple-camera setup Single-camera setup Video production Image sensor format Crop factor Ultrawide formats Notes and references External links Simple Explanation of Angle of View and Focal Length Angle of View on digital SLR cameras with reduced sensor size Focal Length and Angle of View Science of photography Geometrical optics Angle
Angle of view (photography)
[ "Physics" ]
3,090
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Wikipedia categories named after physical quantities", "Angle" ]
176,159
https://en.wikipedia.org/wiki/Polymer%20physics
Polymer physics is the field of physics that studies polymers, their fluctuations, their mechanical properties, and the kinetics of reactions involving degradation of polymers and polymerisation of monomers. While it focuses on the perspective of condensed matter physics, polymer physics was originally a branch of statistical physics. Polymer physics and polymer chemistry are also related to the field of polymer science, which is considered the applied part of the study of polymers. Polymers are large molecules and thus are very complicated to treat using deterministic methods. Yet, statistical approaches can yield results and are often pertinent, since large polymers (i.e., polymers with many monomers) are describable efficiently in the thermodynamic limit of infinitely many monomers (although the actual size is clearly finite). Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires the use of principles from statistical mechanics and dynamics. As a corollary, temperature strongly affects the physical behavior of polymers in solution, causing phase transitions, melts, and so on. The statistical approach to polymer physics is based on an analogy between polymer behavior and either Brownian motion or another type of random walk, the self-avoiding walk. The simplest possible polymer model is presented by the ideal chain, corresponding to a simple random walk. Experimental approaches for characterizing polymers are also common, using polymer characterization methods, such as size exclusion chromatography, viscometry, dynamic light scattering, and Automatic Continuous Online Monitoring of Polymerization Reactions (ACOMP), for determining the chemical, physical, and material properties of polymers. These experimental methods inform the mathematical modeling of polymers and give a better understanding of polymer properties. Paul Flory is considered the first scientist to establish the field of polymer physics. French scientists have contributed much since the 1970s (e.g., Pierre-Gilles de Gennes, J. des Cloizeaux). Doi and Edwards wrote a famous book on polymer physics. The Soviet/Russian school of physics (I. M. Lifshitz, A. Yu. Grosberg, A. R. Khokhlov, V. N. Pokrovskii) has also been very active in the development of polymer physics. Models Models of polymer chains are split into two types: "ideal" models and "real" models. Ideal chain models assume that there are no interactions between chain monomers. This assumption is valid for certain polymeric systems, where the positive and negative interactions between the monomers effectively cancel out. Ideal chain models provide a good starting point for the investigation of more complex systems and are better suited to equations with more parameters. Ideal chains The freely-jointed chain is the simplest model of a polymer. In this model, fixed-length polymer segments are linearly connected, and all bond and torsion angles are equiprobable. The polymer can therefore be described by a simple random walk and ideal chain. The model can be extended to include extensible segments in order to represent bond stretching. The freely-rotating chain improves on the freely-jointed chain model by taking into account that polymer segments make a fixed bond angle to neighbouring units because of specific chemical bonding. Under this fixed angle, the segments are still free to rotate and all torsion angles are equally likely. The hindered rotation model assumes that the torsion angle is hindered by a potential energy.
This makes the probability of each torsion angle proportional to a Boltzmann factor: P(θ) ∝ exp(−U(θ)/kT), where U(θ) is the potential determining the probability of each value of θ. In the rotational isomeric state model, the allowed torsion angles are determined by the positions of the minima in the rotational potential energy. Bond lengths and bond angles are constant. The worm-like chain is a more complex model. It takes the persistence length into account. Polymers are not completely flexible; bending them requires energy. At length scales below the persistence length, the polymer behaves more or less like a rigid rod. The finite extensible nonlinear elastic model takes into account non-linearity for finite chains. It is used for computational simulations. Real chains Interactions between chain monomers can be modelled as excluded volume. This causes a reduction in the conformational possibilities of the chain, and leads to a self-avoiding random walk. Self-avoiding random walks have different statistics from simple random walks. Solvent and temperature effect The statistics of a single polymer chain depend upon the solubility of the polymer in the solvent. For a solvent in which the polymer is very soluble (a "good" solvent), the chain is more expanded, while for a solvent in which the polymer is insoluble or barely soluble (a "bad" solvent), the chain segments stay close to each other. In the limit of a very bad solvent the polymer chain merely collapses to form a hard sphere, while in a good solvent the chain swells in order to maximize the number of polymer–fluid contacts. For this case the radius of gyration is approximated using Flory's mean field approach, which yields a scaling for the radius of gyration of: R_g ~ b N^ν, where R_g is the radius of gyration of the polymer, b is the segment length, N is the number of bond segments (equal to the degree of polymerization) of the chain, and ν is the Flory exponent. For good solvent, ν ≈ 3/5; for poor solvent, ν = 1/3. Therefore, a polymer in good solvent has larger size and behaves like a fractal object. In bad solvent it behaves like a solid sphere. In the so-called theta (θ) solvent, ν = 1/2, which is the result of a simple random walk; the chain behaves as if it were an ideal chain. The quality of a solvent depends also on temperature. For a flexible polymer, low temperature may correspond to poor quality and high temperature makes the same solvent good. At a particular temperature called the theta (θ) temperature, the chain behaves as an ideal chain. Excluded volume interaction The ideal chain model assumes that polymer segments can overlap with each other as if the chain were a phantom chain. In reality, two segments cannot occupy the same space at the same time. This interaction between segments is called the excluded volume interaction. The simplest formulation of excluded volume is the self-avoiding random walk, a random walk that cannot repeat its previous path. A path of this walk of N steps in three dimensions represents a conformation of a polymer with excluded volume interaction. Because of the self-avoiding nature of this model, the number of possible conformations is significantly reduced. The radius of gyration is generally larger than that of the ideal chain. Flexibility and reptation Whether a polymer is flexible or not depends on the scale of interest. For example, the persistence length of double-stranded DNA is about 50 nm. Looking at length scales smaller than 50 nm, it behaves more or less like a rigid rod. At length scales much larger than 50 nm, it behaves like a flexible chain.
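Before moving on to reptation, the chain statistics discussed above can be checked numerically. The following Monte Carlo sketch in Python (an added illustration, not part of the original text) samples freely jointed chains and confirms the ideal-chain result ⟨R²⟩ = Nb², i.e. ν = 1/2; a self-avoiding walk would instead swell toward ν ≈ 3/5:

import math
import random

def freely_jointed_r2(n_segments: int, b: float = 1.0) -> float:
    """Squared end-to-end distance of one freely jointed chain realization."""
    x = y = z = 0.0
    for _ in range(n_segments):
        # Draw a segment direction uniformly on the unit sphere.
        zc = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - zc * zc)
        x += b * s * math.cos(phi)
        y += b * s * math.sin(phi)
        z += b * zc
    return x * x + y * y + z * z

random.seed(0)
for n in (100, 400, 1600):
    mean_r2 = sum(freely_jointed_r2(n) for _ in range(2000)) / 2000
    print(n, round(mean_r2 / n, 3))  # <R^2>/(N b^2) stays close to 1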
Reptation is the thermal motion of very long, linear, entangled macromolecules in polymer melts or concentrated polymer solutions. Derived from the word reptile, reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. Sir Sam Edwards and Masao Doi later refined reptation theory. The consistent theory of thermal motion of polymers was given by Vladimir Pokrovskii. Similar phenomena also occur in proteins. Example model (simple random-walk, freely jointed) The study of long chain polymers has been a source of problems within the realms of statistical mechanics since about the 1950s. One of the reasons, however, that scientists were interested in their study is that the equations governing the behavior of a polymer chain were independent of the chain chemistry. What is more, the governing equation turns out to be a random walk, or diffusive walk, in space. Indeed, the Schrödinger equation is itself a diffusion equation in imaginary time, t' = it. Random walks in time The first example of a random walk is one in space, whereby a particle undergoes a random motion due to external forces in its surrounding medium. A typical example would be a pollen grain in a beaker of water. If one could somehow "dye" the path the pollen grain has taken, the path observed is defined as a random walk. Consider a toy problem of a train moving along a 1D track in the x-direction. Suppose that the train moves either a distance of +b or −b (b is the same for each step), depending on whether a coin lands heads or tails when flipped. Let's start by considering the statistics of the steps the toy train takes (where Si is the ith step taken): ⟨Si⟩ = 0, due to a priori equal probabilities, and ⟨Si Sj⟩ = b²δij. The second quantity is known as the correlation function. The delta is the Kronecker delta, which tells us that if the indices i and j are different, then the result is 0, but if i = j then the Kronecker delta is 1, so the correlation function returns a value of b². This makes sense, because if i = j then we are considering the same step. Rather trivially then it can be shown that the average displacement of the train on the x-axis is 0: the displacement is x = Σ(i=1 to N) Si, so ⟨x⟩ = Σ⟨Si⟩. As stated, ⟨Si⟩ = 0, so the sum is still 0. The same method can also be used to calculate the root mean square value of the displacement. The result of this calculation is x_rms = √⟨x²⟩ = b√N. From the diffusion equation it can be shown that the distance a diffusing particle moves in a medium is proportional to the root of the time the system has been diffusing for, where the proportionality constant is the root of the diffusion constant. The above relation, although cosmetically different, reveals similar physics, where N is simply the number of steps moved (and is loosely connected with time) and b is the characteristic step length. As a consequence we can consider diffusion as a random walk process. Random walks in space Random walks in space can be thought of as snapshots of the path taken by a random walker in time. One such example is the spatial configuration of long chain polymers.
There are two types of random walk in space: self-avoiding random walks, where the links of the polymer chain interact and do not overlap in space, and pure random walks, where the links of the polymer chain are non-interacting and links are free to lie on top of one another. The former type is most applicable to physical systems, but their solutions are harder to get at from first principles. By considering a freely jointed, non-interacting polymer chain, the end-to-end vector is R = Σ(i=1 to N) ri, where ri is the vector position of the i-th link in the chain. As a result of the central limit theorem, if N ≫ 1 then we expect a Gaussian distribution for the end-to-end vector. We can also make statements about the statistics of the links themselves: ⟨ri⟩ = 0, by the isotropy of space, and ⟨ri · rj⟩ = b²δij, since all the links in the chain are uncorrelated with one another. Using the statistics of the individual links, it is easily shown that ⟨R²⟩ = Nb². Notice this last result is the same as that found for random walks in time. Assuming, as stated, that the distribution of end-to-end vectors for a very large number of identical polymer chains is Gaussian, the probability distribution has the following form: P(R) = (3 / 2πNb²)^(3/2) exp(−3R² / 2Nb²). What use is this to us? Recall that according to the principle of equally likely a priori probabilities, the number of microstates, Ω, at some physical value is directly proportional to the probability distribution at that physical value, viz. Ω(R) = c P(R), where c is an arbitrary proportionality constant. Given our distribution function, there is a maximum corresponding to R = 0. Physically this amounts to there being more microstates which have an end-to-end vector of 0 than any other microstate. Now by considering the entropy S(R) = kB ln Ω(R) and the Helmholtz free energy F = −TS (the internal energy of this non-interacting model does not depend on conformation), it can be shown that F(R) = (3kBT / 2Nb²) R², up to a constant, which has the same form as the potential energy of a spring, obeying Hooke's law. This result is known as the entropic spring result and amounts to saying that upon stretching a polymer chain you are doing work on the system to drag it away from its (preferred) equilibrium state. An example of this is a common elastic band, composed of long chain (rubber) polymers. By stretching the elastic band you are doing work on the system and the band behaves like a conventional spring, except that unlike the case with a metal spring, all of the work done appears immediately as thermal energy, much as in the thermodynamically similar case of compressing an ideal gas in a piston. It might at first be astonishing that the work done in stretching the polymer chain can be related entirely to the change in entropy of the system as a result of the stretching. However, this is typical of systems that do not store any energy as potential energy, such as ideal gases. That such systems are entirely driven by entropy changes at a given temperature can be seen whenever they are allowed to do work on the surroundings (such as when an elastic band does work on the environment by contracting, or an ideal gas does work on the environment by expanding). Because the free energy change in such cases derives entirely from entropy change rather than internal (potential) energy conversion, in both cases the work done can be drawn entirely from thermal energy in the polymer, with 100% efficiency of conversion of thermal energy to work. In both the ideal gas and the polymer, this is made possible by a material entropy increase from contraction that makes up for the loss of entropy from absorption of the thermal energy, and cooling of the material. See also File dynamics Important publications in polymer physics
Polymer characterization Protein dynamics Reptation Soft matter Flory–Huggins solution theory Time–temperature superposition References External links Plastic & polymer formulations Statistical mechanics
Polymer physics
[ "Physics", "Chemistry", "Materials_science" ]
2,871
[ "Polymer physics", "Statistical mechanics", "Polymer chemistry" ]
176,244
https://en.wikipedia.org/wiki/Geoid
The geoid is the shape that the ocean surface would take under the influence of the gravity of Earth, including gravitational attraction and Earth's rotation, if other influences such as winds and tides were absent. This surface is extended through the continents (such as might be approximated with very narrow hypothetical canals). According to Carl Friedrich Gauss, who first described it, it is the "mathematical figure of the Earth", a smooth but irregular surface whose shape results from the uneven distribution of mass within and on the surface of Earth. It can be known only through extensive gravitational measurements and calculations. Despite being an important concept for almost 200 years in the history of geodesy and geophysics, it has been defined to high precision only since advances in satellite geodesy in the late 20th century. The geoid is often expressed as a geoid undulation or geoidal height above a given reference ellipsoid, which is a slightly flattened sphere whose equatorial bulge is caused by the planet's rotation. Generally the geoidal height rises where the Earth's material is locally more dense and exerts greater gravitational force than the surrounding areas. The geoid in turn serves as a reference coordinate surface for various vertical coordinates, such as orthometric heights, geopotential heights, and dynamic heights (see Geodesy#Heights). All points on a geoid surface have the same geopotential (the sum of gravitational potential energy and centrifugal potential energy). At this surface, apart from temporary tidal fluctuations, the force of gravity acts everywhere perpendicular to the geoid, meaning that plumb lines point perpendicular and bubble levels are parallel to the geoid. Being an equigeopotential means the geoid corresponds to the free surface of water at rest (if only the Earth's gravity and rotational acceleration were at work); this is also a sufficient condition for a ball to remain at rest instead of rolling over the geoid. Earth's gravity acceleration (the vertical derivative of geopotential) is thus non-uniform over the geoid. Description The geoid surface is irregular, unlike the reference ellipsoid (which is a mathematical idealized representation of the physical Earth as an ellipsoid), but is considerably smoother than Earth's physical surface. Although the "ground" of the Earth has excursions on the order of +8,800 m (Mount Everest) and −11,000 m (Marianas Trench), the geoid's deviation from an ellipsoid ranges from +85 m (Iceland) to −106 m (southern India), less than 200 m total. If the ocean were of constant density and undisturbed by tides, currents or weather, its surface would resemble the geoid. The permanent deviation between the geoid and mean sea level is called ocean surface topography. If the continental land masses were crisscrossed by a series of tunnels or canals, the sea level in those canals would also very nearly coincide with the geoid. Geodesists are able to derive the heights of continental points above the geoid by spirit leveling. Being an equipotential surface, the geoid is, by definition, a surface upon which the force of gravity is perpendicular everywhere, apart from temporary tidal fluctuations. This means that when traveling by ship, one does not notice the undulation of the geoid; neglecting tides, the local vertical (plumb line) is always perpendicular to the geoid and the local horizon tangential to it. Likewise, spirit levels will always be parallel to the geoid.
Simplified example Earth's gravitational field is not uniform. An oblate spheroid is typically used as the idealized Earth, but even if the Earth were spherical and did not rotate, the strength of gravity would not be the same everywhere because density varies throughout the planet. This is due to magma distributions, the density and weight of different geological compositions in the Earth's crust, mountain ranges, deep sea trenches, crust compaction due to glaciers, and so on. If that sphere were then covered in water, the water would not be the same height everywhere. Instead, the water level would be higher or lower with respect to Earth's center, depending on the integral of the strength of gravity from the center of the Earth to that location. The geoid level coincides with where the water would be. Generally the geoid rises where the Earth's material is locally more dense, exerts greater gravitational force, and pulls more water from the surrounding area. Formulation The geoid undulation (also known as geoid height or geoid anomaly), N, is the height of the geoid relative to a given ellipsoid of reference. The undulation is not standardized, as different countries use different mean sea levels as reference, but most commonly refers to the EGM96 geoid. In maps and common use, the height over the mean sea level (such as orthometric height, H) is used to indicate the height of elevations, while the ellipsoidal height, h, results from the GPS system and similar GNSS; the three are related by h = H + N. (An analogous relationship exists between normal heights and the quasigeoid, which disregards local density variations.) In practice, many handheld GPS receivers interpolate N in a pre-computed geoid map (a lookup table; see the sketch below). So a GPS receiver on a ship may, during the course of a long voyage, indicate height variations, even though the ship will always be at sea level (neglecting the effects of tides). That is because GPS satellites, orbiting about the center of gravity of the Earth, can measure heights only relative to a geocentric reference ellipsoid. To obtain one's orthometric height, a raw GPS reading must be corrected. Conversely, height determined by spirit leveling from a tide gauge, as in traditional land surveying, is closer to orthometric height. Modern GPS receivers have a grid implemented in their software by which they obtain, from the current position, the height of the geoid (e.g., the EGM96 geoid) over the World Geodetic System (WGS) ellipsoid. They are then able to correct the height above the WGS ellipsoid to the height above the EGM96 geoid. When height is not zero on a ship, the discrepancy is due to other factors such as ocean tides, atmospheric pressure (meteorological effects), local sea surface topography, and measurement uncertainties. Determination The undulation of the geoid N is closely related to the disturbing potential T according to Bruns' formula (named after Heinrich Bruns): N = T/γ, where γ is the force of normal gravity, computed from the normal field potential U. Another way of determining N is using values of the gravity anomaly Δg, the difference between true gravity and normal reference gravity, as per the Stokes formula (or Stokes' integral), published in 1849 by George Gabriel Stokes: N = (R / 4πγ₀) ∬_σ Δg S(ψ) dσ. The integral kernel S, called the Stokes function, was derived by Stokes in closed analytical form. Note that determining N anywhere on Earth by this formula requires Δg to be known everywhere on Earth, including oceans, polar areas, and deserts.
For terrestrial gravimetric measurements this is a near-impossibility, in spite of close international co-operation within the International Association of Geodesy (IAG), e.g., through the International Gravity Bureau (BGI, Bureau Gravimétrique International). Another approach for geoid determination is to combine multiple information sources: not just terrestrial gravimetry, but also satellite geodetic data on the figure of the Earth, from analysis of satellite orbital perturbations, and lately from satellite gravity missions such as GOCE and GRACE. In such combination solutions, the low-resolution part of the geoid solution is provided by the satellite data, while a 'tuned' version of the above Stokes equation is used to calculate the high-resolution part, from terrestrial gravimetric data from a neighbourhood of the evaluation point only. Calculating the undulation is mathematically challenging. The precise geoid solution by Petr Vaníček and co-workers improved on the Stokesian approach to geoid computation. Their solution enables millimetre-to-centimetre accuracy in geoid computation, an order-of-magnitude improvement over previous classical solutions. Geoid undulations display uncertainties which can be estimated using several methods, e.g., least-squares collocation (LSC), fuzzy logic, artificial neural networks, radial basis functions (RBF), and geostatistical techniques. The geostatistical approach has been found to perform best among these techniques for predicting geoid undulation. Relationship to mass density Variations in the height of the geoidal surface are related to anomalous density distributions within the Earth. Geoid measurements thus help in understanding the internal structure of the planet. Synthetic calculations show that the geoidal signature of a thickened crust (for example, in orogenic belts produced by continental collision) is positive, opposite to what should be expected if the thickening affects the entire lithosphere. Mantle convection also changes the shape of the geoid over time. The surface of the geoid is higher than the reference ellipsoid wherever there is a positive gravity anomaly or negative disturbing potential (mass excess) and lower than the reference ellipsoid wherever there is a negative gravity anomaly or positive disturbing potential (mass deficit). This relationship can be understood by recalling that gravity potential is defined so that it has negative values and is inversely proportional to distance from the body. So, while a mass excess will strengthen the gravity acceleration, it will decrease the gravity potential. As a consequence, the geoid's defining equipotential surface will be found displaced away from the mass excess. Analogously, a mass deficit will weaken the gravity pull but will increase the geopotential at a given distance, causing the geoid to move towards the mass deficit. The presence of a localized inclusion in the background medium will rotate the gravity acceleration vectors slightly towards or away from a denser or lighter body, respectively, causing a bump or dimple in the equipotential surface. The largest absolute deviation can be found in the Indian Ocean Geoid Low, 106 meters below the average sea level. Another large feature is the North Atlantic Geoid High (or North Atlantic Geoid Swell), caused in part by the weight of ice cover over North America and northern Europe in the Late Cenozoic Ice Age.
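Returning to the relation h = H + N from the Formulation section, the following minimal Python sketch (an added illustration; the grid layout and names are hypothetical, not taken from any real receiver firmware) shows how a GNSS-receiver-style lookup of a pre-computed geoid grid, with bilinear interpolation, converts an ellipsoidal height into an orthometric height:

GRID_SPACING_DEG = 1.0  # hypothetical 1-degree grid; real EGM-derived grids are finer

def bilinear_undulation(grid, lat, lon):
    """Bilinearly interpolate the geoid undulation N (metres) at (lat, lon).
    Assumes grid[i][j] holds N at latitude -90 + i and longitude -180 + j degrees."""
    i = (lat + 90.0) / GRID_SPACING_DEG
    j = (lon + 180.0) / GRID_SPACING_DEG
    i0, j0 = int(i), int(j)
    di, dj = i - i0, j - j0
    return ((1 - di) * (1 - dj) * grid[i0][j0]
            + (1 - di) * dj * grid[i0][j0 + 1]
            + di * (1 - dj) * grid[i0 + 1][j0]
            + di * dj * grid[i0 + 1][j0 + 1])

def orthometric_height(h_ellipsoidal, grid, lat, lon):
    # h = H + N, hence the orthometric height H = h - N.
    return h_ellipsoidal - bilinear_undulation(grid, lat, lon)

For example, a raw GNSS height of 80 m over a point where N = 47 m yields an orthometric height of 33 m.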
Temporal change
Recent satellite missions, such as the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and GRACE, have enabled the study of time-variable geoid signals. The first products based on GOCE satellite data became available online in June 2010, through the European Space Agency. ESA launched the satellite in March 2009 on a mission to map Earth's gravity with unprecedented accuracy and spatial resolution. On 31 March 2011, a new geoid model was unveiled at the Fourth International GOCE User Workshop hosted at the Technical University of Munich, Germany. Studies using the time-variable geoid computed from GRACE data have provided information on global hydrologic cycles, mass balances of ice sheets, and postglacial rebound. From postglacial rebound measurements, time-variable GRACE data can be used to deduce the viscosity of Earth's mantle.
Spherical harmonics representation
Spherical harmonics are often used to approximate the shape of the geoid. The best-known such sets of spherical harmonic coefficients are the Earth Gravitational Models, determined in international collaborative projects led by the National Imagery and Mapping Agency (now the National Geospatial-Intelligence Agency, or NGA); the latest planned release in the series is EGM2020 (Earth Gravitational Model 2020). The mathematical description of the non-rotating part of the potential function in such a model is
V = (GM/r) [1 + Σ_{n=2..n_max} (a/r)^n Σ_{m=0..n} P_nm(sin φ) (C_nm cos mλ + S_nm sin mλ)],
where φ and λ are geocentric (spherical) latitude and longitude respectively, P_nm are the fully normalized associated Legendre polynomials of degree n and order m, and C_nm and S_nm are the numerical coefficients of the model based on measured data. The above equation describes the Earth's gravitational potential V, not the geoid itself, at location (r, φ, λ), the co-ordinate r being the geocentric radius, i.e., distance from the Earth's centre. The geoid is a particular equipotential surface, and is somewhat involved to compute. The gradient of this potential also provides a model of the gravitational acceleration. The most commonly used EGM96 contains a full set of coefficients to degree and order 360 (i.e., n_max = 360), describing details in the global geoid as small as 55 km (or 110 km, depending on the definition of resolution). The number of coefficients, C_nm and S_nm, can be determined by first observing in the equation for V that for a specific value of n there are two coefficients for every value of m except for m = 0. There is only one coefficient when m = 0, since sin(0·λ) = 0 makes the S_n0 term irrelevant. There are thus 2n + 1 coefficients for every value of n. Using these facts and the formula Σ_{I=1..L} I = L(L + 1)/2, it follows that the total number of coefficients is given by
Σ_{n=2..n_max} (2n + 1) = n_max(n_max + 2) − 3 = 130,317,
using the EGM96 value of n_max = 360. For many applications, the complete series is unnecessarily complex and is truncated after a few (perhaps several dozen) terms. Still, even higher resolution models have been developed. Many of the authors of EGM96 have published EGM2008. It incorporates much of the new satellite gravity data (e.g., the Gravity Recovery and Climate Experiment), and supports up to degree and order 2160 (1/6 of a degree, requiring over 4 million coefficients), with additional coefficients extending to degree 2190 and order 2159. EGM2020 is the international follow-up that was originally scheduled for 2020 (still unreleased in 2024), containing the same number of harmonics generated with better data.
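The coefficient count worked out above is easy to verify programmatically. A minimal sketch (the function name is arbitrary):

```python
def num_coefficients(n_max: int) -> int:
    # 2n + 1 coefficients per degree n: two (C and S) for each order
    # m = 1..n, plus one (C only) for m = 0; summed over degrees 2..n_max.
    return sum(2 * n + 1 for n in range(2, n_max + 1))

print(num_coefficients(360))    # 130317, matching the EGM96 total above
print(num_coefficients(2160))   # 4669917, the "over 4 million" of EGM2008
```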
See also Deflection of the vertical Geodetic datum Geopotential International Terrestrial Reference Frame Physical geodesy Planetary geoid Areoid (Mars's geoid) Selenoid (Moon's geoid) References Further reading External links NGA webpage on Earth Gravitational Models NASA webpage on EGM96 NOAA webpage on Geoid Models International Centre for Global Earth Models (ICGEM) International Service for the Geoid (ISG) Gravimetry Geodesy Vertical datums Vertical position
Geoid
[ "Physics", "Mathematics" ]
2,931
[ "Vertical position", "Physical quantities", "Distance", "Applied mathematics", "Geodesy" ]
176,304
https://en.wikipedia.org/wiki/Enantiomer
In chemistry, an enantiomer (/ɪˈnænti.əmər, ɛ-, -oʊ-/ ih-NAN-tee-ə-mər), also known as an optical isomer, antipode, or optical antipode, is one of a pair of molecular entities which are mirror images of each other and non-superposable. Enantiomer molecules are like right and left hands: one cannot be superposed onto the other without first being converted to its mirror image. The distinction is solely one of chirality, the permanent three-dimensional relationships among the atoms of a molecule or other chemical structure: no amount of re-orientation of a molecule as a whole or conformational change converts one chemical into its enantiomer. Chemical structures with chirality rotate plane-polarized light. A mixture of equal amounts of each enantiomer, a racemic mixture or a racemate, does not rotate light. Stereoisomers include both enantiomers and diastereomers. Diastereomers, like enantiomers, share the same molecular formula and are also nonsuperposable onto each other; however, they are not mirror images of each other.
Naming conventions
There are three common naming conventions for specifying one of the two enantiomers (the absolute configuration) of a given chiral molecule: the R/S system is based on the geometry of the molecule; the (+)- and (−)- system (also written using the obsolete equivalents d- and l-) is based on its optical rotation properties; and the D/L system is based on the molecule's relationship to the enantiomers of glyceraldehyde. The R/S system is based on the molecule's geometry with respect to a chiral center. The configuration is assigned using the Cahn–Ingold–Prelog priority rules, in which the group or atom with the largest atomic number is assigned the highest priority and the group or atom with the smallest atomic number is assigned the lowest priority. The (+) or (−) symbol is used to specify a molecule's optical rotation, the direction in which the polarization of light rotates as it passes through a solution containing the molecule. When a molecule is denoted dextrorotatory, it rotates the plane of polarized light clockwise and can also be denoted as (+). When it is denoted as levorotatory, it rotates the plane of polarized light counterclockwise and can also be denoted as (−). The Latin words for left are laevus and sinister, and the word for right is dexter (or rectus in the sense of correct or virtuous). The English word right is a cognate of rectus. This is the origin of the D/L and R/S notations, and the employment of the prefixes levo- and dextro- in common names. The prefix ar- (a phonetic rendering of R, for rectus) is applied to the right-handed version; es- (a rendering of S, for sinister) to the left-handed molecule. Example: ketamine, arketamine, esketamine.
Chirality centers
The asymmetric atom is called a chirality center, a type of stereocenter. A chirality center is also called a chiral center or an asymmetric center. Some sources use the terms stereocenter, stereogenic center, stereogenic atom or stereogen to refer exclusively to a chirality center, while others use the terms more broadly to refer also to centers that result in diastereomers (stereoisomers that are not enantiomers). Compounds that contain exactly one (or any odd number) of asymmetric atoms are always chiral. However, compounds that contain an even number of asymmetric atoms sometimes lack chirality because they are arranged in mirror-symmetric pairs, and are known as meso compounds.
For instance, meso tartaric acid has two asymmetric carbon atoms, but it does not exhibit enantiomerism because there is a mirror symmetry plane. Conversely, there exist forms of chirality that do not require asymmetric atoms, such as axial, planar, and helical chirality. Even though a chiral molecule lacks reflection (Cs) and rotoreflection symmetries (S2n), it can have other molecular symmetries, and its symmetry is described by one of the chiral point groups: Cn, Dn, T, O, or I. For example, hydrogen peroxide is chiral and has C2 (two-fold rotational) symmetry. A common chiral case is the point group C1, meaning no symmetries, which is the case for lactic acid.
Examples
A well-known example of enantiomers with sharply different biological effects is the sedative thalidomide, which was sold in a number of countries around the world from 1957 until 1961. It was withdrawn from the market when it was found to cause birth defects. One enantiomer caused the desirable sedative effects, while the other, unavoidably present in equal quantities, caused birth defects. The herbicide mecoprop is a racemic mixture, with the (R)-(+)-enantiomer ("Mecoprop-P", "Duplosan KV") possessing the herbicidal activity. Another example is the antidepressant drugs escitalopram and citalopram. Citalopram is a racemate [1:1 mixture of (S)-citalopram and (R)-citalopram]; escitalopram [(S)-citalopram] is a pure enantiomer. The dosages for escitalopram are typically half of those for citalopram. Here, escitalopram is called a chiral switch of citalopram.
Chiral drugs
Enantiopure compounds consist of only one of the two enantiomers. Enantiopurity is of practical importance since such compositions can have improved therapeutic efficacy. The switch from a racemic drug to an enantiopure drug is called a chiral switch. In many cases, the enantiomers have distinct effects. One case is that of propoxyphene. The enantiomeric pair of propoxyphene is sold separately by Eli Lilly and Company: one of the partners is dextropropoxyphene, an analgesic agent (Darvon), and the other is levopropoxyphene, an effective antitussive (Novrad). The trade names themselves reflect the chemical mirror-image relationship: Novrad is Darvon spelled backwards. In other cases, there may be no clinical benefit to the patient. In some jurisdictions, single-enantiomer drugs are separately patentable from the racemic mixture. It is possible that only one of the enantiomers is active. Or, it may be that both are active, in which case separating the mixture has no objective benefits, but extends the drug's patentability.
Enantioselective preparations
In the absence of an effective enantiomeric environment (precursor, chiral catalyst, or kinetic resolution), separation of a racemic mixture into its enantiomeric components is impossible, although certain racemic mixtures spontaneously crystallize in the form of a racemic conglomerate, in which crystals of the enantiomers are physically segregated and may be separated mechanically. However, most racemates form crystals containing both enantiomers in a 1:1 ratio. In his pioneering work, Louis Pasteur was able to isolate the isomers of sodium ammonium tartrate because the individual enantiomers crystallize separately from solution. To be sure, equal amounts of the enantiomorphic crystals are produced, but the two kinds of crystals can be separated with tweezers. This behavior is unusual. A less common method is by enantiomer self-disproportionation.
The second strategy is asymmetric synthesis: the use of various techniques to prepare the desired compound in high enantiomeric excess. Techniques encompassed include the use of chiral starting materials (chiral pool synthesis), the use of chiral auxiliaries and chiral catalysts, and the application of asymmetric induction. The use of enzymes (biocatalysis) may also produce the desired compound. A third strategy is enantioconvergent synthesis, the synthesis of one enantiomer from a racemic precursor, utilizing both enantiomers: by making use of a chiral catalyst, both enantiomers of the reactant are converted to a single enantiomer of product. Enantiomers may not be isolable if there is an accessible pathway for racemization (interconversion between enantiomorphs to yield a racemic mixture) at a given temperature and timescale. For example, amines with three distinct substituents are chiral, but with few exceptions (e.g. substituted N-chloroaziridines), they rapidly undergo "umbrella inversion" at room temperature, leading to racemization. If the racemization is fast enough, the molecule can often be treated as an achiral, averaged structure.
Parity violation
For all intents and purposes, each enantiomer in a pair has the same energy. However, theoretical physics predicts that due to parity violation of the weak nuclear force (the only force in nature that can "tell left from right"), there is actually a minute difference in energy between enantiomers (on the order of 10⁻¹² eV, or 10⁻¹⁰ kJ/mol, or less) due to the weak neutral current mechanism. This difference in energy is far smaller than energy changes caused by even small changes in molecular conformation, and far too small to measure by current technology, and is therefore chemically inconsequential. In the sense used by particle physicists, the "true" enantiomer of a molecule, which has exactly the same mass-energy content as the original molecule, is a mirror image that is also built from antimatter (antiprotons, antineutrons, and positrons). Throughout this article, "enantiomer" is used only in the chemical sense of compounds of ordinary matter that are not superposable on their mirror image.
Quasi-enantiomers
Quasi-enantiomers are molecular species that are not strictly enantiomers, but behave as if they are. In quasi-enantiomers, the majority of the molecule is reflected, but an atom or group within the molecule is changed to a similar atom or group. Quasi-enantiomers can also be defined as molecules that have the potential to become enantiomers if an atom or group in the molecule is replaced. An example of quasi-enantiomers would be (S)-bromobutane and (R)-iodobutane. Under the usual conventions, the enantiomers of (S)-bromobutane and (R)-iodobutane would be (R)-bromobutane and (S)-iodobutane respectively. Quasi-enantiomers also produce quasi-racemates, which are similar to normal racemates (see Racemic mixture) in that they form an equal mixture of quasi-enantiomers. Though not considered actual enantiomers, the naming convention for quasi-enantiomers follows the same trend as for enantiomers when assigning (R) and (S) configurations, which are determined on a geometrical basis (see Cahn–Ingold–Prelog priority rules). Quasi-enantiomers have applications in parallel kinetic resolution.
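The R/S descriptors used throughout this article can also be assigned programmatically. Below is a minimal sketch assuming the open-source cheminformatics toolkit RDKit is installed; alanine is used as an arbitrary example, with the two mirror-image SMILES strings expected to come back labelled (S) and (R):

```python
# Minimal sketch using RDKit (assumed installed, e.g. via `pip install rdkit`)
# to locate chirality centers and assign CIP (R/S) descriptors.
from rdkit import Chem

examples = {
    "L-alanine": "C[C@@H](C(=O)O)N",  # expected (S)
    "D-alanine": "C[C@H](C(=O)O)N",   # expected (R)
}

for name, smi in examples.items():
    mol = Chem.MolFromSmiles(smi)
    # Returns (atom index, 'R'/'S') for every chirality center found.
    centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    print(name, centers)  # e.g. "L-alanine [(1, 'S')]"
```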
See also Chiral switch Crystal system Enantiopure drug Atropisomer Chirotechnology Chirality (physics) Diastereomer Dynamic stereochemistry Epimer Homochirality Molecular symmetry Stereochemistry Stereocenter References External links chemwiki:stereoisomerism Stereochemistry Isomerism
Enantiomer
[ "Physics", "Chemistry" ]
2,599
[ "Stereochemistry", "Space", "Isomerism", "nan", "Spacetime" ]
176,320
https://en.wikipedia.org/wiki/Virtual%20community
A virtual community is a social network of individuals who connect through specific social media, potentially crossing geographical and political boundaries in order to pursue mutual interests or goals. Some of the most pervasive virtual communities are online communities operating under social networking services. Howard Rheingold discussed virtual communities in his book, The Virtual Community, published in 1993. The book's discussion ranges from Rheingold's adventures on The WELL to computer-mediated communication, social groups, and information science. Technologies cited include Usenet, MUDs (Multi-User Dungeon) and their derivatives MUSHes and MOOs, Internet Relay Chat (IRC), chat rooms and electronic mailing lists. Rheingold also points out the potential benefits for personal psychological well-being, as well as for society at large, of belonging to a virtual community. Later research has likewise suggested that engagement at work positively influences engagement in virtual communities of practice. Virtual communities all encourage interaction, sometimes focused around a particular interest and sometimes around communication itself; some virtual communities do both. Community members are allowed to interact over a shared passion through various means: message boards, chat rooms, social networking World Wide Web sites, or virtual worlds. Members often become attached to the community, logging in and out of its sites throughout the day, a habit that can develop into an addiction.
Introduction
The traditional definition of a community is of a geographically circumscribed entity (neighborhoods, villages, etc.). Virtual communities are usually dispersed geographically, and therefore are not communities under the original definition. Some online communities are linked geographically, and are known as community websites. However, if one considers communities to simply possess boundaries of some sort between their members and non-members, then a virtual community is certainly a community. Virtual communities resemble real-life communities in the sense that they both provide support, information, friendship and acceptance between strangers. While in a virtual community space, users may be expected to feel a sense of belonging and a mutual attachment among the members that are in the space. One of the most influential aspects of virtual communities is the opportunity to communicate through several media platforms or networks; virtual communities have largely displaced channels people relied on before them, such as postal services, fax machines, and even the telephone. Early research into the existence of media-based communities was concerned with the nature of reality, that is, whether communities actually could exist through the media, which could place virtual community research within the social sciences' definition of ontology. In the seventeenth century, scholars associated with the Royal Society of London formed a community through the exchange of letters. "Community without propinquity", coined by urban planner Melvin Webber in 1963, and "community liberated", analyzed by Barry Wellman in 1979, began the modern era of thinking about non-local community. As well, Benedict Anderson's Imagined Communities in 1983 described how different technologies, such as national newspapers, contributed to the development of national and regional consciousness among early nation-states.
Some authors that built their theories on Anderson's imagined communities have been critical of the concept, claiming that all communities are based on communication and that the virtual/real dichotomy is disintegrating, making use of the word "virtual" problematic or even obsolete.
Purpose
Virtual communities are used for a variety of social and professional groups; interaction between community members varies from personal to purely formal. For example, an email distribution list could serve as a personal means of communicating with family and friends, and also formally to coordinate with coworkers.
User experience testing to determine social codes
User experience is the ultimate goal for the program or software used by an internet community, because user experience will determine the software's success. The software for social media pages or virtual communities is structured around the users' experience and designed specifically for online use. User experience testing is utilized to reveal something about the personal experience of the human being using a product or system. When it comes to testing user experience in a software interface, three main characteristics are needed: a user who is engaged, a user who is interacting with a product or interface, and a definition of the user's experience in terms that are observable or measurable. User experience metrics are based on reliability and repeatability, using a consistent set of measurements, such as user retention, so that the resulting data are comparable. The widespread use of the Internet and virtual communities by millions of diverse users for socializing is a phenomenon that raises new issues for researchers and developers. The vast number and diversity of individuals participating in virtual communities worldwide makes it a challenge to test usability across platforms to ensure the best overall user experience. Some well-established measures applied to the usability framework for online communities are speed of learning, productivity, user satisfaction, how much people remember about using the software, and how many errors they make. The human–computer interactions that are measured during a usability experience test focus on the individuals rather than their social interactions in the online community. The success of online communities depends on the integration of usability and social semiotics. Usability testing metrics can be used to determine social codes by evaluating a user's habits when interacting with a program. Social codes are established and reinforced by the regular repetition of behavioral patterns. People communicate their social identities or culture code through the work they do, the way they talk, the clothes they wear, their eating habits, domestic environments and possessions, and use of leisure time. The information provided during a usability test can determine demographic factors and help define the semiotic social code. Dialogue and social interactions, support information design, navigation support, and accessibility are integral components specific to online communities. As virtual communities grow, so does the diversity of their users, yet the technologies do not automatically become any more intuitive.
Usability tests can ensure users communicate effectively using social and semiotic codes while maintaining their social realities and identities. Efficient communication requires a common set of signs in the minds of those seeking to communicate. As technologies evolve and mature, they tend to be used by an increasingly diverse set of users, but this increasing complexity does not necessarily mean that the technologies become easier to use.
Effects
On health
Recent studies have looked into the development of health-related communities and their impact on those already suffering health issues. These forms of social networks allow for open conversation between individuals who are going through similar experiences, whether themselves or in their family. Such sites have so grown in popularity that now many health care providers form groups for their patients by providing web areas where one may direct questions to doctors. These sites prove especially useful when related to rare medical conditions. People with rare or debilitating disorders may not be able to access support groups in their physical community, thus online communities act as primary means for such support. Online health communities can serve as supportive outlets as they facilitate connecting with others who truly understand the disease, as well as offer more practical support, such as receiving help in adjusting to life with the disease. Each patient in an online health community is there for different reasons: some may need quick answers to questions they have, others someone to talk to. Involvement in social communities of similar health interests has created a means for patients to develop a better understanding of, and behavior towards, treatment and health practices. For users with very serious, life-threatening issues, this personal context can be especially helpful, as the issues involved are complex. Patients increasingly use such outlets because they provide personalized and emotional support as well as information, helping them have a better experience. The extent to which these practices have effects on health is still being studied. Studies on health networks have mostly been conducted on groups which typically suffer the most from extreme forms of diseases, for example cancer patients, HIV patients, or patients with other life-threatening diseases. It is general knowledge that one participates in online communities to interact with society and develop relationships. Individuals who suffer from rare or severe illnesses are unable to meet physically because of distance or because it could be a risk to their health to leave a secure environment. Thus, they have turned to the internet. Some studies have indicated that virtual communities can provide valuable benefits to their users. Online health-focused communities were shown to offer a unique form of emotional support that differed from event-based realities and informational support networks. Growing amounts of published material show how online communities affect the health of their users. The evidence suggests that the creation of health communities has a positive impact on those who are ill or in need of medical information.
On civic participation
Studies have found that young people are often bored by political and historical topics and more interested in celebrity news and drama. Young individuals claim that "voicing what you feel" does not mean "being heard", so they choose not to participate in these engagements, believing they are not being listened to anyway. Over the years, things have changed, as new forms of civic engagement and citizenship have emerged from the rise of social networking sites. Networking sites act as a medium for expression and discourse about issues in specific user communities. Online content-sharing sites have made it easy for youth as well as others to not only express themselves and their ideas through digital media, but also connect with large networked communities. Within these spaces, young people are pushing the boundaries of traditional forms of engagement such as voting and joining political organizations and creating their own ways to discuss, connect, and act in their communities. Civic engagement through online volunteering has been shown to have positive effects on personal satisfaction and development. Some 84 percent of online volunteers found that their online volunteering experience had contributed to their personal development and learning.
On communication
In his book The Wealth of Networks from 2006, Yochai Benkler suggests that virtual communities would "come to represent a new form of human communal existence, providing new scope for building a shared experience of human interaction". Although Benkler's prediction has not become entirely true, clearly communications and social relations are extremely complex within a virtual community. The two main effects that can be seen according to Benkler are a "thickening of preexisting relations with friends, family and neighbours" and the beginnings of the "emergence of greater scope for limited-purpose, loose relationships". Despite being acknowledged as "loose" relationships, Benkler argues that they remain meaningful. Previous concerns about the effects of Internet use on community and family fell into two categories: 1) sustained, intimate human relations "are critical to well-functioning human beings as a matter of psychological need" and 2) people with "social capital" are better off than those who lack it, for example achieving better results in terms of political participation. However, Benkler argues that unless Internet connections actually displace direct, unmediated, human contact, there is no basis to think that using the Internet will lead to a decline in those nourishing connections we need psychologically, or in the useful connections we make socially. Benkler continues to suggest that the nature of an individual changes over time, based on social practices and expectations. There is a shift from individuals who depend upon locally embedded, unmediated and stable social relationships to networked individuals who are more dependent upon their own combination of strong and weak ties across boundaries and weave their own fluid relationships. Manuel Castells calls this the "network society".
On identity
In 1997, MCI Communications released the "Anthem" advertisement, heralding the internet as a utopia without age, race, or gender. Lisa Nakamura argues in chapter 16 of her 2002 book After/image of identity: Gender, Technology, and Identity Politics, that technology gives us iterations of our age, race and gender in virtual spaces, as opposed to them being fully extinguished.
Nakamura uses a metaphor of "after-images" to describe the cultural phenomenon of expressing identity on the internet. The idea is that any performance of identity on the internet is simultaneously present and past-tense, "posthuman and projectionary", due to its immortality. Sherry Turkle, professor of Social Studies of Science and Technology at MIT, believes the internet is a place where actions of discrimination are less likely to occur. In her 1995 book Life on the Screen: Identity in the Age of the Internet, she argues that discrimination is easier in person, where whatever is contrary to one's norms is easier to identify at face value. The internet allows for a more fluid expression of identity, and thus people become more accepting of inconsistent personae within themselves and others. For these reasons, Turkle argues, users existing in online spaces are less compelled to judge or compare themselves to their peers, allowing people in virtual settings an opportunity to gain a greater capacity for acknowledging diversity. Nakamura argues against this view, coining the term identity tourism in her 1999 article "Race In/For Cyberspace: Identity Tourism and Racial Passing on the Internet". Identity tourism, in the context of cyberspace, is a term used to describe the phenomenon of users donning and doffing other-race and other-gender personae. Nakamura finds that performed behavior from these identity tourists often perpetuates stereotypes. In the 1998 book Communities in Cyberspace, the authors Marc A. Smith and Peter Kollock observe that interactions with strangers are shaped by judgments about whom we are speaking or interacting with. People use everything from clothes, voice, body language, gestures, and power to identify others, which plays a role in how they will speak or interact with them. Smith and Kollock note that online interaction strips away all of the face-to-face gestures and signs that people tend to show in front of one another. Although this makes identification difficult online, it also provides space to play with one's identity.
Gender
The gaming community is extremely vast and accessible to a wide variety of people. However, there are negative effects on the relationships "gamers" have with the medium when expressing gender identity. Adrienne Shaw notes in her 2012 article "Do you identify as a gamer? Gender, race, sexuality, and gamer identity" that gender, perhaps subconsciously, plays a large role in identifying oneself as a "gamer". According to Lisa Nakamura, representation in video games has become a problem, as players who do not fit the stereotype of the white teenage male gamer are underrepresented.
Types
Internet-based
The explosive diffusion of the Internet since the mid-1990s fostered the proliferation of virtual communities in the form of social networking services and online communities. Virtual communities may synthesize Web 2.0 technologies with the community, and therefore have been described as Community 2.0, although strong community bonds have been forged online since the early 1970s on timeshare systems like PLATO and later on Usenet. Online communities depend upon social interaction and exchange between users online. This interaction emphasizes the reciprocity element of the unwritten social contract between community members.
Internet message boards
An online message board is a forum where people can discuss thoughts or ideas on various topics or simply express an idea.
Users may choose which thread, or board of discussion, they would like to read or contribute to. A user will start a discussion by making a post. Other users who choose to respond can follow the discussion by adding their own posts to that thread at any time. Unlike in spoken conversations, message boards do not usually have instantaneous responses; users actively go to the website to check for responses. Anyone can register to participate in an online message board. People can choose to participate in the virtual community, even if or when they choose not to contribute their thoughts and ideas. Unlike chat rooms, at least in practice, message boards can accommodate an almost infinite number of users. Internet users' urges to talk to and reach out to strangers online are unlike those in real-life encounters, where people are hesitant and often unwilling to step in to help strangers. Studies have shown that people are more likely to intervene when they are the only one in a situation. With Internet message boards, users at their computers are alone, which might contribute to their willingness to reach out. Another possible explanation is that people can withdraw from a situation much more easily online than off. They can simply click exit or log off, whereas they would have to find a physical exit and deal with the repercussions of trying to leave a situation in real life. The lack of status that is presented with an online identity also might encourage people, because, if one chooses to keep it private, there is no associated label of gender, age, ethnicity or lifestyle.
Online chat rooms
Shortly after the rise of interest in message boards and forums, people started to want a way of communicating with their "communities" in real time. The downside to message boards was that people would have to wait until another user replied to their posting, which, with people all around the world in different time zones, could take a while. The development of online chat rooms allowed people to talk to whoever was online at the same time they were. This way, messages were sent and online users could immediately respond. The original development, CompuServe's CB service, hosted forty channels in which users could talk to one another in real time. The idea of forty different channels led to the idea of chat rooms that were specific to different topics. Users could choose to join an already existent chat room they found interesting, or start a new "room" if they found nothing to their liking. Real-time chatting was also brought into virtual games, where people could play against one another and also talk to one another through text. Now, chat rooms can be found on all sorts of topics, so that people can talk with others who share similar interests. Chat rooms are now provided by Internet Relay Chat (IRC) and other individual websites such as Yahoo, MSN, and AOL. Chat room users communicate through text-based messaging. Most chat room providers are similar and include an input box, a message window, and a participant list. The input box is where users can type their text-based message to be sent to the providing server. The server will then transmit the message to the computers of anyone in the chat room so that it can be displayed in the message window. The message window allows the conversation to be tracked and usually places a time stamp once the message is posted. There is usually a list of the users who are currently in the room, so that people can see who is in their virtual community.
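The message flow just described (a client submits text through an input box, the server stamps it and rebroadcasts it to everyone in the room, and a participant list tracks who is present) can be sketched in a few lines. The class and method names below are hypothetical, chosen only for illustration of the mechanics; a real service would of course add networking, authentication, and persistence:

```python
from datetime import datetime

class ChatRoom:
    """Toy model of the chat-room mechanics described above."""
    def __init__(self, topic: str):
        self.topic = topic
        self.participants = {}  # name -> list of received messages

    def join(self, name: str):
        self.participants[name] = []

    def leave(self, name: str):
        self.participants.pop(name, None)

    def post(self, sender: str, text: str):
        # The "server" stamps the message and transmits it to every
        # participant's message window, including the sender's.
        stamped = f"[{datetime.now():%H:%M:%S}] {sender}: {text}"
        for inbox in self.participants.values():
            inbox.append(stamped)

room = ChatRoom("gardening")
room.join("ana")
room.join("ben")
room.post("ana", "Anyone growing tomatoes this year?")
print(sorted(room.participants))      # the participant list
print(room.participants["ben"][-1])   # ben sees ana's time-stamped message
```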
Users can communicate as if they are speaking to one another in real life. This "simulated reality" attribute makes it easy for users to form a virtual community, because chat rooms allow users to get to know one another as if they were meeting in real life. The individual "room" feature also makes it more likely that the people within a chat room share a similar interest, an interest that allows them to bond with one another and be willing to form a friendship.
Virtual worlds
Virtual worlds are the most interactive of all virtual community forms. In this type of virtual community, people are connected by living as an avatar in a computer-based world. Users create their own avatar character (from choosing the avatar's outfits to designing the avatar's house) and control their character's life and interactions with other characters in the 3D virtual world. It is similar to a computer game; however, there is no objective for the players. A virtual world simply gives users the opportunity to build and operate a fantasy life in the virtual realm. Characters within the world can talk to one another and have almost the same interactions people would have in reality. For example, characters can socialize with one another and hold intimate relationships online. This type of virtual community allows for people to not only hold conversations with others in real time, but also to engage and interact with others. The avatars that users create are like humans. Users can choose to make avatars like themselves, or take on an entirely different personality than their own. When characters interact with other characters, they can get to know one another through text-based talking and virtual experience (such as having avatars go on a date in the virtual world). A virtual community chat room may give real-time conversations, but people can only talk to one another. In a virtual world, characters can do activities together, just like friends could do in reality. Communities in virtual worlds are most similar to real-life communities because the characters are physically in the same place, even if the users who are operating the characters are not. Second Life is one of the most popular virtual worlds on the Internet. Whyville offers an alternative for younger audiences where safety and privacy are a concern. In Whyville, players use the virtual world's simulation aspect to experiment and learn about various phenomena. Another use for virtual worlds has been in business communications. Benefits from virtual world technology such as photo-realistic avatars and positional sound create an atmosphere for participants that provides a less fatiguing sense of presence. Enterprise controls that allow the meeting host to dictate the permissions of the attendees, such as who can speak or who can move about, allow the host to control the meeting environment. Zoom is a popular platform that grew during the COVID-19 pandemic; those who host meetings on it can dictate who can or cannot speak by muting and unmuting participants, and can control who is able to join. Several companies are creating business-based virtual worlds, including Second Life. These business-based worlds have stricter controls and allow functionality such as muting individual participants, desktop sharing, or access lists to provide a highly interactive and controlled virtual world to a specific business or group.
Business-based virtual worlds also may provide various enterprise features such as single sign-on with third-party providers, or content encryption.
Social network services
Social networking services are the most prominent type of virtual community. They are either a website or software platform that focuses on creating and maintaining relationships. Facebook, Twitter, and Instagram are all virtual communities. With these sites, one often creates a profile or account, and adds friends or follows friends. This allows people to connect and look for support using the social networking service as a gathering place. These websites often allow for people to keep up to date with their friends and acquaintances' activities without making much of an effort. On several of these sites, one may also be able to video chat with several people at once, making the connections feel more as if everyone were together. On Facebook, for example, one can upload photos and videos, chat, make friends, reconnect with old ones, and join groups or causes.
Specialized information communities
Participatory culture plays a large role in online and virtual communities. In participatory culture, users feel that their contributions are important and that by contributing, they are forming meaningful connections with other users. The differences between being a producer of content on the website and being a consumer on the website become blurred and overlap. According to Henry Jenkins, "Members believe their contributions matter and feel some degree of social connection with one another" (Jenkins, et al. 2005). The exchange and consumption of information requires a degree of "digital literacy", such that users are able to "archive, annotate, appropriate, transform and recirculate media content" (Jenkins). Specialized information communities centralize a specific group of users who are all interested in the same topic. For example, TasteofHome.com, the website of the magazine Taste of Home, is a specialized information community that focuses on baking and cooking. The users contribute consumer information relating to their hobby and additionally participate in further specialized groups and forums. Specialized information communities are a place where people with similar interests can discuss and share their experiences and interests.
Howard Rheingold's study
Howard Rheingold's Virtual Community could be compared with Mark Granovetter's ground-breaking "strength of weak ties" article published twenty years earlier in the American Journal of Sociology. Rheingold translated, practiced and published Granovetter's conjectures about strong and weak ties in the online world. His comment on the first page even illustrates the social networks in the virtual society: "My seven year old daughter knows that her father congregates with a family of invisible friends who seem to gather in his computer. Sometimes he talks to them, even if nobody else can see them. And she knows that these invisible friends sometimes show up in the flesh, materializing from the next block or the other side of the world" (page 1). Indeed, in his revised version of Virtual Community, Rheingold goes so far as to say that had he read Barry Wellman's work earlier, he would have called his book "online social networks". Rheingold's definition contains the terms "social aggregation and personal relationships" (page 3).
Lipnack and Stamps (1997) and Mowshowitz (1997) point out how virtual communities can work across space, time and organizational boundaries; Lipnack and Stamps (1997) mention a common purpose; and Lee, Eom, Jung and Kim (2004) introduce "desocialization", meaning less frequent interaction with humans in traditional settings, e.g. an increase in virtual socialization. Calhoun (1991) presents a dystopian argument, asserting the impersonality of virtual networks. He argues that IT has a negative influence on offline interaction between individuals because virtual life takes over our lives. He believes that it also creates different personalities in people, which can cause frictions in offline and online communities and groups and in personal contacts (Wellman & Haythornthwaite, 2002). Recently, Mitch Parsell (2008) has suggested that virtual communities, particularly those that leverage Web 2.0 resources, can be pernicious by leading to attitude polarization, increased prejudices and enabling sick individuals to deliberately indulge in their diseases.
Advantages of Internet communities
Internet communities offer the advantage of instant information exchange that is not possible in a real-life community. This interaction allows people to engage in many activities from their home, such as shopping, paying bills, and searching for specific information. Users of online communities also have access to thousands of specific discussion groups where they can form specialized relationships and access information in such categories as politics, technical assistance, social activities, health (see above) and recreational pleasures. Virtual communities provide an ideal medium for these types of relationships because information can easily be posted and response times can be very fast. Another benefit is that these types of communities can give users a feeling of membership and belonging. Users can give and receive support, and it is simple and cheap to use. Economically, virtual communities can be commercially successful, making money through membership fees, subscriptions, usage fees, and advertising commission. Consumers generally feel very comfortable making transactions online provided that the seller has a good reputation throughout the community. Virtual communities also provide the advantage of disintermediation in commercial transactions, which eliminates vendors and connects buyers directly to suppliers. Disintermediation eliminates pricey mark-ups and allows for a more direct line of contact between the consumer and the manufacturer.
Disadvantages of Internet communities
While instant communication means fast access, it also means that information is posted without being reviewed for correctness. It is difficult to choose reliable sources because there is no editor who reviews each post and makes sure it is up to a certain degree of quality. In theory, online identities can be kept anonymous, which enables people to use the virtual community for fantasy role playing, as in the case of Second Life's use of avatars. Some professionals urge caution with users who use online communities because predators also frequent these communities looking for vulnerable victims, exposing users to risks such as online identity theft and online predation. There are also issues surrounding bullying in internet communities. With users not having to show their face, people may make threatening and discriminatory remarks toward other people because they feel that they will not face any consequences.
Standing issues of gender and race also persist in online communities, where only the majority tends to be represented on screen and those of other backgrounds and genders are underrepresented.
See also
Battleboarding Clan (video games) Commons-based peer production Community of practice Comparison of online dating services Cybersectarianism Dating search engine Digital altruism Dunbar's number Global village Human-based genetic algorithm Immersion (virtual reality) Internet activism Internet influences on communities Internet think tanks Learner-generated context List of social networking services List of virtual communities Mass collaboration Motivations of Wikipedia contributors Music community Network of practice Online community Online community manager Online deliberation Online ethnography Online research community Personal network Professional network service Social media Social web Support groups The Virtual Community Tribe (internet) Video game culture Virtual airline (hobby) Virtual community of practice Virtual volunteering Virtual world Metaverse Web of trust
Notes and references
Bibliography
Virtual reality Community building Social information processing Social software Internet Community
Virtual community
[ "Technology" ]
5,976
[ "Mobile content", "Internet", "Transport systems", "Social software" ]
176,332
https://en.wikipedia.org/wiki/Clownfish
Clownfish or anemonefish are fishes from the subfamily Amphiprioninae in the family Pomacentridae. Thirty species of clownfish are recognized: one in the genus Premnas and the rest in the genus Amphiprion. In the wild, they all form symbiotic mutualisms with sea anemones. Depending on the species, anemonefish are overall yellow, orange, or a reddish or blackish color, and many show white bars or patches. The largest can reach a length of about 17 cm (6.7 in), while the smallest barely achieve 7–8 cm (2.8–3.1 in).
Distribution and habitat
Anemonefish are endemic to the warmer waters of the Indian Ocean, including the Red Sea, and the Pacific Ocean, including the Great Barrier Reef, Southeast Asia, Japan, and the Indo-Malaysian region. While most species have restricted distributions, others are widespread. Anemonefish typically live at the bottom of shallow seas in sheltered reefs or in shallow lagoons. No anemonefish are found in the Atlantic.
Diet
Anemonefish are omnivorous and can feed on undigested food from their host anemones, and the fecal matter from the anemonefish provides nutrients to the sea anemone. Anemonefish primarily feed on small zooplankton from the water column, such as copepods and tunicate larvae, with a small portion of their diet coming from algae, with the exception of Amphiprion perideraion, which primarily feeds on algae.
Symbiosis and mutualism
Anemonefish and sea anemones have a symbiotic, mutualistic relationship, each providing many benefits to the other. The individual species are generally highly host specific. The sea anemone protects the anemonefish from predators, as well as providing food through the scraps left from the anemone's meals and occasional dead anemone tentacles, and functions as a safe nest site. In return, the anemonefish defends the anemone from its predators and parasites. The anemone also picks up nutrients from the anemonefish's excrement. The nitrogen excreted from anemonefish increases the number of algae incorporated into the tissue of their hosts, which aids the anemone in tissue growth and regeneration. The activity of the anemonefish results in greater water circulation around the sea anemone, and it has been suggested that their bright coloring might lure small fish to the anemone, which then catches them. Studies on anemonefish have found that they alter the flow of water around sea anemone tentacles by certain behaviors and movements such as "wedging" and "switching". Aeration of the host anemone tentacles allows for benefits to the metabolism of both partners, mainly by increasing anemone body size and both anemonefish and anemone respiration. Bleaching of the host anemone can occur when warm temperatures cause a reduction in algal symbionts within the anemone. Bleaching of the host can cause a short-term increase in the metabolic rate of resident anemonefish, probably as a result of acute stress. Over time, however, there appears to be a down-regulation of metabolism and a reduced growth rate for fish associated with bleached anemones. These effects may stem from reduced food availability (e.g. anemone waste products, symbiotic algae) for the anemonefish. Several theories are given about how they can survive the sea anemone venom: The mucus coating of the fish may be based on sugars rather than proteins. This would mean that anemones fail to recognize the fish as a potential food source and do not fire their nematocysts, or sting organelles.
The coevolution of certain species of anemonefish with specific anemone host species may have allowed the fish to evolve an immunity to the nematocysts and toxins of their hosts. Amphiprion percula may develop resistance to the toxin from Heteractis magnifica, but it is not totally protected, since it was shown experimentally to die when its skin, devoid of mucus, was exposed to the nematocysts of its host. Anemonefish are the best known example of fish that are able to live among the venomous sea anemone tentacles, but several others occur, including juvenile threespot dascyllus, certain cardinalfish (such as Banggai cardinalfish), the incognito (or anemone) goby, and juvenile painted greenling.
Reproduction
In a group of anemonefish, a strict dominance hierarchy exists. The largest and most aggressive female is found at the top. Only two anemonefish, a male and a female, in a group reproduce, through external fertilization. Anemonefish are protandrous sequential hermaphrodites, meaning they develop into males first, and when they mature, they become females. If the female anemonefish is removed from the group, such as by death, one of the largest and most dominant males becomes a female. The remaining males move up a rank in the hierarchy. Clownfish groups thus maintain a strict hierarchy, much as hyena clans do, except that rank is based on size and order of joining rather than sex. Anemonefish lay eggs on any flat surface close to their host anemones. In the wild, anemonefish spawn around the time of the full moon. Depending on the species, they can lay hundreds or thousands of eggs. The male parent guards the eggs until they hatch about 6–10 days later, typically two hours after dusk.
Parental investment
Anemonefish colonies usually consist of the reproductive male and female and a few male juveniles, which help tend the colony. Although multiple males cohabit an environment with a single female, polygamy does not occur and only the adult pair exhibits reproductive behavior. However, if the female dies, the social hierarchy shifts with the breeding male exhibiting protandrous sex reversal to become the breeding female. The largest juvenile then becomes the new breeding male after a period of rapid growth. The existence of protandry in anemonefish may rest on the case that nonbreeders modulate their phenotype in a way that causes breeders to tolerate them. This strategy prevents conflict by reducing competition between males for one female. For example, by purposefully modifying their growth rate to remain small and submissive, the juveniles in a colony present no threat to the fitness of the adult male, thereby protecting themselves from being evicted by the dominant fish. The reproductive cycle of anemonefish is often correlated with the lunar cycle. Rates of spawning for anemonefish peak around the first and third quarters of the moon. The timing of this spawn means that the eggs hatch around the full moon or new moon periods. One explanation for this lunar clock is that spring tides produce the highest tides during full or new moons. Nocturnal hatching during high tide may reduce predation by allowing for a greater capacity for escape. Namely, the stronger currents and greater water volume during high tide protect the hatchlings by effectively sweeping them to safety. Before spawning, anemonefish exhibit increased rates of anemone and substrate biting, which help prepare and clean the nest for the spawn. Before making the clutch, the parents often clear an oval-shaped clutch site varying in diameter for the spawn.
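The succession rule described above (a strict size-ordered queue in which the breeding female's death triggers sex reversal in the breeding male and promotes the largest juvenile) is simple enough to express as a toy model. The sketch below is purely illustrative; the class and sizes are invented, and no biological parameters are implied:

```python
class AnemoneColony:
    """Toy model of the size-ordered clownfish hierarchy described above."""
    def __init__(self, sizes_cm):
        # Rank 0 is the breeding female, rank 1 the breeding male,
        # the rest are non-breeding juveniles in descending size order.
        self.queue = sorted(sizes_cm, reverse=True)

    def female_dies(self):
        # The breeding male changes sex to become the new breeding
        # female; every remaining fish moves up one rank.
        self.queue.pop(0)

    def roles(self):
        labels = ["breeding female", "breeding male"]
        return [(size, labels[i] if i < 2 else "juvenile")
                for i, size in enumerate(self.queue)]

colony = AnemoneColony([11.0, 8.5, 6.0, 4.2])
colony.female_dies()
print(colony.roles())
# [(8.5, 'breeding female'), (6.0, 'breeding male'), (4.2, 'juvenile')]
```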
Fecundity, or reproductive rate, of the females usually ranges from 600 to 1,500 eggs depending on her size. In contrast to most animal species, the female only occasionally takes responsibility for the eggs, with males expending most of the time and effort. Male anemonefish care for their eggs by fanning and guarding them for 6 to 10 days until they hatch. In general, eggs develop more rapidly in a clutch when males fan properly, and fanning represents a crucial mechanism for successfully developing eggs. This suggests that males can control the success of hatching an egg clutch by investing different amounts of time and energy toward the eggs. For example, a male could choose to fan less in times of scarcity or fan more in times of abundance. Furthermore, males display increased alertness when guarding more valuable broods, or eggs in which paternity is guaranteed. Females, though, display generally less preference for parental behavior than males. All these suggest that males have increased parental investment towards eggs compared to females. Clownfish hatchlings undergo development after hatching in regards to both their body size and fins. If maintained at suitable temperatures, clownfish undergo proper development of their fins. Fin development follows the order pectorals < caudal < dorsal = anal < pelvic. The early larval stage is crucial to ensure a healthy progression of growth.
Taxonomy
Historically, anemonefish have been identified by morphological features and color pattern in the field, while in a laboratory, other features such as scalation of the head, tooth shape, and body proportions are used. These features have been used to group species into six complexes: percula, tomato, skunk, clarkii, saddleback, and maroon. Each of the fish in these complexes has a similar appearance. Genetic analysis has shown that these complexes are not monophyletic groups, particularly the 11 species in the A. clarkii group, where only A. clarkii and A. tricinctus are in the same clade, with seven species (A. allardi, A. bicinctus, A. chagosensis, A. chrysogaster, A. fuscocaudatus, A. latifasciatus, and A. omanensis) being in an Indian clade, A. chrysopterus having a monospecific lineage, and A. akindynos being in the Australian clade with A. mccullochi. Other significant differences are that A. latezonatus also has a monospecific lineage, and A. nigripes is in the Indian clade rather than with A. akallopisos, the skunk anemonefish. A. latezonatus is more closely related to A. percula and Premnas biaculeatus than to the saddleback fish with which it was previously grouped. Obligate mutualism was thought to be the key innovation that allowed anemonefish to radiate rapidly, with rapid and convergent morphological changes correlated with the ecological niches offered by the host anemones. The complexity of mitochondrial DNA structure shown by genetic analysis of the Australian clade suggested evolutionary connectivity among samples of A. akindynos and A. mccullochi that the authors theorize was the result of historical hybridization and introgression in the evolutionary past. The two evolutionary groups had individuals of both species detected, thus the species lacked reciprocal monophyly. No shared haplotypes were found between species.
Species Morphological diversity by complex In the aquarium Anemonefish make up approximately 43% of the global marine ornamental trade, and approximately 25% of the global trade comes from fish bred in captivity, while the majority is captured from the wild, contributing to decreased densities in exploited areas. Public aquaria and captive-breeding programs are essential to sustain their trade as marine ornamentals, and captive breeding has recently become economically feasible. The anemonefish is one of a handful of marine ornamentals whose complete lifecycle has been closed in captivity. Members of some anemonefish species, such as the maroon clownfish, become aggressive in captivity; others, like the false percula clownfish, can be kept successfully with other individuals of the same species. When a sea anemone is not available in an aquarium, the anemonefish may settle in some varieties of soft corals, or large polyp stony corals. Once an anemone or coral has been adopted, the anemonefish will defend it. Anemonefish, however, are not obligately tied to hosts, and can survive alone in captivity. Clownfish sold from captivity nevertheless make up only a small share (about 10%) of the total trade in these fishes. Designer clownfish, varieties of A. ocellaris, are much costlier, and obtaining them has disrupted coral reefs. Their allure, color, and patterning have made them an attractive target for wild collection. In popular culture In Disney Pixar's 2003 film Finding Nemo and its 2016 sequel Finding Dory, the main characters Nemo, his father Marlin, and his mother Coral are clownfish of the species A. ocellaris. The popularity of anemonefish for aquaria increased following the film's release; it is the first film associated with an increase in the numbers of those captured in the wild. Notes References Further reading External links Photo Gallery of Amphiprion ocellaris and their eggs Monterey Bay Aquarium: Video and information Clown Fish underwater photography gallery Pomacentridae Symbiosis Articles containing video clips Ray-finned fish subfamilies Fish of Saudi Arabia
Clownfish
[ "Biology" ]
2,705
[ "Biological interactions", "Behavior", "Symbiosis" ]
176,349
https://en.wikipedia.org/wiki/Tom%20Van%20Flandern
Thomas Charles Van Flandern (June 26, 1940 – January 9, 2009) was an American astronomer and author who specialized in celestial mechanics. Van Flandern had a career as a professional scientist but was noted as an outspoken proponent of certain fringe views in astronomy, physics, and extraterrestrial life. He also published the non-mainstream Meta Research Bulletin. Biography Tom Van Flandern was the first child of Robert F. Van Flandern, a police officer, and Anna Mary Haley. His father left the family when Tom was 5. His mother died when he was 16; he and his siblings then lived with their grandmother, Margery Jobe, until he went to college. He graduated from Saint Ignatius High School in Cleveland. While there, he helped start the Cleveland branch of Operation Moonwatch, an amateur science program initiated by the Smithsonian Astrophysical Observatory to track satellites. He also helped found a Moonwatchers team while studying at Xavier University; this team broke a tracking record in 1961. Van Flandern graduated from Xavier University with a B.S. in mathematics (cum laude) in 1962 and was awarded a teaching fellowship at Georgetown University. He attended Yale University on a scholarship sponsored by the U.S. Naval Observatory (USNO), joining USNO in 1963. In 1969, he received a Ph.D. in astronomy from Yale after completing his dissertation on lunar occultations. Van Flandern worked at the USNO until 1983, first becoming Chief of the Research Branch and later becoming Chief of the Celestial Mechanics Branch of the Nautical Almanac Office. His espousal of highly non-mainstream beliefs, particularly the exploded planet hypothesis, eventually led to his separation from the USNO. He later said, "This forced me to the 'fringes,' areas of astronomy not accepted as credible by experts of the field". Following his separation from the USNO, Van Flandern started a business organizing eclipse-viewing expeditions and promoting his non-mainstream views in a newsletter and website. Shortly after he died in 2009, the asteroid 52266 Van Flandern was named in his honor because of his prediction and analysis of lunar occultations at the U.S. Naval Observatory and his publication of papers on the dynamics of binary minor planets. He married Barbara Ann Weber (1942–2018) in 1963 in Kentucky, and they had three sons, Michael, Brian, and Kevin, and a daughter, Connie. The couple moved to Sequim, Washington, from the East Coast in 2005 to be closer to their children and grandchildren. Tom Van Flandern died of colon cancer in Seattle, Washington. Mainstream scientific work During the mid-1970s, Van Flandern believed that lunar observations gave evidence of variation in Newton's gravitational constant (G), consistent with a speculative idea that had been put forward by Paul Dirac. In 1974, his essay "A Determination of the Rate of Change of G" was awarded second place by the Gravity Research Foundation. However, in later years, with new data available, Van Flandern himself admitted his findings were flawed and contradicted by more accurate findings based on radio measurements with the Viking landers. Van Flandern and Henry Fliegel developed a compact algorithm to calculate a Julian date from a Gregorian date that would fit on a single IBM card. They described it in a 1968 letter to the editor of Communications of the ACM, and the formula subsequently saw use in business applications. With Kenneth Pulkkinen, he published "Low precision formulae for planetary positions" in the Astrophysical Journal Supplement in 1979.
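The published conversion is a single integer expression; below is a minimal Python transcription of it (the function name and the test are ours). The helper reproduces Fortran's integer division, which truncates toward zero; Python's // operator floors instead and would mishandle the negative intermediate term that arises for January and February:

```python
def gregorian_to_jdn(year: int, month: int, day: int) -> int:
    """Julian day number from a Gregorian date, per Fliegel & Van Flandern (1968)."""
    def idiv(a: int, b: int) -> int:
        return int(a / b)  # Fortran-style integer division: truncate toward zero

    i, j, k = year, month, day
    return (k - 32075
            + idiv(1461 * (i + 4800 + idiv(j - 14, 12)), 4)
            + idiv(367 * (j - 2 - idiv(j - 14, 12) * 12), 12)
            - idiv(3 * idiv(i + 4900 + idiv(j - 14, 12), 100), 4))

assert gregorian_to_jdn(2000, 1, 1) == 2451545  # Julian day number of 2000-01-01
```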
The 1979 paper set a record for the number of reprints requested from that journal. Following claims by David Dunham in 1978 to have detected satellites of some asteroids (notably 532 Herculina) by examining the light patterns during stellar occultations, Van Flandern and others began to report similar observations. His non-mainstream 1978 prediction that some asteroids have natural satellites, which was almost universally rejected at the time, was later proven correct when the Galileo spacecraft photographed Dactyl, a satellite of 243 Ida, during its flyby in 1993. Non-mainstream science and beliefs Van Flandern described in his 1993 book Dark Matter, Missing Planets, New Comets how he had become increasingly dissatisfied with the mainstream views of science by the early 1980s. He wrote: "Events in my life caused me to start questioning my goals and the correctness of everything I had learned. In matters of religion, medicine, biology, physics, and other fields, I came to discover that reality differed seriously from what I had been taught." In his book, on blogs, and in lectures, newsletters, and websites, Van Flandern focused on problems in cosmology and physics. He alleged that when experimental evidence is incompatible with mainstream scientific theories, mainstream scientists refuse to acknowledge this to avoid jeopardizing their funding. Exploding planets In 1976, while Van Flandern worked for the USNO, he began to promote the belief that major planets sometimes explode. Van Flandern also speculated that the origin of the human species may well have been on the planet Mars, which he believed was once a moon of a now-exploded "Planet V". Le Sage's theory of gravitation and the speed of gravity Van Flandern supported Georges-Louis Le Sage's theory of gravitation, according to which gravity is the result of a flux of invisible "ultra-mundane corpuscles" impinging on all objects from all directions at superluminal speeds. He gave public lectures in which he claimed that these particles could be used as a limitless source of free energy and to provide superluminal propulsion for spacecraft. In 1998, Van Flandern wrote a paper asserting that astronomical observations imply that gravity propagates at least twenty billion times faster than light, or even infinitely fast. Gerald E. Marsh, Charles Nissim-Sabat, and Steve Carlip demonstrated that Van Flandern's argument was fallacious. Face on Mars Van Flandern was a prominent advocate of the belief that certain geological features seen on Mars, especially the "face at Cydonia", are not of natural origin but were produced by intelligent extraterrestrial life, probably the inhabitants of a major planet once located where the asteroid belt presently exists, and which Van Flandern believed had exploded 3.2 million years ago. The claimed artificiality of the "face" was also the topic of a chapter of his 1993 book. Rejection of Big Bang cosmology Van Flandern was a vocal opponent of the Big Bang model in cosmology and believed in a static universe instead. In 2008, he organized the "Crisis in Cosmology" conference, a gathering of individuals who opposed the Big Bang cosmological models. References External links Archived: Biography at Meta Research site Archived: Salon story about relativity dissidents including Van Flandern 1940 births 2009 deaths American astronomers Pseudoscientific physicists Relativity critics Yale University alumni
Tom Van Flandern
[ "Physics" ]
1,424
[ "Relativity critics", "Theory of relativity" ]
176,354
https://en.wikipedia.org/wiki/Mineral%20wool
Mineral wool is any fibrous material formed by spinning or drawing molten mineral or rock materials such as slag and ceramics. Applications of mineral wool include thermal insulation (as both structural insulation and pipe insulation), filtration, soundproofing, and use as a hydroponic growth medium. Naming Mineral wool is also known as mineral cotton, mineral fiber, man-made mineral fiber (MMMF), and man-made vitreous fiber (MMVF). Specific mineral wool products are stone wool and slag wool. In Europe, the term also includes glass wool, which, together with ceramic fiber, is an entirely artificial fiber that can be made into different shapes and is spiky to the touch. History Slag wool was first made in 1840 in Wales by Edward Parry, "but no effort appears to have been made to confine the wool after production; consequently it floated about the works with the slightest breeze, and became so injurious to the men that the process had to be abandoned". A method of making mineral wool was patented in the United States in 1870 by John Player, and the wool was first produced commercially in 1871 at Georgsmarienhütte in Osnabrück, Germany. The process involved blowing a strong stream of air across a falling flow of liquid iron slag, similar to the natural process in which strong winds blow apart molten slag during an eruption of Kilauea, producing the fine strands known as Pele's hair. According to a mineral wool manufacturer, the first mineral wool intended for high-temperature applications was invented in the United States in 1942 but was not commercially viable until approximately 1953. More forms of mineral wool became available in the 1970s and 1980s. High-temperature mineral wool High-temperature mineral wool is a type of mineral wool created for use as high-temperature insulation, generally defined as being resistant to temperatures above 1,000 °C. This type of insulation is usually used in industrial furnaces and foundries. Because high-temperature mineral wool is costly to produce and has limited availability, it is almost exclusively used in high-temperature industrial applications and processes. Definitions Classification temperature is the temperature at which a certain amount of linear contraction (usually two to four percent) is not exceeded after a 24-hour heat treatment in an electrically heated laboratory oven in a neutral atmosphere. Depending on the type of product, the value may not exceed two percent for boards and shaped products and four percent for mats and papers. The classification temperature is specified in 50 °C steps, starting at 850 °C and going up to 1600 °C. The classification temperature does not mean that the product can be used continuously at this temperature. In the field, the continuous application temperature of amorphous high-temperature mineral wool (AES and ASW) is typically 100 °C to 150 °C below the classification temperature. Products made of polycrystalline wool can generally be used up to the classification temperature. Types There are several types of high-temperature mineral wool, made from different types of minerals. The mineral chosen results in different material properties and classification temperatures. Alkaline earth silicate wool (AES wool) AES wool consists of amorphous glass fibers that are produced by melting a combination of calcium oxide (CaO), magnesium oxide (MgO), and silicon dioxide (SiO2). Products made from AES wool are generally used in equipment that operates continuously and in domestic appliances.
Some formulations of AES wool are biosoluble, meaning they dissolve in bodily fluids within a few weeks and are quickly cleared from the lungs. Alumino silicate wool (ASW) Alumino silicate wool, also known as refractory ceramic fiber (RCF), consists of amorphous fibers produced by melting a combination of aluminum oxide (Al2O3) and silicon dioxide (SiO2), usually in a 50:50 weight ratio (see also VDI 3469 Parts 1 and 5, as well as TRGS 521). Products made of alumino silicate wool are generally used at application temperatures of greater than 900 °C, for equipment that operates intermittently, and in critical application conditions (see Technical Rules TRGS 619). Polycrystalline wool (PCW) Polycrystalline wool consists of fibers that contain aluminum oxide (Al2O3) at greater than 70 percent of the total materials and is produced by the sol–gel method from aqueous spinning solutions. The water-soluble green fibers obtained as a precursor are crystallized by means of heat treatment. Polycrystalline wool is generally used at application temperatures greater than 1300 °C and in critical chemical and physical application conditions. Kaowool Kaowool is a type of high-temperature mineral wool made from the mineral kaolin. It was one of the first types of high-temperature mineral wool invented and has been used into the 21st century. It can withstand temperatures close to . Manufacture Stone wool is a furnace product of molten rock at a temperature of about 1600 °C, through which a stream of air or steam is blown. More advanced production techniques are based on spinning molten rock in high-speed spinning heads, somewhat like the process used to produce cotton candy. The final product is a mass of fine, intertwined fibers with a typical diameter of 2 to 6 micrometers. Mineral wool may contain a binder, often a terpolymer, and an oil to reduce dusting. Use Though the individual fibers conduct heat very well, when pressed into rolls and sheets their ability to partition air makes them excellent insulators and sound absorbers. Though not immune to the effects of a sufficiently hot fire, fiberglass, stone wool, and ceramic fibers are fire-resistant enough to be common building materials where passive fire protection is required; they are used as spray fireproofing, in stud cavities in drywall assemblies, and as packing materials in firestops. Other uses are in resin-bonded panels, as filler in compounds for gaskets, in brake pads, in plastics in the automotive industry, as a filtering medium, and as a growth medium in hydroponics. Mineral fibers are also produced in the same way without binder; the fiber as such is used as a raw material for its reinforcing properties in various applications, such as friction materials, gaskets, plastics, and coatings. Hydroponics Mineral wool products can be engineered to hold large quantities of water and air that aid root growth and nutrient uptake in hydroponics; their fibrous nature also provides a good mechanical structure to hold the plant stable. The naturally high pH of mineral wool makes it initially unsuitable for plant growth and requires "conditioning" to produce a wool with an appropriate, stable pH. Conditioning methods include pre-soaking the mineral wool in a nutrient solution adjusted to pH 5.5 until it stops bubbling. High-temperature mineral wool High-temperature mineral wool is used primarily for the insulation and lining of industrial furnaces and foundries, to improve efficiency and safety. It is also used to prevent the spread of fire.
The use of high-temperature mineral wool enables a more lightweight construction of industrial furnaces and other technical equipment than alternatives such as fire bricks, due to its high heat resistance per unit weight, but it has the disadvantage of being more expensive than those alternatives. Safety of material The International Agency for Research on Cancer (IARC) reviewed the carcinogenicity of man-made mineral fibers in October 2002. The IARC Monographs working group concluded that only the more biopersistent materials remain classified by IARC as "possibly carcinogenic to humans" (Group 2B). These include refractory ceramic fibers, which are used industrially as insulation in high-temperature environments such as blast furnaces, and certain special-purpose glass wools not used as insulating materials. In contrast, the more commonly used vitreous fiber wools produced since 2000, including insulation glass wool, stone wool, and slag wool, are considered "not classifiable as to carcinogenicity in humans" (Group 3). Highly biosoluble fibers are now produced that do not damage human cells. These newer materials have been tested for carcinogenicity and most are found to be noncarcinogenic. IARC elected not to make an overall evaluation of the newly developed fibers designed to be less biopersistent, such as the alkaline earth silicate or high-alumina, low-silica wools. This decision was made in part because no human data were available, although such fibers that have been tested appear to have low carcinogenic potential in experimental animals, and because the working group had difficulty in categorizing these fibers into meaningful groups based on chemical composition. The European Regulation (CE) No 1272/2008 on classification, labelling and packaging of substances and mixtures, updated by Regulation (CE) No 790/2009, does not classify mineral wool fibers as a dangerous substance if they fulfil the criteria defined in its Note Q. The European Certification Board for mineral wool products (EUCEB) certifies mineral wool products made of fibers fulfilling Note Q, ensuring that they have low biopersistence and are quickly removed from the lung. The certification is based on independent experts' advice and regular control of the chemical composition. Due to the mechanical effect of the fibers, mineral wool products may cause temporary skin itching. To diminish this and to avoid unnecessary exposure to mineral wool dust, information on good practices is available on the packaging of mineral wool products in the form of pictograms or sentences. Safe Use Instruction Sheets, similar to safety data sheets, are also available from each producer. People can be exposed to mineral wool fibers in the workplace by inhalation, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for mineral wool fiber exposure in the workplace at 15 mg/m³ total exposure and 5 mg/m³ respirable exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 5 mg/m³ total exposure and 3 fibers per cm³ over an 8-hour workday. Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) is a European Union regulation of 18 December 2006. REACH addresses the production and use of chemical substances, and their potential impacts on both human health and the environment.
A Substance Information Exchange Forum (SIEF) has been set up for several types of mineral wool. AES, ASW and PCW were registered before the first deadline of 1 December 2010 and can, therefore, be used on the European market. ASW/RCF is classified as a category 1B carcinogen. AES is exempted from carcinogen classification on the basis of a short-term in vitro study result. PCW wools are not classified; self-classification led to the conclusion that PCW is not hazardous. On 13 January 2010, some of the aluminosilicate refractory ceramic fibers and zirconia aluminosilicate refractory ceramic fibers were included in the candidate list of Substances of Very High Concern. In response to concerns raised about the definition and the dossier, two additional dossiers were posted on the ECHA website for consultation, resulting in two additional entries on the candidate list. This situation, with four entries for one substance or group of substances, is contrary to the intended REACH procedure. Aside from this, the concerns raised during the two consultation periods remain valid. Regardless of the concerns raised, the inclusion of a substance in the candidate list immediately triggers the following legal obligations for manufacturers, importers and suppliers of articles containing that substance in a concentration above 0.1% (w/w): notification to ECHA (REACH Regulation Art. 7); provision of a safety data sheet (REACH Regulation Art. 31.1); and the duty to communicate safe-use information or respond to customer requests (REACH Regulation Art. 33). Crystalline silica Amorphous high-temperature mineral wool (AES and ASW) is produced from a molten glass stream which is aerosolized by a jet of high-pressure air or by letting the stream impinge onto spinning wheels. The droplets are drawn into fibers; the mass of both fibers and remaining droplets cools so rapidly that no crystalline phases can form. When amorphous high-temperature mineral wool is installed and used in high-temperature applications such as industrial furnaces, at least one face may be exposed to conditions causing the fibers to partially devitrify. Depending on the chemical composition of the glassy fiber and the time and temperature to which the materials are exposed, different stable crystalline phases may form. In after-use high-temperature mineral wool, crystalline silica crystals are embedded in a matrix composed of other crystals and glasses. Experimental results on the biological activity of after-use high-temperature mineral wool have not demonstrated any hazardous activity that could be related to any form of silica it may contain. Substitutes for mineral wool in construction Due to mineral wool's non-degradability and potential health risks, substitute materials are being developed: hemp, flax, wool, wood, and cork insulations are the most prominent. Biodegradability and a better health profile are the main advantages of these materials. Their drawbacks compared to mineral wool are their substantially lower mold resistance, higher combustibility, and slightly higher thermal conductivity (hemp insulation: about 0.040 W/(m·K); mineral wool insulation: 0.030–0.045 W/(m·K)).
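To make the conductivity comparison concrete: the thermal resistance of an insulation layer is R = d/λ and the steady-state heat flux through it is q = ΔT/R. A minimal sketch follows; the 100 mm layer thickness and 20 K temperature difference are assumed illustrative values, with λ taken from the figures just quoted:

```python
def r_value(thickness_m: float, conductivity_w_per_mk: float) -> float:
    """Thermal resistance R = d / lambda of an insulation layer, in m²·K/W."""
    return thickness_m / conductivity_w_per_mk

# lambda values from the comparison above, in W/(m·K):
r_wool = r_value(0.100, 0.035)   # mid-range mineral wool: R ≈ 2.86 m²·K/W
r_hemp = r_value(0.100, 0.040)   # hemp: R = 2.50 m²·K/W

delta_t = 20.0                   # K, assumed indoor/outdoor temperature difference
q_wool = delta_t / r_wool        # ≈ 7.0 W/m² lost through the mineral wool layer
q_hemp = delta_t / r_hemp        # = 8.0 W/m² lost through the hemp layer
```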
See also Asbestos, a mineral that is naturally fibrous Basalt fiber, a mineral fiber having high tensile strength Glass wool Pele's hair Risk and Safety Statements References External links Statistics Canada documents on shipments of mineral wool in Canada Review of published data on exposure to mineral wool during installation work by A Jones and A Sanchez Jimenez, Institute of Occupational Medicine Research Report TM/11/01 Assessment of airborne mineral wool fibres in domestic houses by J Dodgson and others. Institute of Occupational Medicine Research Report TM/87/18 Building insulation materials Materials
Mineral wool
[ "Physics" ]
2,877
[ "Materials", "Matter" ]
176,356
https://en.wikipedia.org/wiki/Urbain%20Le%20Verrier
Urbain Jean Joseph Le Verrier (11 March 1811 – 23 September 1877) was a French astronomer and mathematician who specialized in celestial mechanics and is best known for predicting the existence and position of Neptune using only mathematics. The calculations were made to explain discrepancies between Uranus's observed orbit and the orbit predicted by the laws of Kepler and Newton. Le Verrier sent the coordinates to Johann Gottfried Galle in Berlin, asking him to verify the prediction. Galle found Neptune the same night he received Le Verrier's letter, within 1° of the predicted position. The discovery of Neptune is widely regarded as a dramatic validation of celestial mechanics, and is one of the most remarkable moments of 19th-century science. Life Early years Urbain Le Verrier was born at Saint-Lô, Manche, France, to a modest bourgeois family, his parents being Louis-Baptiste Le Verrier and Marie-Jeanne-Josephine-Pauline de Baudre. He studied at the École Polytechnique, where he briefly pursued chemistry under Gay-Lussac, writing papers on the combinations of phosphorus and hydrogen, and of phosphorus and oxygen. He then switched to astronomy, particularly celestial mechanics, and accepted a job at the Paris Observatory. He spent most of his professional life there, eventually becoming director of the institution from 1854 to 1870 and again from 1873 to 1877. In 1846 Le Verrier became a member of the French Academy of Sciences, and in 1855 he was elected a foreign member of the Royal Swedish Academy of Sciences. His name is one of the 72 names inscribed on the Eiffel Tower. Career Early work Le Verrier's first work in astronomy was presented to the Académie des Sciences in September 1839, entitled Sur les variations séculaires des orbites des planètes (On the Secular Variations of the Orbits of the Planets). This work addressed the then most important question in astronomy: the stability of the Solar System, first investigated by Laplace. He was able to derive some important limits on the motions of the system, but because the masses of the planets were known only inaccurately, his results were tentative. From 1844 to 1847, Le Verrier published a series of works on periodic comets, in particular those of Lexell, Faye and de Vico. He was able to show some interesting interactions with the planet Jupiter, proving that certain comets were actually the reappearance of previously known comets flung into different orbits. Discovery of Neptune Le Verrier's most famous achievement is his prediction of the existence of the then unknown planet Neptune, using only mathematics and astronomical observations of the known planet Uranus. Encouraged by the physicist Arago, Director of the Paris Observatory, Le Verrier was intensely engaged for months in complex calculations to explain small but systematic discrepancies between Uranus's observed orbit and the one predicted from Newton's law of gravity. At the same time, but unknown to Le Verrier, similar calculations were made by John Couch Adams in England. Le Verrier announced his final predicted position for Uranus's unseen perturbing planet publicly to the French Academy on 31 August 1846, two days before Adams's final solution was privately mailed to the Royal Greenwich Observatory. Le Verrier transmitted his own prediction by 18 September in a letter to Johann Galle of the Berlin Observatory.
The letter arrived five days later, and the planet was found with the Berlin Fraunhofer refractor that same evening, 23 September 1846, by Galle and Heinrich d'Arrest, within 1° of the predicted location near the boundary between Capricornus and Aquarius. There was, and to an extent still is, controversy over the apportionment of credit for the discovery. There is no ambiguity to the discovery claims of Le Verrier, Galle, and d'Arrest. Adams's work was begun earlier than Le Verrier's but was finished later and was unrelated to the actual discovery. Not even the briefest account of Adams's predicted orbital elements was published until more than a month after Berlin's visual confirmation. Adams made full public acknowledgement of Le Verrier's priority and credit (not forgetting to mention the role of Galle) when he gave his paper to the Royal Astronomical Society in November 1846. Tables of the planets Early in the 19th century, the methods of predicting the motions of the planets were somewhat scattered, having been developed over decades by many different researchers. In 1847, Le Verrier took on the task to "... embrace in a single work the entire planetary system, put everything in harmony if possible, otherwise, declare with certainty that there are as yet unknown causes of perturbations ...", a work which would occupy him for the rest of his life. Le Verrier began by re-evaluating, to the 7th order, the perturbing function used in calculating the planetary perturbations. This derivation, which resulted in 469 mathematical terms, was complete by 1849. He next collected observations of the positions of the planets as far back as 1750. Examining these and correcting for inconsistencies with the most recent data occupied him until 1852. Le Verrier published, in the Annales de l'Observatoire de Paris, tables of the motions of all of the known planets, releasing them as he completed them, starting in 1858. The tables formed the fundamental ephemeris of the Connaissance des Temps, the astronomical almanac of the Bureau des Longitudes, until about 1912. About that time, Le Verrier's work on the outer planets was revised and expanded by Gaillot. Precession of Mercury Le Verrier began studying the motion of Mercury as early as 1843, with a report entitled Détermination nouvelle de l'orbite de Mercure et de ses perturbations (A New Determination of the Orbit of Mercury and its Perturbations). In 1859, Le Verrier was the first to report that the slow precession of Mercury's orbit around the Sun could not be completely explained by Newtonian mechanics and perturbations by the known planets. He suggested, among possible explanations, that another planet (or perhaps, instead, a series of smaller 'corpuscules') might exist in an orbit even closer to the Sun than that of Mercury, to account for this perturbation. (Other explanations considered included a slight oblateness of the Sun.) The success of the search for Neptune based on its perturbations of the orbit of Uranus led astronomers to place some faith in this possible explanation, and the hypothetical planet was even named Vulcan. However, no such planet was ever found, and the anomalous precession was eventually explained by the general theory of relativity.
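General relativity predicts a perihelion advance of Δφ = 6πGM/(c²a(1−e²)) per orbit, which for Mercury accounts for the roughly 43 arcseconds per century that Newtonian perturbations leave unexplained. A quick numerical check (the orbital constants below are rounded standard values):

```python
import math

GM_SUN = 1.32712440018e20  # m³/s², solar gravitational parameter
C = 2.99792458e8           # m/s, speed of light
a = 5.7909e10              # m, Mercury's semi-major axis
e = 0.2056                 # Mercury's orbital eccentricity
period_days = 87.969       # Mercury's orbital period

dphi = 6 * math.pi * GM_SUN / (C**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = math.degrees(dphi * orbits_per_century) * 3600
print(f"{arcsec_per_century:.1f} arcsec/century")  # prints ≈ 43.0
```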
Later life Le Verrier's methods of management were disliked by the staff of the Observatoire, and the disputes became so great that he was driven out in 1870. He was succeeded by Delaunay, but was reinstated in 1873 after Delaunay accidentally drowned. Le Verrier held the position until his death in 1877. Le Verrier married Lucille Clotilde Choquet in 1837 and had three children. He died in Paris, France, and was buried in the Montparnasse Cemetery. A large stone celestial globe sits over his grave. He is remembered by the phrase attributed to Arago: "the man who discovered a planet with the point of his pen." In 1847, he was elected to the American Philosophical Society. Honours Gold Medal of the Royal Astronomical Society – 1868 and 1876 Namesake of craters on the Moon and Mars, a ring of Neptune, and the asteroid 1997 Leverrier One of the 72 names engraved on the Eiffel Tower See also Discovery of Neptune List of works by Henri Chapu Statue of Le Verrier Further reading Lyttleton, Raymond Arthur, Mysteries of the Solar System, Clarendon, Oxford, UK (1968), Chapter 7: The discovery of Neptune References External links Le Verrier on the French 50 Franc banknote Obituary – Nature, 1877, vol. 16, p. 453 Interesting interview with M. LeVerrier, director of the Paris Observatory – New York Herald, 14 April 1877, p. 7 Archived at Ghostarchive and the Wayback Machine: Virtual exhibition on Paris Observatory digital library Le Verrier's works digitalized on Paris Observatory digital library 1811 births 1877 deaths People from Saint-Lô École Polytechnique alumni Burials at Montparnasse Cemetery 19th-century French astronomers French Roman Catholics 19th-century French mathematicians Lycée Louis-le-Grand alumni Members of the French Academy of Sciences Members of the Royal Swedish Academy of Sciences Foreign members of the Royal Society Neptune Recipients of the Gold Medal of the Royal Astronomical Society Recipients of the Copley Medal Discoverers of astronomical objects
Urbain Le Verrier
[ "Astronomy" ]
1,796
[ "Astronomers", "Astronomical objects", "Discoverers of astronomical objects" ]
176,388
https://en.wikipedia.org/wiki/Internet%20Storm%20Center
The Internet Storm Center (ISC) is a program of the SANS Technology Institute, a branch of the SANS Institute, which monitors the level of malicious activity on the Internet, particularly with regard to large-scale infrastructure events. History The ISC evolved from "Incidents.org", a site initially founded by the SANS Institute to assist in public-private sector cooperation during the Y2K cutover. In 2000, Incidents.org started to cooperate with DShield to create a Consensus Incidents Database (CID), which collected security information from cooperating sites and agencies for mass analysis. On March 22, 2001, the SANS CID was responsible for the early detection of the "Lion" worm attacks on various facilities. The quick warning and counter-efforts organized by the CID were instrumental in controlling the damage done by this worm, which otherwise might have been considerably worse. Later, DShield was integrated more closely into Incidents.org as the SANS Institute started to sponsor DShield. The CID was renamed the "Internet Storm Center" in acknowledgement of the way it uses its distributed sensor network, similar to the way a weather reporting center detects and tracks an atmospheric storm and provides warnings. Since that time the ISC has expanded its monitoring operations; its website cites a figure of over twenty million "intrusion detection log entries" per day. It continues to provide analyses and alerts of security threats to the Internet community. During the last hours of 2005 and the first weeks of 2006, the Internet Storm Center spent its longest period up to that time at "yellow" on the Infocon, in response to the WMF vulnerability. The most prominent feature of the ISC is a daily "Handler Diary", which is prepared by one of the roughly 40 volunteer incident handlers and summarizes the events of the day. It is frequently the first public source for new attack trends, and it actively facilitates cooperation by soliciting more information to understand particular attacks better. The Internet Storm Center is currently staffed with approximately 40 volunteers, representing eight countries and many industries. Notable members Director of the ISC: Marcus Sachs Chief Technical Officer: Johannes Ullrich Handler Tom Liston External links Internet Storm Center webpage SANS Technology Institute Infocon The Repository of Industrial Security Incidents Computing websites Internet security
Internet Storm Center
[ "Technology" ]
451
[ "Computing websites" ]
176,399
https://en.wikipedia.org/wiki/Zeeman%20effect
The Zeeman effect is the splitting of a spectral line into several components in the presence of a static magnetic field. It is caused by the interaction of the magnetic field with the magnetic moment of the atomic electron associated with its orbital motion and spin; this interaction shifts some orbital energies more than others, resulting in the split spectrum. The effect is named after the Dutch physicist Pieter Zeeman, who discovered it in 1896 and received a Nobel Prize in Physics for this discovery. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules. Since the distance between the Zeeman sub-levels is a function of magnetic field strength, this effect can be used to measure magnetic field strength, e.g. that of the Sun and other stars or in laboratory plasmas. Discovery In 1896 Zeeman learned that his laboratory had one of Henry Augustus Rowland's highest-resolving diffraction gratings. Zeeman had read James Clerk Maxwell's article in Encyclopædia Britannica describing Michael Faraday's failed attempts to influence light with magnetism. Zeeman wondered if the new spectrographic techniques could succeed where early efforts had not. When illuminated by a slit-shaped source, the grating produces a long array of slit images corresponding to different wavelengths. Zeeman placed a piece of asbestos soaked in salt water into a Bunsen burner flame at the source of the grating: he could easily see two lines for sodium light emission. Energizing a 10-kilogauss magnet around the flame, he observed a slight broadening of the sodium images. When Zeeman switched to cadmium as the source, he observed the images split when the magnet was energized. These splittings could be analyzed with Hendrik Lorentz's then-new electron theory. In retrospect, we now know that the magnetic effects on sodium require quantum mechanical treatment. Zeeman and Lorentz were awarded the 1902 Nobel Prize; in his acceptance speech, Zeeman explained his apparatus and showed slides of the spectrographic images. Nomenclature Historically, one distinguishes between the normal and the anomalous Zeeman effect (the latter discovered by Thomas Preston in Dublin, Ireland). The anomalous effect appears in transitions where the net spin of the electrons is non-zero. It was called "anomalous" because the electron spin had not yet been discovered, and so there was no good explanation for it at the time that Zeeman observed the effect. Wolfgang Pauli recalled that when asked by a colleague why he looked unhappy, he replied, "How can one look happy when he is thinking about the anomalous Zeeman effect?" At higher magnetic field strength the effect ceases to be linear. At even higher field strengths, comparable to the strength of the atom's internal field, the electron coupling is disturbed and the spectral lines rearrange. This is called the Paschen–Back effect. In the modern scientific literature, these terms are rarely used, with a tendency to use just "the Zeeman effect". Another rarely used term is the inverse Zeeman effect, referring to the Zeeman effect in an absorption spectral line. A similar effect, the splitting of nuclear energy levels in the presence of a magnetic field, is referred to as the nuclear Zeeman effect.
Theoretical presentation The total Hamiltonian of an atom in a magnetic field is $H = H_0 + V_M$, where $H_0$ is the unperturbed Hamiltonian of the atom, and $V_M$ is the perturbation due to the magnetic field: $V_M = -\vec{\mu} \cdot \vec{B}$, where $\vec{\mu}$ is the magnetic moment of the atom. The magnetic moment consists of the electronic and nuclear parts; however, the latter is many orders of magnitude smaller and will be neglected here. Therefore, $\vec{\mu} \approx -g_J \mu_B \vec{J}/\hbar$, where $\mu_B$ is the Bohr magneton, $\vec{J}$ is the total electronic angular momentum, and $g_J$ is the Landé g-factor. A more accurate approach is to take into account that the operator of the magnetic moment of an electron is a sum of the contributions of the orbital angular momentum $\vec{L}$ and the spin angular momentum $\vec{S}$, with each multiplied by the appropriate gyromagnetic ratio: $\vec{\mu} = -\mu_B (g_l \vec{L} + g_s \vec{S})/\hbar$, where $g_l = 1$ and $g_s \approx 2.0023$ (the latter is called the anomalous gyromagnetic ratio; the deviation of the value from 2 is due to the effects of quantum electrodynamics). In the case of the LS coupling, one can sum over all electrons in the atom: $V_M = \mu_B (g_L \vec{L} + g_S \vec{S}) \cdot \vec{B}/\hbar$, where $\vec{L}$ and $\vec{S}$ are the total orbital angular momentum and total spin of the atom, and averaging is done over a state with a given value of the total angular momentum. If the interaction term $V_M$ is small (less than the fine structure), it can be treated as a perturbation; this is the Zeeman effect proper. In the Paschen–Back effect, described below, $V_M$ exceeds the LS coupling significantly (but is still small compared to $H_0$). In ultra-strong magnetic fields, the magnetic-field interaction may exceed $H_0$, in which case the atom can no longer exist in its normal meaning, and one talks about Landau levels instead. There are intermediate cases, which are more complex than these limit cases. Weak field (Zeeman effect) If the spin–orbit interaction dominates over the effect of the external magnetic field, $\vec{L}$ and $\vec{S}$ are not separately conserved; only the total angular momentum $\vec{J} = \vec{L} + \vec{S}$ is. The spin and orbital angular momentum vectors can be thought of as precessing about the (fixed) total angular momentum vector $\vec{J}$. The (time-)"averaged" spin vector is then the projection of the spin onto the direction of $\vec{J}$: $\vec{S}_{avg} = \frac{(\vec{S}\cdot\vec{J})}{J^2}\vec{J}$, and for the (time-)"averaged" orbital vector: $\vec{L}_{avg} = \frac{(\vec{L}\cdot\vec{J})}{J^2}\vec{J}$. Thus, $\langle V_M \rangle = \frac{\mu_B}{\hbar}\left(g_L \frac{\vec{L}\cdot\vec{J}}{J^2} + g_S \frac{\vec{S}\cdot\vec{J}}{J^2}\right)\vec{J}\cdot\vec{B}$. Using $\vec{L} = \vec{J} - \vec{S}$ and squaring both sides, we get $\vec{S}\cdot\vec{J} = \frac{1}{2}(J^2 + S^2 - L^2) = \frac{\hbar^2}{2}[j(j+1) + s(s+1) - l(l+1)]$, and using $\vec{S} = \vec{J} - \vec{L}$ and squaring both sides, we get $\vec{L}\cdot\vec{J} = \frac{1}{2}(J^2 - S^2 + L^2) = \frac{\hbar^2}{2}[j(j+1) - s(s+1) + l(l+1)]$. Combining everything and taking $J_z = \hbar m_j$, we obtain the magnetic potential energy of the atom in the applied external magnetic field, $V_M = \mu_B B m_j \left[ g_L \frac{j(j+1) - s(s+1) + l(l+1)}{2j(j+1)} + g_S \frac{j(j+1) + s(s+1) - l(l+1)}{2j(j+1)} \right]$, where the quantity in square brackets is the Landé g-factor $g_J$ of the atom ($g_L = 1$ and $g_S \approx 2$) and $m_j$ is the z-component of the total angular momentum. For a single electron above filled shells, $s = 1/2$ and $j = l \pm 1/2$, and the Landé g-factor can be simplified into $g_J = 1 \pm \frac{g_S - 1}{2l + 1}$. Taking $V_M$ to be the perturbation, the Zeeman correction to the energy is $E_Z = \langle V_M \rangle = g_J \mu_B m_j B$. Example: Lyman-alpha transition in hydrogen The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions $2P_{1/2} \to 1S_{1/2}$ and $2P_{3/2} \to 1S_{1/2}$. In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1S1/2 and 2P1/2 levels into 2 states each ($m_j = \pm 1/2$) and the 2P3/2 level into 4 states ($m_j = \pm 3/2, \pm 1/2$). The Landé g-factors for the three levels are: $g_J = 2$ for $1S_{1/2}$ ($j = 1/2$, $l = 0$), $g_J = 2/3$ for $2P_{1/2}$ ($j = 1/2$, $l = 1$), and $g_J = 4/3$ for $2P_{3/2}$ ($j = 3/2$, $l = 1$). Note in particular that the size of the energy splitting is different for the different orbitals, because the gJ values are different. (In the accompanying level diagram, fine-structure splitting, which occurs even in the absence of a magnetic field due to spin–orbit coupling, is depicted on the left; the additional Zeeman splitting in the presence of a magnetic field is depicted on the right.)
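A quick numerical sketch of the weak-field result above: the bracketed Landé expression and the shift E_Z = g_J μ_B m_j B, evaluated for the three Lyman-alpha levels (the 1 T field is an illustrative choice; the Bohr magneton is rounded):

```python
MU_B = 5.7883818e-5  # Bohr magneton, eV/T

def lande_g(j: float, l: int, s: float, g_L: float = 1.0, g_S: float = 2.0) -> float:
    """Landé g-factor, i.e. the bracketed expression in the text."""
    return (g_L * (j*(j + 1) - s*(s + 1) + l*(l + 1)) / (2*j*(j + 1))
            + g_S * (j*(j + 1) + s*(s + 1) - l*(l + 1)) / (2*j*(j + 1)))

B = 1.0  # tesla, illustrative field strength
for name, j, l in [("1S1/2", 0.5, 0), ("2P1/2", 0.5, 1), ("2P3/2", 1.5, 1)]:
    g = lande_g(j, l, 0.5)
    m_values = [m - j for m in range(int(2*j) + 1)]   # m_j = -j, ..., +j
    shifts = [g * MU_B * m * B for m in m_values]     # E_Z for each sublevel
    print(name, round(g, 4), [f"{E:+.2e} eV" for E in shifts])
# Prints g = 2, 2/3 and 4/3, matching the values quoted above.
```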
Strong field (Paschen–Back effect) The Paschen–Back effect is the splitting of atomic energy levels in the presence of a strong magnetic field. This occurs when an external magnetic field is sufficiently strong to disrupt the coupling between orbital ($\vec{L}$) and spin ($\vec{S}$) angular momenta. This effect is the strong-field limit of the Zeeman effect; when $s = 0$ (no net spin), the two effects are equivalent. The effect was named after the German physicists Friedrich Paschen and Ernst E. A. Back. When the magnetic-field perturbation significantly exceeds the spin–orbit interaction, one can safely assume $[H_0, S_z] = 0$. This allows the expectation values of $L_z$ and $S_z$ to be easily evaluated for a state $|\psi\rangle = |n, l, m_l, m_s\rangle$. The energies are simply $E_z = \langle\psi| H_0 + \frac{B \mu_B}{\hbar}(L_z + g_s S_z) |\psi\rangle = E_0 + \mu_B B (m_l + g_s m_s)$. The above may be read as implying that the LS coupling is completely broken by the external field. However, $m_l$ and $m_s$ are still "good" quantum numbers. Together with the selection rules for an electric dipole transition, i.e., $\Delta s = 0$, $\Delta m_s = 0$, $\Delta l = \pm 1$, $\Delta m_l = 0, \pm 1$, this allows one to ignore the spin degree of freedom altogether. As a result, only three spectral lines will be visible, corresponding to the $\Delta m_l = 0, \pm 1$ selection rule. The splitting $\Delta E = \mu_B B \Delta m_l$ is independent of the unperturbed energies and electronic configurations of the levels being considered. More precisely, if $s \neq 0$, each of these three components is actually a group of several transitions due to the residual spin–orbit coupling and relativistic corrections (which are of the same order, known as 'fine structure'). First-order perturbation theory with these corrections yields the following formula for the hydrogen atom in the Paschen–Back limit: $E_{z+fs} = E_z + \frac{m_e c^2 \alpha^4}{2 n^3}\left\{\frac{3}{4n} - \left[\frac{l(l+1) - m_l m_s}{l(l+1/2)(l+1)}\right]\right\}$. Example: Lyman-alpha transition in hydrogen In this example, the fine-structure corrections are ignored. Intermediate field for j = 1/2 In the magnetic dipole approximation, the Hamiltonian which includes both the hyperfine and Zeeman interactions is $H = hA\,\frac{\vec{I}\cdot\vec{J}}{\hbar^2} + \frac{B}{\hbar}\left(g_J \mu_B J_z + g_I \mu_N I_z\right)$ (taking $\vec{B}$ along $z$), where $A$ is the hyperfine splitting (in Hz) at zero applied magnetic field, $\mu_B$ and $\mu_N$ are the Bohr magneton and nuclear magneton respectively, $\vec{J}$ and $\vec{I}$ are the electron and nuclear angular momentum operators, and $g_J$ is the Landé g-factor: $g_J = g_L\frac{j(j+1) - s(s+1) + l(l+1)}{2j(j+1)} + g_S\frac{j(j+1) + s(s+1) - l(l+1)}{2j(j+1)}$. In the case of weak magnetic fields, the Zeeman interaction can be treated as a perturbation to the $|F, m_F\rangle$ basis. In the high-field regime, the magnetic field becomes so strong that the Zeeman effect will dominate, and one must use a more complete basis of $|I, J, m_I, m_J\rangle$, or just $|m_I, m_J\rangle$, since $I$ and $J$ will be constant within a given level. To get the complete picture, including intermediate field strengths, we must consider eigenstates which are superpositions of the $|F, m_F\rangle$ and $|m_I, m_J\rangle$ basis states. For $j = 1/2$, the Hamiltonian can be solved analytically, resulting in the Breit–Rabi formula (named after Gregory Breit and Isidor Isaac Rabi). Notably, the electric quadrupole interaction is zero for $l = 0$ ($j = 1/2$), so this formula is fairly accurate. We now utilize quantum mechanical ladder operators, which are defined for a general angular momentum operator $L$ as $L_\pm \equiv L_x \pm i L_y$. These ladder operators have the property $L_\pm |l, m_l\rangle = \hbar\sqrt{l(l+1) - m_l(m_l \pm 1)}\,|l, m_l \pm 1\rangle$, as long as $m_l$ lies in the range $-l \le m_l \le l$ (otherwise, they return zero). Using the ladder operators $J_\pm$ and $I_\pm$, together with the identity $\vec{I}\cdot\vec{J} = I_z J_z + \frac{1}{2}(I_+ J_- + I_- J_+)$, we can rewrite the Hamiltonian as $H = hA\,\frac{I_z J_z + \frac{1}{2}(I_+ J_- + I_- J_+)}{\hbar^2} + \frac{B}{\hbar}(g_J \mu_B J_z + g_I \mu_N I_z)$. We can now see that at all times the total angular momentum projection $m_F = m_J + m_I$ will be conserved. This is because both $J_z$ and $I_z$ leave states with definite $m_J$ and $m_I$ unchanged, while $J_+ I_-$ and $J_- I_+$ either increase $m_J$ and decrease $m_I$ or vice versa, so the sum is always unaffected. Furthermore, since $j = 1/2$, there are only two possible values of $m_J$, which are $m_J = \pm 1/2$. Therefore, for every value of $m_F$ there are only two possible states, and we can define them as the basis $|\pm\rangle \equiv |m_J = \pm 1/2,\ m_I = m_F \mp 1/2\rangle$. This pair of states is a two-level quantum mechanical system.
Now we can determine the matrix elements of the Hamiltonian in this basis. Solving for the eigenvalues of the resulting 2×2 matrix – as can be done by hand (see two-level quantum mechanical system), or more easily, with a computer algebra system – we arrive at the energy shifts: $\Delta E_{F = I \pm 1/2} = -\frac{h \Delta W}{2(2I+1)} + g_I \mu_N B m_F \pm \frac{h \Delta W}{2}\sqrt{1 + \frac{4 m_F x}{2I+1} + x^2}$, where $\Delta W$ is the splitting (in units of Hz) between the two hyperfine sublevels in the absence of a magnetic field, and $x \equiv \frac{(g_J \mu_B - g_I \mu_N) B}{h \Delta W}$ is referred to as the 'field strength parameter' (note: for $m_F = \pm(I + 1/2)$ the expression under the square root is an exact square, and so the last term should be replaced by $\pm\frac{h \Delta W}{2}(1 \pm x)$). This equation is known as the Breit–Rabi formula and is useful for systems with one valence electron in an $s$ ($j = 1/2$) level. Note that the index $F$ in $\Delta E_{F = I \pm 1/2}$ should be considered not as the total angular momentum of the atom but as the asymptotic total angular momentum. It is equal to the total angular momentum only if $B = 0$; otherwise, eigenvectors corresponding to different eigenvalues of the Hamiltonian are superpositions of states with different $F$ but equal $m_F$ (the only exceptions are the fully stretched states $|F = I + 1/2, m_F = \pm(I + 1/2)\rangle$). Applications Astrophysics George Ellery Hale was the first to notice the Zeeman effect in the solar spectra, indicating the existence of strong magnetic fields in sunspots. Such fields can be quite high, on the order of 0.1 tesla or higher. Today, the Zeeman effect is used to produce magnetograms showing the variation of magnetic field on the Sun, and to analyse the magnetic field geometries in other stars. Laser cooling The Zeeman effect is utilized in many laser cooling applications such as the magneto-optical trap and the Zeeman slower. Spintronics Zeeman-energy-mediated coupling of spin and orbital motions is used in spintronics for controlling electron spins in quantum dots through electric dipole spin resonance. Metrology Old high-precision frequency standards, i.e. hyperfine-structure-transition-based atomic clocks, may require periodic fine-tuning due to exposure to magnetic fields. This is carried out by measuring the Zeeman effect on specific hyperfine structure transition levels of the source element (cesium) and applying a uniformly precise, low-strength magnetic field to the source, in a process known as degaussing. The Zeeman effect may also be utilized to improve accuracy in atomic absorption spectroscopy. Biology A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect. Nuclear spectroscopy The nuclear Zeeman effect is important in such applications as nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI), and Mössbauer spectroscopy. Other Electron spin resonance spectroscopy is based on the Zeeman effect. Demonstrations The Zeeman effect can be demonstrated by placing a sodium vapor source in a powerful electromagnet and viewing a sodium vapor lamp through the magnet opening (see diagram). With the magnet off, the sodium vapor source will block the lamp light; when the magnet is turned on, the lamp light will be visible through the vapor. The sodium vapor can be created by sealing sodium metal in an evacuated glass tube and heating it while the tube is in the magnet. Alternatively, salt (sodium chloride) on a ceramic stick can be placed in the flame of a Bunsen burner as the sodium vapor source. When the magnetic field is energized, the lamp image will be brighter. However, the magnetic field also affects the flame, making the observation depend upon more than just the Zeeman effect. These issues also plagued Zeeman's original work; he devoted considerable effort to ensure his observations were truly an effect of magnetism on light emission.
When salt is added to the Bunsen burner, it dissociates to give sodium and chloride. The sodium atoms are excited by photons from the sodium vapor lamp, with electrons excited from the 3s to 3p states, absorbing light in the process. The sodium vapor lamp emits light at 589 nm, which has precisely the energy needed to excite an electron of a sodium atom; atoms of another element, such as chlorine, would not absorb this light, and no shadow would form. When a magnetic field is applied, the Zeeman effect splits the sodium spectral line into several components, meaning that the energy differences between the 3s and 3p atomic orbitals change. Since the sodium vapor lamp no longer delivers precisely the right frequencies, its light is not absorbed and passes through, and the shadow dims. As the magnetic field strength is increased, the shift in the spectral lines increases and more of the lamp light is transmitted. See also Magneto-optic Kerr effect Voigt effect Faraday effect Cotton–Mouton effect Polarization spectroscopy Zeeman energy Stark effect Lamb shift References External links Zeeman effect – Control light with magnetic fields Spectroscopy Quantum magnetism Foundational quantum physics Articles containing video clips Magneto-optic effects
Zeeman effect
[ "Physics", "Chemistry", "Materials_science" ]
3,179
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Foundational quantum physics", "Quantum mechanics", "Electric and magnetic fields in matter", "Optical phenomena", "Quantum magnetism", "Condensed matter physics", "Magneto-optic effects", "Spe...
176,424
https://en.wikipedia.org/wiki/Galvanic%20anode
A galvanic anode, or sacrificial anode, is the main component of a galvanic cathodic protection system used to protect buried or submerged metal structures from corrosion. Galvanic anodes are made from a metal alloy with a more "active" voltage (more negative reduction potential / more positive oxidation potential) than the metal of the structure. The difference in potential between the two metals means that the galvanic anode corrodes, in effect being "sacrificed" in order to protect the structure. Theory In brief, corrosion is a chemical reaction occurring by an electrochemical mechanism (a redox reaction). During corrosion of iron or steel there are two reactions: oxidation (equation 1), where electrons leave the metal (and the metal dissolves, i.e. actual loss of metal results), Fe → Fe²⁺ + 2e⁻ (1), and reduction, where the electrons are used to convert oxygen and water to hydroxide ions (equation 2): O₂ + 2H₂O + 4e⁻ → 4OH⁻ (2). In most environments, the hydroxide ions and ferrous ions combine to form ferrous hydroxide, Fe²⁺ + 2OH⁻ → Fe(OH)₂, which eventually becomes the familiar brown rust. As corrosion takes place, oxidation and reduction reactions occur and electrochemical cells are formed on the surface of the metal, so that some areas will become anodic (oxidation) and some cathodic (reduction). Electrons flow from the anodic areas into the electrolyte as the metal corrodes. Conversely, as electrons flow from the electrolyte to the cathodic areas, the rate of corrosion is reduced. (The flow of electrons is in the opposite direction of the flow of electric current.) As the metal continues to corrode, the local potentials on the surface of the metal will change, and the anodic and cathodic areas will change and move. As a result, in ferrous metals, a general covering of rust is formed over the whole surface, which will eventually consume all the metal. This is rather a simplified view of the corrosion process, because it can occur in several different forms. Prevention of corrosion by cathodic protection (CP) works by introducing another metal (the galvanic anode) with a much more anodic surface, so that all the current will flow from the introduced anode and the metal to be protected becomes cathodic in comparison to the anode. This effectively stops the oxidation reactions on the metal surface by transferring them to the galvanic anode, which will be sacrificed in favour of the structure under protection. More simply put, this takes advantage of the relatively low stability of magnesium, aluminum, or zinc metals; they dissolve instead of iron because their bonding is weaker compared to iron, which is bonded strongly via its partially filled d-orbitals. For this protection to work, there must be an electron pathway between the anode and the metal to be protected (e.g., a wire or direct contact) and an ion pathway between both the oxidizing agent (e.g., oxygen and water or moist soil) and the anode, and the oxidizing agent and the metal to be protected, thus forming a closed circuit; therefore simply bolting a piece of active metal such as zinc to a less active metal, such as mild steel, in air (a poor ionic conductor) will not furnish any protection. Anode materials There are three main metals used as galvanic anodes: magnesium, aluminum, and zinc. They are all available as blocks, rods, plates, or extruded ribbon. Each material has advantages and disadvantages. Magnesium has the most negative electropotential of the three (see galvanic series) and is more suitable for areas where the electrolyte (soil or water) resistivity is higher.
This is usually on-shore pipelines and other buried structures, although it is also used on boats in fresh water and in water heaters. In some cases, the negative potential of magnesium can be a disadvantage: if the potential of the protected metal becomes too negative, reduction of water or solvated protons may evolve hydrogen atoms on the cathode surface, for instance according to H₂O + e⁻ → H(ads) + OH⁻, leading to hydrogen embrittlement or to disbonding of the coating. Where this is a concern, zinc anodes may be used. An aluminum-zinc-tin alloy called KA90 is commonly used in marine and water heater applications. Zinc and aluminium are generally used in salt water, where the resistivity is generally lower and magnesium dissolves relatively quickly by reaction with water under hydrogen evolution (self-corrosion). Typical uses are for the hulls of ships and boats, offshore pipelines and production platforms, in salt-water-cooled marine engines, on small boat propellers and rudders, and for the internal surface of storage tanks. Zinc is considered a reliable material, but it is not suitable for use at higher temperatures, as it tends to passivate (the oxide layer formed shields it from further oxidation); if this happens, current may cease to flow and the anode stops working. Zinc has a relatively low driving voltage, which means that in higher-resistivity soils or water it may not be able to provide sufficient current. However, in some circumstances – where there is a risk of hydrogen embrittlement, for example – this lower voltage is advantageous, as overprotection is avoided. Aluminium anodes have several advantages, such as a lighter weight and a much higher capacity than zinc. However, their electrochemical behavior is not considered as reliable as zinc's, and greater care must be taken in how they are used. Aluminium anodes will passivate where the chloride concentration is below 1,446 parts per million. One disadvantage of aluminium is that if it strikes a rusty surface, a large thermite spark may be generated, so its use is restricted in tanks where there may be explosive atmospheres and there is a risk of the anode falling. Since the operation of a galvanic anode relies on the difference in electropotential between the anode and the cathode, practically any metal can be used to protect some other, providing there is a sufficient difference in potential. For example, iron anodes can be used to protect copper. Design considerations The design of a galvanic anode CP system should consider many factors, including the type of structure, the resistivity of the electrolyte (soil or water) it will operate in, the type of coating, and the service life. The primary calculation is how much anode material will be required to protect the structure for the required time. Too little material may provide protection for a while, but would need to be replaced regularly. Too much material would provide protection at an unnecessary cost. The required net mass in kg is given by equation (3): Mass = (Current demand × Design life × 8760) / (Utilisation factor × Anode capacity) (3), where the current demand is in amperes, the design life is in years (1 year = 8760 hours), the utilisation factor is dimensionless, and the anode capacity is in ampere-hours per kilogram. The utilisation factor (UF) of the anode is a constant value, depending on the shape of the anode and how it is attached, which signifies how much of the anode can be consumed before it ceases to be effective. A value of 0.8 indicates that 80% of the anode can be consumed before it should be replaced. A long slender stand-off anode (installed on legs to keep the anode away from the structure) has a UF value of 0.9, whereas the UF of a short, flush-mounted anode is 0.8.
Anode capacity is an indication of how much material is consumed as current flows over time. The value for zinc in seawater is 780 Ah/kg, whereas for aluminium it is 2,000 Ah/kg; this reflects the lower atomic mass of aluminium and means that, in theory, aluminium can produce much more current per unit weight than zinc before being depleted, which is one of the factors to consider when choosing a particular material. The amount of current required corresponds directly to the surface area of the metal exposed to the soil or water, so the application of a coating drastically reduces the mass of anode material required. The better the coating, the less anode material is needed. Once the required mass of material is known, the particular type of anode is chosen. Differently shaped anodes will have a different resistance to earth, which governs how much current can be produced, so the resistance of the anode is calculated to ensure that sufficient current will be available. If the resistance of the anode is too high, either a differently shaped or sized anode is chosen, or a greater quantity of anodes must be used. The arrangement of the anodes is then planned so as to provide an even distribution of current over the whole structure. For example, if a particular design shows that a pipeline 10 kilometres long needs 10 anodes, then approximately one anode per kilometre would be more effective than putting all 10 anodes at one end or in the centre. Advantages and disadvantages Advantages: No external power sources required. Relatively easy to install. Lower voltages and current mean that the risk of causing stray-current interference on other structures is low. Requires less frequent monitoring than impressed-current CP systems. Relatively low risk of overprotection. Once installed, testing the system components is relatively simple for trained personnel. Disadvantages: Current capacity is limited by anode mass and by self-consumption at low current density. The lower driving voltage means the anodes may not work in high-resistivity environments. Often requires that the protected structure be electrically isolated from other structures and ground. Anodes are heavy and will increase water resistance on moving structures or pipe interiors. Where DC power is available, electrical energy can be obtained more cheaply than by galvanic anodes. Where large arrays are used, wiring is needed because of the high current flow and the need to keep resistance losses low. Anodes must be carefully placed to avoid interfering with water flow into the propeller. To retain effectiveness, the anodes must be inspected and/or replaced as part of normal maintenance. Cost effectiveness As the anode materials used are generally more costly than iron, using this method to protect ferrous metal structures may not appear to be particularly cost-effective. However, consideration should also be given to the costs incurred to repair a corroded hull or to replace a steel pipeline or tank whose structural integrity has been compromised by corrosion. There is nevertheless a limit to the cost-effectiveness of a galvanic system: on larger structures, such as long pipelines, so many anodes may be needed that it would be more cost-effective to install impressed-current cathodic protection. Production of sacrificial anodes The basic method is to produce sacrificial anodes through a casting process, although two casting methods can be distinguished. The high-pressure die-casting process for sacrificial anodes is widespread. It is a fully automated machine process.
In order for the manufacturing process to run reliably and repeatably, a modification of the processed sacrificial anode alloy is required. Alternatively, the gravity casting process is used for the production of sacrificial anodes. This process is performed manually or in a partially automated way. The alloy does not have to be adapted to the manufacturing process, but can instead be formulated purely for optimum corrosion protection. See also Galvanic corrosion Notes References A. W. Peabody, Peabody's Control of Pipeline Corrosion, 2nd ed., 2001, NACE International. Shreir, L. L., et al., Corrosion, Vol. 2, 3rd ed., 1994. Baeckmann, Schwenck and Prinz, Handbook of Cathodic Corrosion Protection, 3rd ed., 1997. Det Norske Veritas, Recommended Practice for Cathodic Protection Design, DNV RP-B401, 2005.
The neutral theory of molecular evolution holds that most evolutionary changes occur at the molecular level, and most of the variation within and between species is due to random genetic drift of mutant alleles that are selectively neutral. The theory applies only to evolution at the molecular level, and is compatible with phenotypic evolution being shaped by natural selection as postulated by Charles Darwin. The neutral theory allows for the possibility that most mutations are deleterious, but holds that because these are rapidly removed by natural selection, they do not make significant contributions to variation within and between species at the molecular level. A neutral mutation is one that does not affect an organism's ability to survive and reproduce. The neutral theory assumes that most mutations that are not deleterious are neutral rather than beneficial. Because only a fraction of gametes are sampled in each generation of a species, the neutral theory suggests that a mutant allele can arise within a population and reach fixation by chance, rather than by selective advantage. The theory was introduced by the Japanese biologist Motoo Kimura in 1968, and independently by two American biologists, Jack Lester King and Thomas Hughes Jukes, in 1969, and described in detail by Kimura in his 1983 monograph The Neutral Theory of Molecular Evolution. The proposal of the neutral theory was followed by an extensive "neutralist–selectionist" controversy over the interpretation of patterns of molecular divergence and gene polymorphism, peaking in the 1970s and 1980s. Neutral theory is frequently used as the null hypothesis, as opposed to adaptive explanations, for describing the emergence of morphological or genetic features in organisms and populations. This has been suggested in a number of areas, including in explaining genetic variation between populations of one nominal species, the emergence of complex subcellular machinery, and the convergent emergence of several typical microbial morphologies. Origins While some scientists, such as Freese (1962) and Freese and Yoshida (1965), had suggested that neutral mutations were probably widespread, the original mathematical derivation of the theory was published by R.A. Fisher in 1930. Fisher, however, gave a reasoned argument for believing that, in practice, neutral gene substitutions would be very rare. A coherent theory of neutral evolution was first proposed by Motoo Kimura in 1968 and by King and Jukes independently in 1969. Kimura initially focused on differences among species; King and Jukes focused on differences within species. Many molecular biologists and population geneticists also contributed to the development of the neutral theory. The principles of population genetics, established by J.B.S. Haldane, R.A. Fisher, and Sewall Wright, created a mathematical approach to analyzing gene frequencies that contributed to the development of Kimura's theory. Haldane's dilemma regarding the cost of selection was used as motivation by Kimura. Haldane estimated that it takes about 300 generations for a beneficial mutation to become fixed in a mammalian lineage, meaning that the number of substitutions (1.5 per year) in the evolution between humans and chimpanzees was too high to be explained by beneficial mutations. Functional constraint The neutral theory holds that as functional constraint diminishes, the probability that a mutation is neutral rises, and so should the rate of sequence divergence.
When comparing various proteins, extremely high evolutionary rates were observed in proteins such as fibrinopeptides and the C chain of the proinsulin molecule, which both have little to no functionality compared to their active molecules. Kimura and Ohta also estimated that the alpha and beta chains on the surface of a hemoglobin protein evolve at a rate almost ten times faster than the inside pockets, which would imply that the overall molecular structure of hemoglobin is less significant than the inside, where the iron-containing heme groups reside. There is evidence that rates of nucleotide substitution are particularly high in the third position of a codon, where there is little functional constraint. This view is based in part on the degenerate genetic code, in which sequences of three nucleotides (codons) may differ and yet encode the same amino acid (GCC and GCA both encode alanine, for example). Consequently, many potential single-nucleotide changes are in effect "silent" or "unexpressed" (see synonymous or silent substitution). Such changes are presumed to have little or no biological effect. Quantitative theory Kimura also developed the infinite sites model (ISM) to provide insight into evolutionary rates of mutant alleles. If μ represents the rate of mutation per gamete per generation in a population of N individuals, each with two sets of chromosomes, the total number of new mutants in each generation is 2Nμ. Now let k represent the evolution rate in terms of a mutant allele becoming fixed in a population. According to the ISM, selectively neutral mutations appear at rate μ in each of the 2N copies of a gene, and fix with probability 1/(2N). Because any of the 2N genes have the ability to become fixed in a population, k is equal to 2Nμ × 1/(2N), resulting in the evolutionary rate equation: k = μ. This means that if all mutations were neutral, the rate at which fixed differences accumulate between divergent populations is predicted to be equal to the per-individual mutation rate, independent of population size. When the proportion of mutations that are neutral is constant, so is the divergence rate between populations. This provides a rationale for the molecular clock, which predated neutral theory. The ISM also demonstrates a constancy that is observed in molecular lineages. This stochastic process is assumed to obey equations describing random genetic drift by means of accidents of sampling, rather than, for example, genetic hitchhiking of a neutral allele due to genetic linkage with non-neutral alleles. After appearing by mutation, a neutral allele may become more common within the population via genetic drift. Usually, it will be lost, or in rare cases it may become fixed, meaning that the new allele becomes standard in the population. According to the neutral theory of molecular evolution, the amount of genetic variation within a species should be proportional to the effective population size. The "neutralist–selectionist" debate A heated debate arose when Kimura's theory was published, largely revolving around the relative percentages of polymorphic and fixed alleles that are "neutral" versus "non-neutral". A genetic polymorphism means that different forms of particular genes, and hence of the proteins that they produce, co-exist within a species. Selectionists claimed that such polymorphisms are maintained by balancing selection, while neutralists view the variation of a protein as a transient phase of molecular evolution. Studies by Richard K. Koehn and W. F.
Eanes demonstrated a correlation between polymorphism and the molecular weight of protein subunits, consistent with the neutral-theory assumption that larger subunits should have higher rates of neutral mutation. Selectionists, by contrast, attribute polymorphisms chiefly to environmental conditions rather than to structural and functional factors. According to the neutral theory of molecular evolution, the amount of genetic variation within a species should be proportional to the effective population size. Levels of genetic diversity, however, vary much less than census population sizes, giving rise to the "paradox of variation". While high levels of genetic diversity were one of the original arguments in favor of neutral theory, the paradox of variation has been one of the strongest arguments against it. There are a large number of statistical methods for testing whether neutral theory is a good description of evolution (e.g., the McDonald–Kreitman test), and many authors have claimed detection of selection. Some researchers have nevertheless argued that the neutral theory still stands, while expanding the definition of neutral theory to include background selection at linked sites. Nearly neutral theory Tomoko Ohta also emphasized the importance of nearly neutral mutations, in particular slightly deleterious mutations. The nearly neutral theory stems from the prediction of neutral theory that the balance between selection and genetic drift depends on effective population size. Nearly neutral mutations are those that carry selection coefficients smaller in magnitude than the inverse of twice the effective population size. The population dynamics of nearly neutral mutations are only slightly different from those of neutral mutations unless the absolute magnitude of the selection coefficient is greater than 1/N, where N is the effective population size with respect to selection. The effective population size affects whether slightly deleterious mutations can be treated as neutral or as deleterious. In large populations, selection can decrease the frequency of slightly deleterious mutations, which therefore behave as deleterious. However, in small populations, genetic drift can more easily overcome selection, causing slightly deleterious mutations to behave as if neutral and to drift to fixation or loss. Constructive neutral evolution The groundwork for the theory of constructive neutral evolution (CNE) was laid by two papers in the 1990s. Constructive neutral evolution is a theory which suggests that complex structures and processes can emerge through neutral transitions. Although a separate theory altogether, its emphasis on neutrality as a process whereby neutral alleles are randomly fixed by genetic drift finds some inspiration from the earlier attempt by the neutral theory to invoke its importance in evolution. Conceptually, there are two components A and B (which may represent two proteins) which interact with each other. A, which performs a function for the system, does not depend on its interaction with B for its functionality, and the interaction itself may have randomly arisen in an individual, with the ability to disappear without an effect on the fitness of A. This present yet currently unnecessary interaction is therefore called an "excess capacity" of the system. A mutation may then occur which compromises the ability of A to perform its function independently.
However, the A:B interaction that has already emerged sustains the capacity of A to perform its initial function. The emergence of the A:B interaction therefore "presuppresses" the deleterious nature of the mutation, making it a neutral change in the genome that is capable of spreading through the population via random genetic drift. Hence, A has gained a dependency on its interaction with B. In this case, the loss of B or of the A:B interaction would have a negative effect on fitness, and so purifying selection would eliminate individuals in which this occurs. While each of these steps is individually reversible (for example, A may regain the capacity to function independently, or the A:B interaction may be lost), a random sequence of mutations tends to further reduce the capacity of A to function independently, and a random walk through the dependency space may very well result in a configuration in which a return to functional independence of A is far too unlikely to occur, which makes CNE a one-directional or "ratchet-like" process. CNE, which does not invoke adaptationist mechanisms for the origins of more complex systems (those involving more parts and interactions contributing to the whole), has seen application in understanding the evolutionary origins of the eukaryotic spliceosomal complex, RNA editing, additional ribosomal proteins beyond the core, the emergence of long non-coding RNA from junk DNA, and so forth. In some cases, ancestral sequence reconstruction techniques have allowed experimental demonstration of some proposed examples of CNE, as in hetero-oligomeric ring protein complexes in some fungal lineages. CNE has also been put forward as the null hypothesis for explaining complex structures; adaptationist explanations for the emergence of complexity must therefore be rigorously tested on a case-by-case basis against this null hypothesis before acceptance. Grounds for invoking CNE as a null include that it does not presume that changes offered an adaptive benefit to the host or that they were directionally selected for, while maintaining the importance of rigorous demonstrations of adaptation when it is invoked, so as to avoid the excesses of adaptationism criticized by Gould and Lewontin. Empirical evidence for the neutral theory Predictions derived from the neutral theory are generally supported in studies of molecular evolution. One of the corollaries of the neutral theory is that the efficiency of positive selection is higher in populations or species with larger effective population sizes. This relationship between effective population size and selection efficiency has been evidenced by genomic studies of species including chimpanzees and humans, and of domesticated species. In small populations (e.g., during a population bottleneck in a speciation event), slightly deleterious mutations should accumulate. Data from various species support this prediction, in that the ratio of nonsynonymous to synonymous nucleotide substitutions between species generally exceeds that within species. In addition, nucleotide and amino acid substitutions generally accumulate over time in a linear fashion, which is consistent with neutral theory. Arguments against the neutral theory cite evidence of widespread positive selection and selective sweeps in genomic data. Empirical support for the neutral theory may vary depending on the type of genomic data studied and the statistical tools used to detect positive selection.
For example, Bayesian methods for the detection of selected codon sites and McDonald–Kreitman tests have been criticized for their rate of erroneous identification of positive selection. See also Adaptive evolution in the human genome Coalescent theory Evolution of biological complexity Masatoshi Nei Molecular evolution Tomoko Ohta Unified neutral theory of biodiversity References External links Misconceptions about natural selection and adaptation: the neutral theory at http://evolution.berkeley.edu.
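The fixation arithmetic in the quantitative theory section above is easy to check numerically. The following toy Wright–Fisher simulation is a Python sketch written for this article (not code from any cited work; the population size and trial count are arbitrary choices): each neutral mutant starts as a single copy among 2N gene copies, and the fraction of mutants that drift all the way to fixation should approach 1/(2N).

import random

def fixation_fraction(n_individuals=50, trials=20000, seed=1):
    # One trial: a single new neutral mutant copy among 2N gene copies,
    # resampled binomially each generation until it is lost or fixed.
    rng = random.Random(seed)
    two_n = 2 * n_individuals
    fixed = 0
    for _ in range(trials):
        count = 1
        while 0 < count < two_n:
            freq = count / two_n
            count = sum(rng.random() < freq for _ in range(two_n))
        fixed += (count == two_n)
    return fixed / trials

print("observed:", fixation_fraction(), "expected 1/(2N):", 1 / 100)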
A Smurf attack is a distributed denial-of-service attack in which large numbers of Internet Control Message Protocol (ICMP) packets with the intended victim's spoofed source IP are broadcast to a computer network using an IP broadcast address. Most devices on a network will, by default, respond to this by sending a reply to the source IP address. If the number of machines on the network that receive and respond to these packets is very large, the victim's computer will be flooded with traffic. This can slow down the victim's computer to the point where it becomes impossible to work on. History The original tool for creating a Smurf attack was written by Dan Moschuk (alias TFreak) in 1997. In the late 1990s, many IP networks would participate in Smurf attacks if prompted (that is, they would respond to ICMP requests sent to broadcast addresses). The name comes from the idea of very small, but numerous attackers overwhelming a much larger opponent (see Smurfs). Today, administrators can make a network immune to such abuse; therefore, very few networks remain vulnerable to Smurf attacks. Method A Smurf amplifier is a computer network that lends itself to being used in a Smurf attack. Smurf amplifiers act to worsen the severity of a Smurf attack because they are configured in such a way that they generate a large number of ICMP replies to the victim at the spoofed source IP address. In DDoS, amplification is the degree of bandwidth enhancement that the original attack traffic undergoes (with the help of Smurf amplifiers) during its transmission towards the victim computer. An amplification factor of 100, for example, means that an attacker could manage to create 100 Mb/s of traffic using just 1 Mb/s of its own bandwidth. Under the assumption that no countermeasures are taken to dampen the effect of a Smurf attack, this is what happens in the target network with n active hosts (that will respond to ICMP echo requests). The ICMP echo request packets have a spoofed source address (the Smurfs' target) and a destination address (the patsy; the apparent source of the attack). Both addresses can take two forms: unicast and broadcast. The dual unicast form is comparable with a regular ping: an ICMP echo request is sent to the patsy (a single host), which sends a single ICMP echo reply (a Smurf) back to the target (the single host in the source address). This type of attack has an amplification factor of 1, which means: just a single Smurf per ping. When the target is a unicast address and the destination is the broadcast address of the target's network, then all hosts in the network will receive an echo request. In return they will each reply to the target, so the target is swamped with n Smurfs. Amplification factor = n. If n is small, a host may be hindered but not crippled. If n is large, a host may come to a halt. If the target is the broadcast address and the patsy a unicast address, each host in the network will receive a single Smurf per ping, so an amplification factor of 1 per host, but a factor of n for the network. Generally, a network would be able to cope with this form of the attack, if n is not too great. When both the source and destination address in the original packet are set to the broadcast address of the target network, things start to get out of hand quickly. All hosts receive an echo request, but all replies to that are broadcast again to all hosts. Each host will receive an initial ping, broadcast the reply and get a reply from all n-1 hosts.
An amplification factor of n for a single host, but an amplification factor of n² for the network. ICMP echo requests are typically sent once a second. The reply should contain the contents of the request; a few bytes, normally. A single (double broadcast) ping to a network with 100 hosts therefore causes the network to process on the order of 100² = 10,000 packets. If the payload of the ping is increased to 15,000 bytes (or 10 full packets in Ethernet), then that one ping per second will cause the network to have to process on the order of 10,000 × 10 = 100,000 large packets per second. Send more packets per second, and any network would collapse under the load. Effect A Smurf attack can overwhelm servers and networks. The bandwidth of the communication network can be exhausted, resulting in the communication network becoming paralyzed. Mitigation The fix is two-fold: Configure hosts and routers to ignore packets where the destination address is a broadcast address; and Configure routers to not forward packets directed to broadcast addresses. Until 1999, standards required routers to forward such packets by default. Since then, the default standard was changed to not forward such packets. It is also important for ISPs to implement ingress filtering, which rejects the attacking packets on the basis of the forged source address. Mitigation on a Cisco router An example of configuring a router so it will not forward packets to broadcast addresses, for a Cisco router, is to disable directed broadcasts on each interface: Router(config-if)# no ip directed-broadcast (This example does not protect a network from becoming the target of a Smurf attack; it merely prevents the network from participating in a Smurf attack.) Fraggle attack A Fraggle attack (named for the creatures in the puppet TV series Fraggle Rock) is a variation of a Smurf attack where an attacker sends a large amount of UDP traffic to ports 7 (Echo) and 19 (CHARGEN). It works similarly to the Smurf attack in that many computers on the network will respond to this traffic by sending traffic back to the spoofed source IP of the victim, flooding it with traffic. The source code of the attack was also released by TFreak. See also Denial-of-service attack Ping flood Smurf Amplifier Registry References External links The Latest In Denial Of Service Attacks: "Smurfing", Craig A. Huegen, 1997.
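To make the amplification arithmetic of the Method section concrete, here is a small illustrative Python sketch written for this article (not an attack tool; the host count is an arbitrary example) that reproduces the approximate packet counts for the four source/destination combinations described above.

def smurf_packets(n_hosts, target_is_broadcast, destination_is_broadcast):
    # Approximate number of packets the network must process for one
    # spoofed ICMP echo request, per the four cases described above.
    if not target_is_broadcast and not destination_is_broadcast:
        return 1                    # dual unicast: a single Smurf per ping
    if not target_is_broadcast and destination_is_broadcast:
        return n_hosts              # n replies converge on the one target
    if target_is_broadcast and not destination_is_broadcast:
        return n_hosts              # one Smurf per host: factor 1 each, n total
    return n_hosts * n_hosts        # double broadcast: n^2 for the network

print(smurf_packets(100, True, True))   # ~10,000 packets from a single ping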
In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after the nineteenth-century German mathematician Bernhard Riemann. One very common application is in numerical integration, i.e., approximating the area of functions or lines on a graph, where it is also known as the rectangle rule. It can also be applied for approximating the length of curves and other approximations. The sum is calculated by partitioning the region into shapes (rectangles, trapezoids, parabolas, or cubics—sometimes infinitesimally small) that together form a region that is similar to the region being measured, then calculating the area for each of these shapes, and finally adding all of these small areas together. This approach can be used to find a numerical approximation for a definite integral even if the fundamental theorem of calculus does not make it easy to find a closed-form solution. Because the region filled by the small shapes is usually not exactly the same shape as the region being measured, the Riemann sum will differ from the area being measured. This error can be reduced by dividing up the region more finely, using smaller and smaller shapes. As the shapes get smaller and smaller, the sum approaches the Riemann integral. Definition Let f be a function defined on a closed interval [a, b] of the real numbers, and let P = {x_0, x_1, …, x_n} be a partition of [a, b], that is a = x_0 < x_1 < x_2 < ⋯ < x_n = b. A Riemann sum S of f over [a, b] with partition P is defined as S = Σ_{i=1}^{n} f(x_i*) Δx_i, where Δx_i = x_i − x_{i−1} and x_i* ∈ [x_{i−1}, x_i]. One might produce different Riemann sums depending on which x_i* are chosen. In the end this will not matter, if the function is Riemann integrable, when the difference or width of the summands Δx_i approaches zero. Types of Riemann sums Specific choices of x_i* give different types of Riemann sums: If x_i* = x_{i−1} for all i, the method is the left rule and gives a left Riemann sum. If x_i* = x_i for all i, the method is the right rule and gives a right Riemann sum. If x_i* = (x_i + x_{i−1})/2 for all i, the method is the midpoint rule and gives a middle Riemann sum. If f(x_i*) = sup f([x_{i−1}, x_i]) (that is, the supremum of f over [x_{i−1}, x_i]), the method is the upper rule and gives an upper Riemann sum or upper Darboux sum. If f(x_i*) = inf f([x_{i−1}, x_i]) (that is, the infimum of f over [x_{i−1}, x_i]), the method is the lower rule and gives a lower Riemann sum or lower Darboux sum. All these Riemann summation methods are among the most basic ways to accomplish numerical integration. Loosely speaking, a function is Riemann integrable if all Riemann sums converge as the partition "gets finer and finer". While not derived as a Riemann sum, taking the average of the left and right Riemann sums is the trapezoidal rule and gives a trapezoidal sum. It is one of the simplest of a very general way of approximating integrals using weighted averages. This is followed in complexity by Simpson's rule and the Newton–Cotes formulas. Any Riemann sum on a given partition (that is, for any choice of x_i* between x_{i−1} and x_i) is contained between the lower and upper Darboux sums. This forms the basis of the Darboux integral, which is ultimately equivalent to the Riemann integral. Riemann summation methods The four Riemann summation methods are usually best approached with subintervals of equal size. The interval [a, b] is therefore divided into n subintervals, each of length Δx = (b − a)/n. The points in the partition will then be a, a + Δx, a + 2Δx, …, a + (n − 1)Δx, b. Left rule For the left rule, the function is approximated by its values at the left endpoints of the subintervals. This gives multiple rectangles with base Δx and height f(a + iΔx).
Doing this for i = 0, 1, …, n − 1, and summing the resulting areas gives A_left = Δx [f(a) + f(a + Δx) + f(a + 2Δx) + ⋯ + f(b − Δx)]. The left Riemann sum amounts to an overestimation if f is monotonically decreasing on this interval, and an underestimation if it is monotonically increasing. The error of this formula will be |∫_a^b f(x) dx − A_left| ≤ M₁(b − a)²/(2n), where M₁ is the maximum value of the absolute value of f′(x) over the interval. Right rule For the right rule, the function is approximated by its values at the right endpoints of the subintervals. This gives multiple rectangles with base Δx and height f(a + iΔx). Doing this for i = 1, 2, …, n, and summing the resulting areas gives A_right = Δx [f(a + Δx) + f(a + 2Δx) + ⋯ + f(b)]. The right Riemann sum amounts to an underestimation if f is monotonically decreasing, and an overestimation if it is monotonically increasing. The error of this formula will be |∫_a^b f(x) dx − A_right| ≤ M₁(b − a)²/(2n), where M₁ is the maximum value of the absolute value of f′(x) over the interval. Midpoint rule For the midpoint rule, the function is approximated by its values at the midpoints of the subintervals. This gives f(a + Δx/2) for the first subinterval, f(a + 3Δx/2) for the next one, and so on until f(b − Δx/2). Summing the resulting areas gives A_mid = Δx [f(a + Δx/2) + f(a + 3Δx/2) + ⋯ + f(b − Δx/2)]. The error of this formula will be |∫_a^b f(x) dx − A_mid| ≤ M₂(b − a)³/(24n²), where M₂ is the maximum value of the absolute value of f″(x) over the interval. This error is half of that of the trapezoidal sum; as such the middle Riemann sum is the most accurate approach to the Riemann sum. Generalized midpoint rule A generalized midpoint rule formula, also known as enhanced midpoint integration, expresses the integral as a series involving the even-order derivatives f^(2m) of the integrand evaluated at the subinterval midpoints. This formula is particularly efficient for numerical integration when the integrand is a highly oscillating function. Trapezoidal rule For the trapezoidal rule, the function is approximated by the average of its values at the left and right endpoints of the subintervals. Using the area formula A = ½h(b₁ + b₂) for a trapezium with parallel sides b₁ and b₂ and height h, and summing the resulting areas gives A_trap = ½Δx [f(a) + 2f(a + Δx) + 2f(a + 2Δx) + ⋯ + f(b)]. The error of this formula will be |∫_a^b f(x) dx − A_trap| ≤ M₂(b − a)³/(12n²), where M₂ is the maximum value of the absolute value of f″(x). The approximation obtained with the trapezoidal sum for a function is the same as the average of the left-hand and right-hand sums of that function. Connection with integration For a one-dimensional Riemann sum over domain [a, b], as the maximum size of a subinterval shrinks to zero (that is, the limit of the norm of the subintervals goes to zero), some functions will have all Riemann sums converge to the same value. This limiting value, if it exists, is defined as the definite Riemann integral of the function over the domain: ∫_a^b f(x) dx = lim_{‖Δx‖→0} Σ_{i=1}^{n} f(x_i*) Δx_i. For a finite-sized domain, if the maximum size of a subinterval shrinks to zero, this implies the number of subintervals goes to infinity. For finite partitions, Riemann sums are always approximations to the limiting value, and this approximation gets better as the partition gets finer. (Animations in the original article demonstrate how increasing the number of subintervals, while lowering the maximum subinterval size, better approximates the "area" under the curve; since the function shown there is smooth, all three Riemann sums converge to the same value as the number of subintervals goes to infinity.) Example Taking an example, the area under a curve y = f(x) over [0, 2] can be procedurally computed using Riemann's method. The interval [0, 2] is firstly divided into n subintervals, each of which is given a width of 2/n; these are the widths of the Riemann rectangles (hereafter "boxes").
Because the right Riemann sum is to be used, the sequence of x coordinates for the boxes will be x_i = 2i/n for i = 1, 2, …, n. Therefore, the sequence of the heights of the boxes will be f(2i/n). The area of each box will be (2/n) · f(2i/n), and therefore the nth right Riemann sum will be S = Σ_{i=1}^{n} (2/n) f(2i/n). For a polynomial integrand, closed-form formulas for sums of powers of integers, such as Σ_{i=1}^{n} i = n(n + 1)/2 and Σ_{i=1}^{n} i² = n(n + 1)(2n + 1)/6, allow this sum to be evaluated exactly. If the limit is viewed as n → ∞, it can be concluded that the approximation approaches the actual value of the area under the curve as the number of boxes increases; the result agrees with the definite integral as calculated in more mechanical ways. Because the function is continuous and monotonically increasing over the interval, a right Riemann sum overestimates the integral by the largest amount (while a left Riemann sum would underestimate the integral by the largest amount). This fact, which is intuitively clear from the diagrams, shows how the nature of the function determines how accurately the integral is estimated. While simple, right and left Riemann sums are often less accurate than more advanced techniques of estimating an integral such as the trapezoidal rule or Simpson's rule. The example function has an easy-to-find antiderivative, so estimating the integral by Riemann sums is mostly an academic exercise; however, it must be remembered that not all functions have antiderivatives, so estimating their integrals by summation is practically important. Higher dimensions The basic idea behind a Riemann sum is to "break up" the domain via a partition into pieces, multiply the "size" of each piece by some value the function takes on that piece, and sum all these products. This can be generalized to allow Riemann sums for functions over domains of more than one dimension. While intuitively, the process of partitioning the domain is easy to grasp, the technical details of how the domain may be partitioned get much more complicated than the one-dimensional case and involve aspects of the geometrical shape of the domain. Two dimensions In two dimensions, the domain Ω may be divided into a number of two-dimensional cells Ω_1, Ω_2, …, Ω_n such that Ω = ⋃_i Ω_i. Each cell then can be interpreted as having an "area" denoted by ΔA_i. The two-dimensional Riemann sum is S = Σ_{i=1}^{n} f(x_i*, y_i*) ΔA_i, where (x_i*, y_i*) ∈ Ω_i. Three dimensions In three dimensions, the domain Ω is partitioned into a number of three-dimensional cells Ω_i such that Ω = ⋃_i Ω_i. Each cell then can be interpreted as having a "volume" denoted by ΔV_i. The three-dimensional Riemann sum is S = Σ_{i=1}^{n} f(x_i*, y_i*, z_i*) ΔV_i, where (x_i*, y_i*, z_i*) ∈ Ω_i. Arbitrary number of dimensions Higher-dimensional Riemann sums follow a similar pattern. An n-dimensional Riemann sum is S = Σ_i f(P_i*) ΔV_i, where P_i* ∈ V_i, that is, it is a point in the n-dimensional cell V_i with n-dimensional volume ΔV_i. Generalization In high generality, Riemann sums can be written S = Σ_i f(P_i*) μ(V_i), where P_i* stands for any arbitrary point contained in the set V_i and μ is a measure on the underlying set. Roughly speaking, a measure is a function that gives a "size" of a set, in this case the size of the set V_i; in one dimension this can often be interpreted as a length, in two dimensions as an area, in three dimensions as a volume, and so on. See also Antiderivative Euler method and midpoint method, related methods for solving differential equations Lebesgue integration Riemann integral, limit of Riemann sums as the partition becomes infinitely fine Simpson's rule, a numerical method more powerful than basic Riemann sums or even the trapezoidal rule Trapezoidal rule, numerical method based on the average of the left and right Riemann sum References External links A simulation showing the convergence of Riemann sums
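The summation rules above are straightforward to implement. The following Python sketch, written for this article, computes the left, right, midpoint and trapezoidal approximations on n equal subintervals; the integrand f(x) = x² on [0, 2], whose exact integral is 8/3, is an assumed example chosen only to show the convergence.

def riemann_sums(f, a, b, n):
    # Left, right, midpoint and trapezoidal approximations of the
    # integral of f over [a, b] using n equal subintervals.
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]
    left = dx * sum(f(x) for x in xs[:-1])
    right = dx * sum(f(x) for x in xs[1:])
    mid = dx * sum(f(x + dx / 2) for x in xs[:-1])
    trap = (left + right) / 2   # the trapezoid rule averages left and right
    return left, right, mid, trap

f = lambda x: x * x             # assumed example integrand; exact value is 8/3
for n in (10, 100, 1000):
    print(n, [round(v, 6) for v in riemann_sums(f, 0.0, 2.0, n)])

As n grows, all four values approach 8/3 ≈ 2.666667, with the midpoint and trapezoidal sums converging fastest, as the error bounds above predict.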
In chemistry, the standard molar entropy is the entropy content of one mole of pure substance at a standard state of pressure and any temperature of interest. These are often (but not necessarily) chosen to be the standard temperature and pressure. The standard molar entropy at standard pressure is usually given the symbol S°, and has units of joules per mole per kelvin (J⋅mol−1⋅K−1). Unlike standard enthalpies of formation, the value of S° is absolute. That is, an element in its standard state has a definite, nonzero value of S° at room temperature. The entropy of a pure crystalline structure can be 0 J⋅mol−1⋅K−1 only at 0 K, according to the third law of thermodynamics. However, this assumes that the material forms a 'perfect crystal' without any residual entropy. Residual entropy can be due to crystallographic defects, dislocations, and/or incomplete rotational quenching within the solid, as originally pointed out by Linus Pauling. These contributions to the entropy are always present, because crystals always grow at a finite rate and at finite temperature. However, the residual entropy is often quite negligible and can be accounted for when it occurs using statistical mechanics. Thermodynamics If a mole of a solid substance is a perfectly ordered solid at 0 K, then if the solid is warmed by its surroundings to 298.15 K without melting, its absolute molar entropy would be the sum of a series of stepwise and reversible entropy changes ΔS = C_p ΔT/T. The limit of this sum as the temperature steps ΔT become infinitesimally small is an integral: S°(298.15 K) = ∫₀^(298.15 K) (C_p/T) dT. Here C_p is the molar heat capacity at constant pressure of the substance in the reversible process between 0 K and 298.15 K. The molar heat capacity is not constant during the experiment, because it changes depending on the (increasing) temperature of the substance. Therefore, a table of values for C_p as a function of temperature is required to find the total molar entropy. The quantity C_p dT/T = dq_rev/T represents the ratio of a very small exchange of heat energy to the temperature T. The total molar entropy is the sum of many small changes in molar entropy, where each small change can be considered a reversible process. Chemistry The standard molar entropy of a gas at STP includes contributions from: The heat capacity of one mole of the solid from 0 K to the melting point (including heat absorbed in any changes between different crystal structures). The latent heat of fusion of the solid. The heat capacity of the liquid from the melting point to the boiling point. The latent heat of vaporization of the liquid. The heat capacity of the gas from the boiling point to room temperature. Changes in entropy are associated with phase transitions and chemical reactions. Chemical equations make use of the standard molar entropies of reactants and products to find the standard entropy of reaction: ΔS°_rxn = Σ S°(products) − Σ S°(reactants). The standard entropy of reaction helps determine whether the reaction will take place spontaneously. According to the second law of thermodynamics, a spontaneous reaction always results in an increase in the total entropy of the system and its surroundings: ΔS_total = ΔS_system + ΔS_surroundings > 0. Molar entropy is not the same for all gases. Under identical conditions, it is greater for a heavier gas. See also Entropy Heat Gibbs free energy Helmholtz free energy Standard state Third law of thermodynamics References External links Table of Standard Thermodynamic Properties for Selected Substances
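A numerical version of the entropy integral above can be sketched as follows in Python (written for this article; the heat-capacity table is an invented placeholder, not measured data). It applies the trapezoidal rule to C_p/T and can add a ΔH/T term for each phase transition, mirroring the list of contributions in the Chemistry section; the example call integrates a solid warmed without melting, matching the Thermodynamics paragraph, and in practice the region near 0 K is handled with a Debye-law extrapolation rather than tabulated data.

def entropy_from_table(temps_k, cp_j_per_mol_k, transitions=()):
    # Trapezoidal integration of Cp/T over the tabulated temperatures,
    # plus dS = dH/T for each phase transition (dH in J/mol, T in K).
    pts = list(zip(temps_k, cp_j_per_mol_k))
    s = 0.0
    for (t1, c1), (t2, c2) in zip(pts, pts[1:]):
        s += 0.5 * (c1 / t1 + c2 / t2) * (t2 - t1)
    for dh, t in transitions:
        s += dh / t
    return s  # J mol^-1 K^-1

temps = [10, 50, 100, 150, 200, 250, 298.15]        # K (placeholder grid)
cps = [2.0, 15.0, 24.0, 27.0, 29.0, 30.0, 31.0]     # J/(mol K) (placeholders)
print(entropy_from_table(temps, cps))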
French materialism is the name given to a handful of French 18th-century philosophers during the Age of Enlightenment, many of them clustered around the salon of Baron d'Holbach. Although there are important differences between them, all of them were materialists who believed that the world was made up of a single substance, matter, the motions and properties of which could be used to explain all phenomena. Prominent French materialists of the 18th century include: Julien Offray de La Mettrie Denis Diderot Baron d'Holbach Claude Adrien Helvétius Pierre Jean Georges Cabanis Jacques-André Naigeon See also Atheism during the Age of Enlightenment German materialism Mechanism (philosophy) Metaphysical naturalism External links Marx's essay on French Materialism on WikiSource
In chronostratigraphy, a stage is a succession of rock strata laid down in a single age on the geologic timescale, which usually represents millions of years of deposition. A given stage of rock and the corresponding age of time will by convention have the same name, and the same boundaries. Rock series are divided into stages, just as geological epochs are divided into ages. Stages are divided into smaller stratigraphic units called chronozones or substages, and added together into superstages. The term faunal stage is sometimes used, referring to the fact that the same fauna (animals) are found throughout the layer (by definition). Definition Stages are primarily defined by a consistent set of fossils (biostratigraphy) or a consistent magnetic polarity (see paleomagnetism) in the rock. Usually one or more index fossils that are common, found worldwide, easily recognized, and limited to a single, or at most a few, stages are used to define the stage's bottom. Thus, for example in the local North American subdivision, a paleontologist finding fragments of the trilobite Olenellus would identify the beds as being from the Waucoban Stage whereas fragments of a later trilobite such as Elrathia would identify the stage as Albertan. Stages were important in the 19th and early 20th centuries as they were the major tool available for dating and correlating rock units prior to the development of seismology and radioactive dating in the second half of the 20th century. Microscopic analysis of the rock (petrology) is also sometimes useful in confirming that a given segment of rock is from a particular age. Originally, faunal stages were only defined regionally. As additional stratigraphic and geochronologic tools were developed, they were defined over ever broader areas. More recently, the adjective "faunal" has been dropped as regional and global correlations of rock sequences have become relatively certain and there is less need for faunal labels to define the age of formations. A tendency developed to use European and, to a lesser extent, Asian stage names for the same time period worldwide, even though the faunas in other regions often had little in common with the stage as originally defined. International standardization Boundaries and names are established by the International Commission on Stratigraphy (ICS) of the International Union of Geological Sciences. As of 2008, the ICS is nearly finished with a task begun in 1974, subdividing the Phanerozoic eonothem into internationally accepted stages using two types of benchmark. For younger stages, a Global Boundary Stratotype Section and Point (GSSP), a physical outcrop clearly demonstrates the boundary. For older stages, a Global Standard Stratigraphic Age (GSSA) is an absolute date. The benchmarks will give a much greater certainty that results can be compared with confidence in the date determinations, and such results will have farther scope than any evaluation based solely on local knowledge and conditions. In many regions local subdivisions and classification criteria are still used along with the newer internationally coordinated uniform system, but once the research establishes a more complete international system, it is expected that local systems will be abandoned. Stages and lithostratigraphy Stages can include many lithostratigraphic units (for example formations, beds, members, etc.) of differing rock types that were being laid down in different environments at the same time. 
In the same way, a lithostratigraphic unit can include a number of stages or parts of them. See also European land mammal age Geologic record Geologic time scale North American land mammal age Type locality (geology) List of geochronologic names List of Global Boundary Stratotype Sections and Points Notes References Hedberg, H.D. (editor), International Stratigraphic Guide: A guide to stratigraphic classification, terminology, and procedure, New York, John Wiley and Sons, 1976. International Stratigraphic Chart from the International Commission on Stratigraphy. External links The Global Boundary Stratotype Section and Point (GSSP): overview. Chart of the Global Boundary Stratotype Sections and Points (GSSP): chart. Geotime chart displaying geologic time periods compared to the fossil record, deals with chronology and classifications for laymen (not GSSPs). Chronostratigraphy.
The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International Mathematical Union (IMU), a meeting that takes place every four years. The name of the award honours the Canadian mathematician John Charles Fields. The Fields Medal is regarded as one of the highest honors a mathematician can receive, and has been described as the Nobel Prize of Mathematics, although there are several major differences, including frequency of award, number of awards, age limits, monetary value, and award criteria. According to the annual Academic Excellence Survey by ARWU, the Fields Medal is consistently regarded as the top award in the field of mathematics worldwide, and in another reputation survey conducted by IREG in 2013–14, the Fields Medal came close behind the Abel Prize as the second most prestigious international award in mathematics. The prize includes a monetary award which, since 2006, has been CA$15,000. Fields was instrumental in establishing the award, designing the medal himself, and funding the monetary component, though he died before it was established and his plan was overseen by John Lighton Synge. The medal was first awarded in 1936 to the Finnish mathematician Lars Ahlfors and the American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 2014, the Iranian mathematician Maryam Mirzakhani became the first female Fields Medalist. In total, 64 people have been awarded the Fields Medal. The most recent group of Fields Medalists received their awards on 5 July 2022 in an online event which was live-streamed from Helsinki, Finland. It was originally meant to be held in Saint Petersburg, Russia, but was moved following the 2022 Russian invasion of Ukraine. Conditions of the award The Fields Medal has for a long time been regarded as the most prestigious award in the field of mathematics and is often described as the Nobel Prize of Mathematics. Unlike the Nobel Prize, the Fields Medal is only awarded every four years. The Fields Medal also has an age limit: a recipient must be under age 40 on 1 January of the year in which the medal is awarded. The under-40 rule is based on Fields's desire that "while it was in recognition of work already done, it was at the same time intended to be an encouragement for further achievement on the part of the recipients and a stimulus to renewed effort on the part of others." Moreover, an individual can only be awarded one Fields Medal; winners are ineligible to be awarded future medals. First awarded in 1936, 64 people have won the medal as of 2022. With the exception of two PhD holders in physics (Edward Witten and Martin Hairer), only people with a PhD in mathematics have won the medal. List of Fields medalists In certain years, the Fields medalists have been officially cited for particular mathematical achievements, while in other years such specificities have not been given. However, in every year that the medal has been awarded, noted mathematicians have lectured at the International Congress of Mathematicians on each medalist's body of work. In the following table, official citations are quoted when possible (namely for the years 1958, 1998, and every year since 2006). For the other years through 1986, summaries of the ICM lectures, as written by Donald Albers, Gerald L. Alexanderson, and Constance Reid, are quoted.
In the remaining years (1990, 1994, and 2002), part of the text of the ICM lecture itself has been quoted. The upcoming awarding of the Fields Medal at the 2026 International Congress of the International Mathematical Union is planned to take place in Philadelphia. Landmarks In 1954, Jean-Pierre Serre became the youngest winner of the Fields Medal, at 27. He retains this distinction. In 1966, Alexander Grothendieck boycotted the ICM, held in Moscow, to protest Soviet military actions taking place in Eastern Europe. Léon Motchane, founder and director of the Institut des Hautes Études Scientifiques, attended and accepted Grothendieck's Fields Medal on his behalf. In 1970, Sergei Novikov, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Nice to receive his medal. In 1978, Grigory Margulis, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Helsinki to receive his medal. The award was accepted on his behalf by Jacques Tits, who said in his address: "I cannot but express my deep disappointment—no doubt shared by many people here—in the absence of Margulis from this ceremony. In view of the symbolic meaning of this city of Helsinki, I had indeed grounds to hope that I would have a chance at last to meet a mathematician whom I know only through his work and for whom I have the greatest respect and admiration." In 1982, the congress was due to be held in Warsaw but had to be rescheduled to the next year, because of martial law introduced in Poland on 13 December 1981. The awards were announced at the ninth General Assembly of the IMU earlier in the year and awarded at the 1983 Warsaw congress. In 1990, Edward Witten became the first physicist to win the award. In 1998, at the ICM, Andrew Wiles was presented by the chair of the Fields Medal Committee, Yuri I. Manin, with the first-ever IMU silver plaque in recognition of his proof of Fermat's Last Theorem. Don Zagier referred to the plaque as a "quantized Fields Medal". Accounts of this award frequently note that at the time Wiles was over the age limit for the Fields Medal. Although Wiles was slightly over the age limit in 1994, he was thought to be a favorite to win the medal; however, a gap (later resolved by Taylor and Wiles) in the proof was found in 1993. In 2006, Grigori Perelman, who proved the Poincaré conjecture, refused his Fields Medal and did not attend the congress. In 2014, Maryam Mirzakhani became the first Iranian as well as the first woman to win the Fields Medal, Artur Avila became the first South American to do so, and Manjul Bhargava became the first person of Indian origin to do so. In 2022, Maryna Viazovska became the first Ukrainian to win the Fields Medal, and June Huh became the first person of Korean ancestry to do so. Medal The medal was designed by Canadian sculptor R. Tait McKenzie. It is made of 14 kt gold, has a diameter of 63.5 mm, and weighs 169 g. On the obverse is Archimedes and a quote attributed to the 1st-century AD poet Manilius, which reads in Latin: "Transire suum pectus mundoque potiri" ("To surpass one's understanding and master the world").
The year number 1933 is written in Roman numerals and contains an error (MCNXXXIII rather than MCMXXXIII). In capital Greek letters the word Ἀρχιμηδους, or "of Archimedes," is inscribed. On the reverse is the inscription "Congregati ex toto orbe mathematici ob scripta insignia tribuere". Translation: "Mathematicians gathered from the entire world have awarded [understood but not written: 'this prize'] for outstanding writings." In the background, there is the representation of Archimedes' tomb, with the carving illustrating his theorem On the Sphere and Cylinder, behind an olive branch. (This is the mathematical result of which Archimedes was reportedly most proud: given a sphere and a circumscribed cylinder of the same height and diameter, the ratio between their volumes is equal to 3/2.) The rim bears the name of the prizewinner. Female recipients The Fields Medal has had two female recipients: Maryam Mirzakhani from Iran in 2014, and Maryna Viazovska from Ukraine in 2022. In popular culture The Fields Medal gained some recognition in popular culture due to references in the 1997 film Good Will Hunting. In the movie, Gerald Lambeau (Stellan Skarsgård) is an MIT professor who won the award prior to the events of the story. Throughout the film, references made to the award are meant to convey its prestige in the field. See also Abel Prize Kyoto Prize List of prizes known as the Nobel or the highest honors of a field List of mathematics awards Nevanlinna Prize Rolf Schock Prizes Turing Award Wolf Prize in Mathematics Notes References Further reading External links Overview at britannica.com
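For completeness, the 3/2 ratio cited in the description of the medal follows from the elementary volume formulas; here is a one-line check in LaTeX notation, added for illustration:

% sphere of radius r inside its circumscribed cylinder (height 2r, radius r)
V_{\mathrm{cyl}} = \pi r^2 \cdot 2r = 2\pi r^3, \qquad
V_{\mathrm{sph}} = \tfrac{4}{3}\pi r^3, \qquad
\frac{V_{\mathrm{cyl}}}{V_{\mathrm{sph}}} = \frac{2\pi r^3}{\tfrac{4}{3}\pi r^3} = \frac{3}{2}.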
Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in infinitely many trials (the long-run probability). Probabilities can be found (in principle) by a repeatable objective process (and are thus ideally devoid of opinion). The continued use of frequentist methods in scientific inference, however, has been called into question. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem; so, for example, the probabilities of dice games arise from the natural symmetric 6-sidedness of the cube. This classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning. Definition In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: it occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation. A claim of the frequentist approach is that, as the number of trials increases, the change in the relative frequency will diminish. Hence, one can view a probability as the limiting value of the corresponding relative frequencies. Scope The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages. As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to misinterpretation, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. So, for example, a list of misinterpretations of the meaning of p-values accompanies the article on p-values; controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result. History The frequentist view may have been foreshadowed by Aristotle, who wrote in Rhetoric that the probable is that which for the most part happens. Poisson (1837) clearly distinguished between objective and subjective probabilities. Soon thereafter a flurry of nearly simultaneous publications by Mill, Ellis (1843) and Ellis (1854), Cournot (1843), and Fries introduced the frequentist view. Venn (1866, 1876, 1888) provided a thorough exposition two decades later. These were further supported by the publications of Boole and Bertrand.
By the end of the 19th century the frequentist interpretation was well established and perhaps dominant in the sciences. The following generation established the tools of classical inferential statistics (significance testing, hypothesis testing and confidence intervals), all based on frequentist probability. Bernoulli had understood the concept of frequentist probability much earlier and published a critical proof (the weak law of large numbers) posthumously (Bernoulli, 1713). He is also credited with some appreciation for subjective probability (prior to and without Bayes' theorem). Gauss and Laplace used frequentist (and other) probability in derivations of the least squares method a century later, a generation before Poisson. Laplace considered the probabilities of testimonies, tables of mortality, judgments of tribunals, etc., which are unlikely candidates for classical probability. In this view, Poisson's contribution was his sharp criticism of the alternative "inverse" (subjective, Bayesian) probability interpretation. Any criticism by Gauss or Laplace was muted and implicit. (However, note that their later derivations of least squares did not use inverse probability.) Major contributors to "classical" statistics in the early 20th century included Fisher, Neyman, and Pearson. Fisher contributed to most of statistics and made significance testing the core of experimental science, although he was critical of the frequentist concept of "repeated sampling from the same population"; Neyman formulated confidence intervals and contributed heavily to sampling theory; Neyman and Pearson paired in the creation of hypothesis testing. All valued objectivity, so the best interpretation of probability available to them was frequentist. All were suspicious of "inverse probability" (the available alternative) with prior probabilities chosen by using the principle of indifference. Fisher said, "... the theory of inverse probability is founded upon an error, [referring to Bayes' theorem] and must be wholly rejected." While Neyman was a pure frequentist, Fisher's views of probability were unique; both Fisher and Neyman had nuanced views of probability. von Mises offered a combination of mathematical and philosophical support for frequentism in the era. Etymology According to the Oxford English Dictionary, the term frequentist was first used by M.G. Kendall in 1949, to contrast with Bayesians, whom he called non-frequentists. Kendall observed: 3. ... we may broadly distinguish two main attitudes. One takes probability as 'a degree of rational belief', or some similar idea...the second defines probability in terms of frequencies of occurrence of events, or by relative proportions in 'populations' or 'collectives'; ... 12. It might be thought that the differences between the frequentists and the non-frequentists (if I may call them such) are largely due to the differences of the domains which they purport to cover. ... I assert that this is not so ... The essential distinction between the frequentists and the non-frequentists is, I think, that the former, in an effort to avoid anything savouring of matters of opinion, seek to define probability in terms of the objective properties of a population, real or hypothetical, whereas the latter do not. [emphasis in original] "The Frequency Theory of Probability" was used a generation earlier as a chapter title in Keynes (1921).
The historical sequence: probability concepts were introduced and much of the mathematics of probability was derived (prior to the 20th century); classical statistical inference methods were developed; and the mathematical foundations of probability were solidified and current terminology was introduced (all in the 20th century). The primary historical sources in probability and statistics did not use the current terminology of classical, subjective (Bayesian), and frequentist probability. Alternative views Probability theory is a branch of mathematics. While its roots reach centuries into the past, it reached maturity with the axioms of Andrey Kolmogorov in 1933. The theory focuses on the valid operations on probability values rather than on the initial assignment of values; the mathematics is largely independent of any interpretation of probability. Applications and interpretations of probability are considered by philosophy, the sciences and statistics. All are interested in the extraction of knowledge from observations—inductive reasoning. There are a variety of competing interpretations; all have problems. The frequentist interpretation does resolve difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the Dutch book argument. Classical probability assigns probabilities based on physical idealized symmetry (dice, coins, cards). The classical definition is at risk of circularity: probabilities are defined by assuming equality of probabilities. In the absence of symmetry the utility of the definition is limited. Subjective (Bayesian) probability (a family of competing interpretations) considers degrees of belief. All practical "subjective" probability interpretations are so constrained to rationality as to avoid most subjectivity. Real subjectivity is repellent to some definitions of science which strive for results independent of the observer and analyst. Other applications of Bayesianism in science (e.g. logical Bayesianism) embrace the inherent subjectivity of many scientific studies and objects and use Bayesian reasoning to place boundaries and context on the influence of subjectivities on all analysis. The historical roots of this concept extended to such non-numeric applications as legal evidence. Propensity probability views probability as a causative phenomenon rather than a purely descriptive or subjective one. Footnotes Citations References Probability interpretations
Frequentist probability
[ "Mathematics" ]
1,794
[ "Probability interpretations" ]
10,890
https://en.wikipedia.org/wiki/Fundamental%20interaction
In physics, the fundamental interactions or fundamental forces are interactions in nature that appear not to be reducible to more basic interactions. There are four fundamental interactions known to exist: gravity, electromagnetism, the weak interaction, and the strong interaction. The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms. Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. Each of the known fundamental interactions can be described mathematically as a field. The gravitational interaction is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics. Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large-scale structures in the universe, such as planets, stars, and galaxies. The historical success of models that show relationships between fundamental interactions has led to efforts to go beyond the Standard Model and combine all four forces into a theory of everything. History Classical theory In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Observing that all objects bearing mass approach at a constant rate, but collide by impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum.
If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.) Standard Model The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting the atom's nucleus, and electrons orbiting it. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse a potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED). The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory. Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by modelling the behaviour of its hypothetical force carrier, the graviton, and achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE).
The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support. Overview of the fundamental interactions In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1/2 (intrinsic angular momentum ±ħ/2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons. The interaction of any pair of fermions in perturbation theory can then be modelled thus: Two fermions go in → interaction by boson exchange → two changed fermions go out. The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1/2 to −1/2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force". According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of electric and magnetic force into electromagnetism, and of the electromagnetic interaction and the weak interaction into the electroweak interaction; see below. Both magnitude ("relative strength") and "range" of the associated potential are meaningful only within a rather complex theoretical framework, and the conceptual scheme they describe remains the subject of ongoing research. The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples. Interactions Gravity Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate. Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions.
Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out. Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters on the large-scale structure of the universe. The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it retards the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high. Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime. Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton. Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible. Proposed extra dimensions could explain why the gravity force is so weak. Electroweak interaction Electromagnetism and weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above unification energy, on the order of 100 GeV, they would merge into a single electroweak force. The electroweak theory is very important for modern cosmology, particularly on how the universe evolved. This is because shortly after the Big Bang, when the temperature was still above approximately 10¹⁵ K, the electromagnetic force and the weak force were still merged as a combined electroweak force.
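The 100 GeV unification scale and the 10¹⁵ K temperature quoted above are two statements of the same threshold, related through the Boltzmann constant by E = k_B T. A quick conversion (illustrative Python; the constant is a standard CODATA value, not from the source) shows the correspondence:

```python
# Convert the electroweak unification energy scale to a temperature
# using E = k_B * T.
K_B_EV_PER_K = 8.617333e-5   # Boltzmann constant in eV/K

energy_ev = 100e9            # 100 GeV expressed in eV
temperature = energy_ev / K_B_EV_PER_K
print(f"T ≈ {temperature:.2e} K")   # ≈ 1.16e+15 K, i.e. ~10^15 K
```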
For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. Electromagnetism Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other. Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together. It is responsible for everyday phenomena like light, magnets, electricity, and friction. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements. In a four kilogram (~1 gallon) jug of water, there is roughly 2 × 10⁸ C of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of roughly 4 × 10²⁶ N. This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out so that for astronomical-scale bodies, gravity dominates. Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes. The constant speed of light in vacuum (customarily denoted with a lowercase letter c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space. In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism. Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under the classical electromagnetic theory; this behavior is necessary for everyday electronic devices such as transistors to function.
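The two-jug estimate given earlier in this section can be reproduced with a back-of-the-envelope calculation. The sketch below is illustrative Python; the constants are standard physical values and the scenario follows the text.

```python
# Estimate the total electron charge in 4 kg of water and the Coulomb
# repulsion between two such charges separated by 1 m.
AVOGADRO   = 6.02214076e23    # 1/mol
E_CHARGE   = 1.602176634e-19  # C, elementary charge
COULOMB_K  = 8.9875517e9      # N·m²/C²
MOLAR_MASS = 0.018015         # kg/mol for H2O

moles_water = 4.0 / MOLAR_MASS            # ~222 mol of molecules
electrons = moles_water * AVOGADRO * 10   # 10 electrons per H2O molecule
charge = electrons * E_CHARGE             # ~2.1e8 C
force = COULOMB_K * charge**2 / 1.0**2    # ~4.1e26 N at r = 1 m

earth_weight = 5.972e24 * 9.81            # ~5.9e25 N
print(f"charge ≈ {charge:.2e} C, force ≈ {force:.2e} N")
print(f"ratio to Earth's weight ≈ {force / earth_weight:.1f}")
```

The computed ratio comes out around 7, consistent with the text's claim that the repulsion is "many times larger" than the weight of the Earth.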
Weak interaction The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT. Strong interaction The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10⁻¹⁵ metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. After the nucleus was discovered in 1911, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10⁻¹⁵ m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV. The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably: The pions were understood to be oscillations of vacuum condensates; Jun John Sakurai proposed the rho and omega vector bosons to be force-carrying particles for approximate symmetries of isospin and hypercharge; Geoffrey Chew, Edward K. Burdett and Steven Frautschi grouped the heavier hadrons into families that could be understood as vibrational and rotational excitations of strings. While each of these approaches offered insights, no approach led directly to a fundamental theory. Murray Gell-Mann along with George Zweig first proposed fractionally charged quarks in 1964. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined. In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks.
A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons. Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions. QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances. Higgs interaction Conventionally, the Higgs interaction is not counted among the four fundamental forces. Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form λᵢ ψ̄ᵢψᵢ H, with the Yukawa coupling λᵢ set by the particle mass mᵢ and the Higgs vacuum expectation value v ≈ 246 GeV (λᵢ = √2 mᵢ/v). Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the Yukawa form V(r) = −(λᵢλⱼ/4π) e^(−m_H r)/r (in natural units), with Higgs mass m_H ≈ 125 GeV. Because the reduced Compton wavelength of the Higgs boson is so small (about 1.6 × 10⁻¹⁸ m, comparable to the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 10¹¹ times weaker than the weak interaction, and grows exponentially weaker at non-zero distances. Beyond the Standard Model The fundamental forces may become unified into a single force at very high energies and on a minuscule scale, the Planck scale. Particle accelerators cannot produce the enormous energies required to experimentally probe this regime. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification. Grand Unified Theories (GUTs) are proposals to show that the three fundamental interactions described by the Standard Model are all different manifestations of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy.
GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as predicting gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces. A so-called theory of everything, which would integrate GUTs with a quantum gravity theory, faces a greater barrier, because no candidate quantum gravity theory, including string theory, loop quantum gravity, and twistor theory, has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it. Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles acquire their masses only through supersymmetry breaking effects and these particles, known as moduli, can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), giving rise to a need to explain a nonzero cosmological constant, and possibly to other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violation, dark matter, and dark flow. See also Quintessence, a hypothesized fifth force Gerardus 't Hooft Edward Witten Howard Georgi References Bibliography Physical phenomena
Fundamental interaction
[ "Physics" ]
5,167
[ "Physical phenomena", "Force", "Physical quantities", "Fundamental interactions", "Particle physics" ]
10,896
https://en.wikipedia.org/wiki/Felix%20Bloch
Felix Bloch (23 October 1905 – 10 September 1983) was a Swiss-American physicist and Nobel physics laureate who worked mainly in the U.S. He and Edward Mills Purcell were awarded the 1952 Nobel Prize for Physics for "their development of new ways and methods for nuclear magnetic precision measurements." In 1954–1955, he served for one year as the first director-general of CERN. Felix Bloch made fundamental theoretical contributions to the understanding of ferromagnetism and electron behavior in crystal lattices. He is also considered one of the developers of nuclear magnetic resonance. Biography Early life, education, and family Bloch was born in Zürich, Switzerland to Jewish parents Gustav and Agnes Bloch. Gustav Bloch, his father, was financially unable to attend university and worked as a wholesale grain dealer in Zürich. Gustav moved to Zürich from Moravia in 1890 to become a Swiss citizen. Their first child was a girl born in 1902; Felix was born three years later. Bloch entered public elementary school at the age of six and is said to have been teased, in part because he "spoke Swiss German with a somewhat different accent than most members of the class". He received support from his older sister during much of this time, but she died at the age of twelve, devastating Felix, who is said to have lived a "depressed and isolated life" in the following years. Bloch learned to play the piano by the age of eight and was drawn to arithmetic for its "clarity and beauty". Bloch graduated from elementary school at twelve and enrolled in the Cantonal Gymnasium in Zürich for secondary school in 1918. He was placed on a six-year curriculum there to prepare him for university. He continued this curriculum through 1924, even during his study of engineering and physics in other schools, though it was limited to mathematics and languages after the first three years. After these first three years at the Gymnasium, at age fifteen, Bloch began to study at the Eidgenössische Technische Hochschule (ETHZ), also in Zürich. Although he initially studied engineering, he soon changed to physics. During this time he attended lectures and seminars given by Peter Debye and Hermann Weyl at ETH Zürich and Erwin Schrödinger at the neighboring University of Zürich. A fellow student in these seminars was John von Neumann. Bloch graduated in 1927, and was encouraged by Debye to go to Leipzig to study with Werner Heisenberg. Bloch became Heisenberg's first graduate student, and gained his doctorate in 1928. His doctoral thesis established the quantum theory of solids, using waves to describe electrons in periodic lattices. On March 14, 1940, Bloch married Lore Clara Misch (1911–1996), a fellow physicist working on X-ray crystallography, whom he had met at an American Physical Society meeting. They had four children: twins George Jacob Bloch and Daniel Arthur Bloch (born January 15, 1941), son Frank Samuel Bloch (born January 16, 1945), and daughter Ruth Hedy Bloch (born September 15, 1949).
Career Bloch remained in European academia, working on superconductivity with Wolfgang Pauli in Zürich; with Hans Kramers and Adriaan Fokker in Holland; with Heisenberg on ferromagnetism, where he developed a description of boundaries between magnetic domains, now known as "Bloch walls", and theoretically proposed the concept of spin waves, excitations of magnetic structure; with Niels Bohr in Copenhagen, where he worked on a theoretical description of the stopping of charged particles traveling through matter; and with Enrico Fermi in Rome. In 1932, Bloch returned to Leipzig to assume a position as "Privatdozent" (lecturer). In 1933, immediately after Hitler came to power, he left Germany because he was Jewish, returning to Zürich, before traveling to Paris to lecture at the Institut Henri Poincaré. In 1934, the chairman of Stanford Physics invited Bloch to join the faculty. Bloch accepted the offer and emigrated to the United States. In the fall of 1938, Bloch began working with the 37-inch cyclotron at the University of California, Berkeley to determine the magnetic moment of the neutron. Bloch went on to become the first professor for theoretical physics at Stanford. In 1939, he became a naturalized citizen of the United States. During World War II, Bloch briefly worked on the atomic bomb project at Los Alamos. Disliking the military atmosphere of the laboratory and uninterested in the theoretical work there, Bloch left to join the radar project at Harvard University. After the war, he concentrated on investigations into nuclear induction and nuclear magnetic resonance, which are the underlying principles of MRI. In 1946 he proposed the Bloch equations, which determine the time evolution of nuclear magnetization. He was elected to the United States National Academy of Sciences in 1948. Along with Edward Purcell, Bloch was awarded the 1952 Nobel Prize in Physics for his work on nuclear magnetic induction. When CERN was being set up in the early 1950s, its founders were searching for someone of stature and international prestige to head the fledgling international laboratory, and in 1954 Professor Bloch became CERN's first director-general, at the time when construction was getting under way on the present Meyrin site and plans for the first machines were being drawn up. After leaving CERN, he returned to Stanford University, where in 1961 he was made Max Stein Professor of Physics. In 1964, he was elected a foreign member of the Royal Netherlands Academy of Arts and Sciences. He was also a member of the American Academy of Arts and Sciences and the American Philosophical Society. Bloch died in Zürich in 1983. See also List of Jewish Nobel laureates List of things named after Felix Bloch Footnotes References Further reading Bloch, F.; Staub, H. "Fission Spectrum", Los Alamos National Laboratory (LANL) (through predecessor agency Los Alamos Scientific Lab), United States Department of Energy (through predecessor agency the US Atomic Energy Commission), (August 18, 1943). External links Oral History interview transcript with Felix Bloch on 14 May 1964, American Institute of Physics, Niels Bohr Library and Archives - interview conducted by Thomas S.
Kuhn in Palo Alto, California Oral History interview transcript with Felix Bloch on 15 August 1968, American Institute of Physics, Niels Bohr Library and Archives - interview conducted by Charles Weiner at Stanford University Oral History interview transcript with Felix Bloch 15 December 1981, American Institute of Physics, Niels Bohr Library and Archives - interview conducted by Lillian Hoddeson at Stanford University Felix Bloch Papers, 1931–1987 (33 linear ft.) are housed in the Department of Special Collections and University Archives at Stanford University Libraries National Academy of Sciences Biographical Memoir Felix Bloch Papers 1905 births 1983 deaths Nobel laureates in Physics Swiss Nobel laureates American Nobel laureates 20th-century American physicists American people of Swiss-Jewish descent Naturalized citizens of the United States People associated with CERN ETH Zurich alumni Experimental physicists Harvard University people Jewish American scientists Jewish American physicists Leipzig University alumni Manhattan Project people Members of the Royal Netherlands Academy of Arts and Sciences Members of the United States National Academy of Sciences Fellows of the American Physical Society Recipients of the Pour le Mérite (civil class) Stanford University Department of Physics faculty 20th-century Swiss Jews Swiss physicists Swiss emigrants to the United States Nuclear magnetic resonance Scientists from Zurich Members of the American Philosophical Society Presidents of the American Physical Society
Felix Bloch
[ "Physics", "Chemistry" ]
1,542
[ "Experimental physics", "Nuclear magnetic resonance", "Experimental physicists", "Nuclear physics" ]
10,902
https://en.wikipedia.org/wiki/Force
A force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. The concept of force makes the everyday notion of pushing or pulling mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F. Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part often applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In equilibrium, these stresses cause no acceleration of the body as the forces balance one another. If these are not in equilibrium, they can cause deformation of solid materials or flow in fluids. In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes. Development of the concept Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved upon for over two hundred years. By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are the strong, electromagnetic, weak, and gravitational interactions. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Pre-Newtonian concepts Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work; the trade-off is quantified in the sketch below.
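A short calculation makes the force-for-distance trade-off concrete. This is an illustrative Python sketch of an ideal lever; the load and arm lengths are invented for the example.

```python
# Ideal lever: work in equals work out, so F_in * d_in = F_out * d_out.
# Mechanical advantage lets a small force move a large load by
# travelling a proportionally greater distance.
load = 600.0                 # N, force needed to lift the load
arm_in, arm_out = 2.0, 0.5   # m, effort arm and load arm lengths

mechanical_advantage = arm_in / arm_out   # = 4.0
effort = load / mechanical_advantage      # = 150 N

d_in = 0.4                               # m, travel of the effort end
d_out = d_in / mechanical_advantage      # m, travel of the load end
print(f"effort = {effort:.0f} N; load rises {d_out:.2f} m per {d_in} m of effort travel")
print(f"work in = {effort * d_in:.1f} J, work out = {load * d_out:.1f} J")  # equal
```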
Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was especially famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion. Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics. In the early 17th century, before Newton's Principia, the term "force" (Latin: vis) was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force). Newtonian mechanics Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches. First law Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion.
An observer moving in tandem with an object will see it as being at rest. So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so. Second law According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion. Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object. A modern statement of Newton's second law is a vector equation: F = dp/dt, where p is the momentum of the system and F is the net (vector sum) force. If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time. In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum, p = mv, where m is the mass and v is the velocity. If Newton's second law is applied to a system of constant mass, m may be moved outside the derivative operator. The equation then becomes F = m dv/dt. By substituting the definition of acceleration, a = dv/dt, the algebraic version of Newton's second law is derived: F = ma. Third law Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if F₁₂ is the force of body 1 on body 2 and F₂₁ that of body 2 on body 1, then F₁₂ = −F₂₁. This law is sometimes referred to as the action-reaction law, with F₁₂ called the action and F₂₁ the reaction. Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body. In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: F₁₂ + F₂₁ = 0. More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system. Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of a system is conserved in any closed system. In a system of two particles, if p₁ is the momentum of object 1 and p₂ the momentum of object 2, then dp₁/dt + dp₂/dt = F₂₁ + F₁₂ = 0. Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained. Defining "force" Some textbooks use Newton's second law as a definition of force.
However, for the equation F = ma for a constant mass m to then have any predictive content, it must be combined with further information. Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference. The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways, which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll. Combining forces Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous. Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action. Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force. As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.
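The parallelogram rule and component resolution described above can be sketched in a few lines of illustrative Python; the two example forces (5 N east plus 5 N north) are invented for the demonstration.

```python
import math

# Two forces acting on a point, given as magnitude (N) and direction
# (degrees counterclockwise from east). Resolve each into orthogonal
# components, sum the components, and convert back to magnitude/direction.
forces = [(5.0, 0.0), (5.0, 90.0)]   # 5 N east plus 5 N north

fx = sum(mag * math.cos(math.radians(ang)) for mag, ang in forces)
fy = sum(mag * math.sin(math.radians(ang)) for mag, ang in forces)

resultant = math.hypot(fx, fy)                 # ≈ 7.07 N, the diagonal
direction = math.degrees(math.atan2(fy, fx))   # 45.0°, i.e. northeast
print(f"net force ≈ {resultant:.2f} N at {direction:.1f}°")
```

Because the components are orthogonal, the sums along each axis are independent scalar additions, which is exactly why component resolution is often more convenient than working with magnitudes and directions directly.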
Equilibrium When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium. Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa. Static Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them. The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration. Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the surface. For a situation with no movement, the static friction force exactly balances the applied force, resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object. A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion. Dynamic Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion in a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship.
Dynamic

Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion in a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship.

When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity. Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.

Examples of forces in classical mechanics

Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body.

Gravitational force or Gravity

What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as $\vec g$ and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass $m$ will experience a force:

$\vec F = m\vec g$

For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.

Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion. Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law.
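Newton's consistency check can be reproduced in a few lines; in this sketch the astronomical constants are rounded modern reference values (assumptions for illustration):

```python
import math

g = 9.81             # surface gravitational acceleration, m/s^2
R_earth = 6.371e6    # Earth radius, m
r_moon = 3.844e8     # mean Earth-Moon distance, m
T = 27.32 * 86400    # sidereal month, s

# Inverse-square prediction: surface g scaled by (R_earth / r_moon)^2.
predicted = g * (R_earth / r_moon) ** 2
# Observed centripetal acceleration of the Moon: a = 4*pi^2*r / T^2.
observed = 4 * math.pi ** 2 * r_moon / T ** 2

print(f"predicted {predicted:.2e} m/s^2, observed {observed:.2e} m/s^2")
# Both come out near 2.7e-3 m/s^2, consistent with the inverse-square law.
```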
Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass ($m_\oplus$) and the radius ($R_\oplus$) of the Earth to the gravitational acceleration:

$\vec g = -\frac{G m_\oplus}{R_\oplus^2}\,\hat r$

where $\hat r$ is the unit vector directed outward from the center of the Earth. In this equation, a dimensional constant $G$ is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of $G$ using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing $G$ could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass $m_1$ due to the gravitational pull of mass $m_2$ is

$\vec F = -\frac{G m_1 m_2}{r^2}\,\hat r$

where $r$ is the distance between the two objects' centers of mass and $\hat r$ is the unit vector pointed in the direction away from the center of the first object toward the center of the second object. This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.

Electromagnetic

The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement. Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as

$\vec E = \frac{\vec F}{q}$

where $q$ is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge $q$ due to electric and magnetic fields:

$\vec F = q(\vec E + \vec v \times \vec B)$

where $\vec F$ is the electromagnetic force, $\vec E$ is the electric field at the body's location, $\vec B$ is the magnetic field, and $\vec v$ is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.
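A minimal sketch of the Lorentz force law as stated above (the helper names and sample values are illustrative assumptions):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, B, v):
    """F = q(E + v x B), with fields and velocity as 3-tuples in SI units."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

# A proton moving along +x through a magnetic field along +z is deflected
# along -y by the magnetic (cross-product) term.
q_proton = 1.602e-19  # C
print(lorentz_force(q_proton, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (1e5, 0.0, 0.0)))
```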
These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum. Normal When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects. The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface. Friction Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction. The static friction force () will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction () multiplied by the normal force (). In other words, the magnitude of the static friction force satisfies the inequality: The kinetic friction force () is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals: where is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction. Tension Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine. Spring A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. 
Spring

A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If $\Delta x$ is the displacement, the force exerted by an ideal spring equals:

$\vec F = -k \Delta \vec x$

where $k$ is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.

Centripetal

For an object in uniform circular motion, the net force acting on the object equals:

$\vec F = -\frac{m v^2}{r}\,\hat r$

where $m$ is the mass of the object, $v$ is the velocity of the object, $r$ is the distance to the center of the circular path, and $\hat r$ is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.

Continuum mechanics

Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows:

$\vec F = -V\,\vec\nabla P$

where $V$ is the volume of the object in the fluid and $P$ is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.

A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction:

$\vec F_d = -b \vec v$

where $b$ is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and $\vec v$ is the velocity of the object.

More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as

$\sigma = \frac{F}{A}$

where $A$ is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.
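The drag law above leads to a simple dynamic equilibrium: a body falling through a viscous fluid approaches the terminal velocity at which gravity and drag cancel. A hedged sketch (Euler integration with illustrative constants, not a reference implementation):

```python
# Integrate m*dv/dt = m*g - b*v with a simple Euler step; v approaches
# the terminal value m*g/b, where weight and Stokes-like drag balance.
m, g, b = 0.01, 9.81, 0.05   # kg, m/s^2, kg/s (illustrative values)
v, dt = 0.0, 0.001
for _ in range(20000):       # 20 s of simulated time, ~100 time constants
    v += (g - (b / m) * v) * dt

print(f"simulated v = {v:.3f} m/s, terminal v = {m * g / b:.3f} m/s")
```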
Fictitious

There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating. Because these forces are not genuine they are also referred to as "pseudo forces". In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry.

Concepts derived from force

Rotation and torque

Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force is defined relative to an arbitrary reference point as the cross product:

$\vec\tau = \vec r \times \vec F$

where $\vec r$ is the position vector of the force application point relative to the reference point. Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:

$\vec\tau = I \vec\alpha$

where $I$ is the moment of inertia of the body and $\vec\alpha$ is the angular acceleration of the body. This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation. Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque:

$\vec\tau = \frac{d\vec L}{dt}$

where $\vec L$ is the angular momentum of the particle. Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.

Yank

The yank is defined as the rate of change of force:

$\vec Y = \frac{d\vec F}{dt}$

The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used.

Kinematic integrals

Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:

$\vec J = \int_{t_1}^{t_2} \vec F \, dt$

which by Newton's Second Law must be equivalent to the change in momentum (yielding the impulse-momentum theorem). Similarly, integrating with respect to position gives a definition for the work done by a force:

$W = \int_C \vec F \cdot d\vec x$

which is equivalent to changes in kinetic energy (yielding the work-energy theorem). Power $P$ is the rate of change $dW/dt$ of the work $W$, as the trajectory is extended by a position change $d\vec x$ in a time interval $dt$:

$dW = \vec F \cdot d\vec x$, so $P = \vec F \cdot \vec v$

with $\vec v$ the velocity.

Potential energy

Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field $U(\vec r)$ is defined as that field whose gradient is equal and opposite to the force produced at every point:

$\vec F = -\vec\nabla U$
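The gradient relation can be checked numerically with a central difference; in this sketch the potential is the ideal-spring potential $U(x) = \frac{1}{2}kx^2$, whose exact force is $F = -kx$ (the function name is an illustrative choice):

```python
# Numerical check of F = -dU/dx via a central difference.
def force_from_potential(U, x, h=1e-6):
    return -(U(x + h) - U(x - h)) / (2 * h)

k = 50.0                          # spring constant, N/m
U = lambda x: 0.5 * k * x ** 2    # spring potential energy, J
print(force_from_potential(U, 0.1))   # ~ -5.0 N, matching -k*x
```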
Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.

Conservation

A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.

Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector $\vec r$ emanating from spherically symmetric potentials. Examples of this follow:

For gravity:

$\vec F_g = -\frac{G m_1 m_2}{r^2}\,\hat r$

where $G$ is the gravitational constant, and $m_n$ is the mass of object n.

For electrostatic forces:

$\vec F_e = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}\,\hat r$

where $\varepsilon_0$ is the electric permittivity of free space, and $q_n$ is the electric charge of object n.

For spring forces:

$\vec F_s = -k \Delta \vec x$

where $k$ is the spring constant.

For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces are the net results of the gradients of microscopic potentials. The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.

Units

The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s⁻². The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s⁻². A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s⁻². The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf), sometimes called the kilopond, is the force exerted by standard gravity on one kilogram of mass.
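The unit definitions above pin down the conversion factors; this sketch derives them from the definitions rather than quoting them (the helper name is an illustrative choice):

```python
# Convert a force in newtons into the other units discussed above, using
# g0 = 9.80665 m/s^2 (standard gravity) and 1 lb = 0.45359237 kg.
G0 = 9.80665
LB = 0.45359237
FT = 0.3048

def newtons_to(unit, F):
    return {
        "dyne":    F * 1e5,         # 1 N = 100,000 dyn
        "kgf":     F / G0,          # kilogram-force
        "lbf":     F / (LB * G0),   # pound-force
        "poundal": F / (LB * FT),   # 1 lb accelerated at 1 ft/s^2
    }[unit]

for u in ("dyne", "kgf", "lbf", "poundal"):
    print(u, round(newtons_to(u, 1.0), 5))
```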
The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s⁻² when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system and is generally deprecated, though it is sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque. See also Ton-force.

Revisions of the force concept

At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly.

Special theory of relativity

In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's Second Law, $\vec F = \frac{d\vec p}{dt}$, remains valid because it is a mathematical definition. But for momentum to be conserved at relativistic relative velocity $v$, momentum must be redefined as:

$\vec p = \frac{m_0 \vec v}{\sqrt{1 - v^2/c^2}}$

where $m_0$ is the rest mass and $c$ the speed of light. The expression relating force and acceleration for a particle with constant non-zero rest mass $m_0$ moving in the $x$ direction at velocity $v$ is:

$F = \gamma^3 m_0 a$

where $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, a greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach $c$. If $v$ is very small compared to $c$, then $\gamma$ is very close to 1 and $F = m_0 a$ is a close approximation. Even for use in relativity, one can restore the form of

$F^\mu = m A^\mu$

through the use of four-vectors. This relation is correct in relativity when $F^\mu$ is the four-force, $m$ is the invariant mass, and $A^\mu$ is the four-acceleration. The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below.
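Before leaving special relativity, the steep growth of the Lorentz factor is easy to tabulate (a minimal sketch; the sampled speeds are chosen fractions of $c$):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for frac in (0.01, 0.5, 0.9, 0.99, 0.999):
    v = frac * C
    # Relativistic momentum per unit rest mass: p / m0 = gamma * v.
    print(f"v = {frac:5.3f}c  gamma = {gamma(v):8.3f}  p/m0 = {gamma(v) * v:.3e}")
```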
Quantum mechanics

Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence. In quantum mechanics, interactions are typically described in terms of energy rather than force. The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical.

In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.

Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force gives atoms, molecules, liquids, and solids stability.

Quantum field theory

In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions". While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force.

Fundamental interactions

All of the known forces of the universe are classified into four fundamental interactions.
The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.

The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation.

Gravitational

Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact. Since then, general relativity has been acknowledged as the theory that best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path.
Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with a radius of curvature of the order of a few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force".

Electromagnetic

Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (or QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force.

Strong nuclear

There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons. The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual of this force is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.

Weak nuclear

Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices": charged current, involving the electrically charged W⁺ and W⁻ bosons, and neutral current, involving electrically neutral Z⁰ bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some $10^{13}$ times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately $10^{15}$ kelvins. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.
Functional group
In organic chemistry, a functional group is a substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions. The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. This enables systematic prediction of chemical reactions and behavior of chemical compounds and the design of chemical synthesis. The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis.

A functional group is a group of atoms in a molecule with distinctive chemical properties, regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds. For repeating units of polymers, functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged, e.g. in carboxylate salts (−COO⁻), which turns the molecule into a polyatomic ion or a complex ion. Functional groups binding to a central atom in a coordination complex are called ligands. Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb "like dissolves like", it is the shared or mutually well-interacting functional groups which give rise to solubility. For example, sugar dissolves in water because both share the hydroxyl functional group (−OH) and hydroxyls interact strongly with each other. Moreover, when functional groups are more electronegative than the atoms they attach to, the functional groups will become polar, and the otherwise nonpolar molecules containing these functional groups become polar and so become soluble in some aqueous environment.

Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds. In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon; the second, the beta carbon; the third, the gamma carbon, etc. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers, for example, isopropanol (IUPAC name: propan-2-ol) is an isomer of n-propanol (propan-1-ol).

The term moiety has some overlap with the term "functional group". However, a moiety is an entire "half" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an "aryl moiety" may be any group containing an aromatic ring, regardless of how many functional groups it has.

Table of common functional groups

The following is a list of common functional groups. In the formulas, the symbols R and R' usually denote an attached hydrogen, or a hydrocarbon side chain of any length, but may sometimes refer to any group of atoms.

Hydrocarbons

Hydrocarbons are a class of molecule that is defined by functional groups called hydrocarbyls that contain only carbon and hydrogen, but vary in the number and order of double bonds. Each one differs in type (and scope) of reactivity.
There are also a large number of branched or ring alkanes that have specific names, e.g., tert-butyl, bornyl, cyclohexyl, etc. There are several functional groups that contain a carbon–carbon double bond, such as the vinyl group, allyl group, or acrylic group. Hydrocarbons may form charged structures: positively charged carbocations or negative carbanions. Carbocations are often named with the suffix -ium. Examples are the tropylium and triphenylmethyl cations and the cyclopentadienyl anion.

Groups containing halogen

Haloalkanes are a class of molecule that is defined by a carbon–halogen bond. This bond can be relatively weak (in the case of an iodoalkane) or quite stable (as in the case of a fluoroalkane). In general, with the exception of fluorinated compounds, haloalkanes readily undergo nucleophilic substitution reactions or elimination reactions. The substitution on the carbon, the acidity of an adjacent proton, the solvent conditions, etc. all can influence the outcome of the reactivity.

Groups containing oxygen

Compounds that contain C–O bonds each possess differing reactivity based upon the location and hybridization of the C–O bond, owing to the electron-withdrawing effect of sp²-hybridized oxygen (carbonyl groups) and the donating effects of sp³-hybridized oxygen (alcohol groups).

Groups containing nitrogen

Compounds that contain nitrogen in this category may also contain C–O bonds, such as in the case of amides.

Groups containing sulfur

Compounds that contain sulfur exhibit unique chemistry due to sulfur's ability to form more bonds than oxygen, its lighter analogue on the periodic table. Substitutive nomenclature (marked as prefix in table) is preferred over functional class nomenclature (marked as suffix in table) for sulfides, disulfides, sulfoxides and sulfones.

Groups containing phosphorus

Compounds that contain phosphorus exhibit unique chemistry due to the ability of phosphorus to form more bonds than nitrogen, its lighter analogue on the periodic table.

Groups containing boron

Compounds containing boron exhibit unique chemistry due to their having partially filled octets and therefore acting as Lewis acids.

Groups containing metals

Fluorine is too electronegative to be bonded to magnesium; it becomes an ionic salt instead.

Names of radicals or moieties

These names are used to refer to the moieties themselves or to radical species, and also to form the names of halides and substituents in larger molecules. When the parent hydrocarbon is unsaturated, the suffix ("-yl", "-ylidene", or "-ylidyne") replaces "-ane" (e.g. "ethane" becomes "ethyl"); otherwise, the suffix replaces only the final "-e" (e.g. "ethyne" becomes "ethynyl"). When used to refer to moieties, multiple single bonds differ from a single multiple bond. For example, a methylene bridge (methanediyl) has two single bonds, whereas a methylidene group has one double bond. Suffixes can be combined, as in methylidyne (triple bond) vs. methylylidene (single bond and double bond) vs. methanetriyl (three single bonds). There are some retained names, such as methylene for methanediyl, 1,x-phenylene for phenyl-1,x-diyl (where x is 2, 3, or 4), carbyne for methylidyne, and trityl for triphenylmethyl.
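The predictive value of functional groups lends itself to code: substructure searches match group patterns regardless of the rest of the molecule. Below is a hedged sketch using the open-source RDKit toolkit (assumed installed; the SMARTS patterns are common illustrative encodings, not the only possible ones):

```python
from rdkit import Chem

# SMARTS patterns for a few functional groups (illustrative choices).
PATTERNS = {
    "hydroxyl":        Chem.MolFromSmarts("[OX2H]"),
    "carboxylic acid": Chem.MolFromSmarts("C(=O)[OX2H1]"),
    "primary amine":   Chem.MolFromSmarts("[NX3;H2]"),
}

# gamma-aminobutyric acid: an amine at one end, a carboxylic acid at the other.
mol = Chem.MolFromSmiles("NCCCC(=O)O")
for name, pattern in PATTERNS.items():
    print(name, mol.HasSubstructMatch(pattern))
```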
Fractal
In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory.

One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere). However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension).

Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line.

Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century, followed by a burgeoning of interest in fractals and computer-based modelling.

There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. That's fractals." More formally, in 1982 Mandelbrot defined fractal as follows: "A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension." Later, seeing this as too restrictive, he simplified and expanded the definition to this: "A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole." Still later, Mandelbrot proposed "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants". The consensus among mathematicians is that theoretical fractals are infinitely self-similar iterated and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time.
Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, and architecture. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes (typically either as attractors or as boundaries between basins of attraction).

Etymology

The term "fractal" was coined by the mathematician Benoît Mandelbrot in 1975. Mandelbrot based it on the Latin frāctus, meaning "broken" or "fractured", and used it to extend the concept of theoretical fractional dimensions to geometric patterns in nature.

Introduction

The word "fractal" often has different connotations for mathematicians and the general public, where the public is more likely to be familiar with fractal art than the mathematical concept. The mathematical concept is difficult to define formally, even for mathematicians, but key features can be understood with a little mathematical background. The feature of "self-similarity", for instance, is easily understood by analogy to zooming in with a lens or other device that zooms in on digital images to uncover finer, previously invisible, new structure. If this is done on fractals, however, no new detail appears; nothing changes and the same pattern repeats over and over, or for some fractals, nearly the same pattern reappears over and over. Self-similarity itself is not necessarily counter-intuitive (e.g., people have pondered self-similarity informally such as in the infinite regress in parallel mirrors or the homunculus, the little man inside the head of the little man inside the head ...). The difference for fractals is that the pattern reproduced must be detailed.

This idea of being detailed relates to another feature that can be understood without much mathematical background: having a fractal dimension greater than its topological dimension, for instance, refers to how a fractal scales compared to how geometric shapes are usually perceived. A straight line, for instance, is conventionally understood to be one-dimensional; if such a figure is rep-tiled into pieces each 1/3 the length of the original, then there are always three equal pieces. A solid square is understood to be two-dimensional; if such a figure is rep-tiled into pieces each scaled down by a factor of 1/3 in both dimensions, there are a total of $3^2 = 9$ pieces. We see that for ordinary self-similar objects, being n-dimensional means that when it is rep-tiled into pieces each scaled down by a scale-factor of $1/r$, there are a total of $r^n$ pieces. Now, consider the Koch curve. It can be rep-tiled into four sub-copies, each scaled down by a scale-factor of 1/3. So, strictly by analogy, we can consider the "dimension" of the Koch curve as being the unique real number $D$ that satisfies $3^D = 4$. This number is called the fractal dimension of the Koch curve; it is not the conventionally perceived dimension of a curve. In general, a key property of fractals is that the fractal dimension differs from the conventionally understood dimension (formally called the topological dimension).
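Numerically, the scaling argument amounts to solving $r^D = N$ for $D$, i.e. $D = \log N / \log r$; a minimal sketch (the function name is an illustrative choice):

```python
import math

# For a shape cut into N self-similar copies, each scaled down by 1/r,
# the similarity dimension D solves r**D == N.
def similarity_dimension(n_pieces, scale_factor):
    return math.log(n_pieces) / math.log(scale_factor)

print(similarity_dimension(3, 3))  # straight line:  1.0
print(similarity_dimension(9, 3))  # solid square:   2.0
print(similarity_dimension(4, 3))  # Koch curve:     ~1.2619 (solves 3**D = 4)
```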
This also leads to understanding a third feature, that fractals as mathematical equations are "nowhere differentiable". In a concrete sense, this means fractals cannot be measured in traditional ways. To elaborate, in trying to find the length of a wavy non-fractal curve, one could find straight segments of some measuring tool small enough to lay end to end over the waves, where the pieces could get small enough to be considered to conform to the curve in the normal manner of measuring with a tape measure. But in measuring an infinitely "wiggly" fractal curve such as the Koch snowflake, one would never find a small enough straight segment to conform to the curve, because the jagged pattern would always re-appear, at arbitrarily small scales, essentially pulling a little more of the tape measure into the total length measured each time one attempted to fit it tighter and tighter to the curve. The result is that one would need an infinite tape to perfectly cover the entire curve, i.e. the snowflake has an infinite perimeter.

History

The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics, with several notable people contributing canonical fractal forms along the way. A common theme in traditional African architecture is the use of fractal scaling, whereby small parts of the structure tend to look similar to larger parts, such as a circular village made of circular houses. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense). In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them. Indeed, according to various historical accounts, after that point few mathematicians tackled the issues and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters". Thus, it was not until two centuries had passed that, on July 18, 1872, Karl Weierstrass presented at the Royal Prussian Academy of Sciences the first definition of a function with a graph that would today be considered a fractal, having the non-intuitive property of being everywhere continuous but nowhere differentiable. In addition, the difference quotient becomes arbitrarily large as the summation index increases. Not long after that, in 1883, Georg Cantor, who attended lectures by Weierstrass, published examples of subsets of the real line known as Cantor sets, which had unusual properties and are now recognized as fractals. Also in the last part of that century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called "self-inverse" fractals. One of the next milestones came in 1904, when Helge von Koch, extending ideas of Poincaré and dissatisfied with Weierstrass's abstract and analytic definition, gave a more geometric definition including hand-drawn images of a similar function, which is now called the Koch snowflake. Another milestone came a decade later in 1915, when Wacław Sierpiński constructed his famous triangle and then, one year later, his carpet.
By 1918, two French mathematicians, Pierre Fatou and Gaston Julia, though working independently, arrived essentially simultaneously at results describing what is now seen as fractal behaviour associated with mapping complex numbers and iterative functions and leading to further ideas about attractors and repellors (i.e., points that attract or repel other points), which have become very important in the study of fractals. Very shortly after that work was submitted, by March 1918, Felix Hausdorff expanded the definition of "dimension", significantly for the evolution of the definition of fractals, to allow for sets to have non-integer dimensions. The idea of self-similar curves was taken further by Paul Lévy, who, in his 1938 paper Plane or Space Curves and Surfaces Consisting of Parts Similar to the Whole, described a new fractal curve, the Lévy C curve.

Different researchers have postulated that without the aid of modern computer graphics, early investigators were limited to what they could depict in manual drawings, so lacked the means to visualize the beauty and appreciate some of the implications of many of the patterns they had discovered (the Julia set, for instance, could only be visualized through a few iterations as very simple drawings). That changed, however, in the 1960s, when Benoit Mandelbrot started writing about self-similarity in papers such as How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension, which built on earlier work by Lewis Fry Richardson. In 1975, Mandelbrot solidified hundreds of years of thought and mathematical development in coining the word "fractal" and illustrated his mathematical definition with striking computer-constructed visualizations. These images, such as of his canonical Mandelbrot set, captured the popular imagination; many of them were based on recursion, leading to the popular meaning of the term "fractal". In 1980, Loren Carpenter gave a presentation at SIGGRAPH where he introduced his software for generating and rendering fractally generated landscapes.

Definition and characteristics

One often-cited description that Mandelbrot published to describe geometric fractals is "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole"; this is generally helpful but limited. Authors disagree on the exact definition of fractal, but most usually elaborate on the basic ideas of self-similarity and the unusual relationship fractals have with the space they are embedded in. One point agreed on is that fractal patterns are characterized by fractal dimensions, but whereas these numbers quantify complexity (i.e., changing detail with changing scale), they neither uniquely describe nor specify details of how to construct particular fractal patterns. In 1975 when Mandelbrot coined the word "fractal", he did so to denote an object whose Hausdorff–Besicovitch dimension is greater than its topological dimension. However, this requirement is not met by space-filling curves such as the Hilbert curve. Because of the trouble involved in finding one definition for fractals, some argue that fractals should not be strictly defined at all.
According to Falconer, fractals should be only generally characterized by a gestalt of the following features:

Self-similarity, which may include:
Exact self-similarity: identical at all scales, such as the Koch snowflake.
Quasi self-similarity: approximates the same pattern at different scales; may contain small copies of the entire fractal in distorted and degenerate forms; e.g., the Mandelbrot set's satellites are approximations of the entire set, but not exact copies.
Statistical self-similarity: repeats a pattern stochastically so numerical or statistical measures are preserved across scales; e.g., randomly generated fractals like the well-known example of the coastline of Britain, for which one would not expect to find a segment scaled and repeated as neatly as the repeated unit that defines fractals like the Koch snowflake.
Qualitative self-similarity: as in a time series.
Multifractal scaling: characterized by more than one fractal dimension or scaling rule.

Fine or detailed structure at arbitrarily small scales. A consequence of this structure is fractals may have emergent properties (related to the next criterion in this list).

Irregularity locally and globally that cannot easily be described in the language of traditional Euclidean geometry other than as the limit of a recursively defined sequence of stages. For images of fractal patterns, this has been expressed by phrases such as "smoothly piling up surfaces" and "swirls upon swirls"; see Common techniques for generating fractals.

As a group, these criteria form guidelines for excluding certain cases, such as those that may be self-similar without having other typically fractal features. A straight line, for instance, is self-similar but not fractal because it lacks detail, and is easily described in Euclidean language without a need for recursion.

Common techniques for generating fractals

Images of fractals can be created by fractal generating programs. Because of the butterfly effect, a small change in a single variable can have an unpredictable outcome.

Iterated function systems (IFS) – use fixed geometric replacement rules; may be stochastic or deterministic; e.g., Koch snowflake, Cantor set, Haferman carpet, Sierpinski carpet, Sierpinski gasket, Peano curve, Harter-Heighway dragon curve, T-square, Menger sponge.
Strange attractors – use iterations of a map or solutions of a system of initial-value differential or difference equations that exhibit chaos (e.g., see multifractal image, or the logistic map).
L-systems – use string rewriting; may resemble branching patterns, such as in plants, biological cells (e.g., neurons and immune system cells), blood vessels, pulmonary structure, etc., or turtle graphics patterns such as space-filling curves and tilings.
Escape-time fractals – use a formula or recurrence relation at each point in a space (such as the complex plane); usually quasi-self-similar; also known as "orbit" fractals; e.g., the Mandelbrot set, Julia set, Burning Ship fractal, Nova fractal and Lyapunov fractal. The 2d vector fields that are generated by one or two iterations of escape-time formulae also give rise to a fractal form when points (or pixel data) are passed through this field repeatedly.
Random fractals – use stochastic rules; e.g., Lévy flight, percolation clusters, self avoiding walks, fractal landscapes, trajectories of Brownian motion and the Brownian tree (i.e., dendritic fractals generated by modeling diffusion-limited aggregation or reaction-limited aggregation clusters).
- Finite subdivision rules – use a recursive topological algorithm for refining tilings; they are similar to the process of cell division. The iterative processes used in creating the Cantor set and the Sierpinski carpet are examples of finite subdivision rules, as is barycentric subdivision.
Applications Simulated fractals Fractal patterns have been modeled extensively, albeit within a range of scales rather than infinitely, owing to the practical limits of physical time and space. Models may simulate theoretical fractals or natural phenomena with fractal features. The outputs of the modelling process may be highly artistic renderings, outputs for investigation, or benchmarks for fractal analysis. Some specific applications of fractals to technology are listed elsewhere. Images and other outputs of modelling are normally referred to as being "fractals" even if they do not have strictly fractal characteristics, such as when it is possible to zoom into a region of the fractal image that does not exhibit any fractal properties. Also, these may include calculation or display artifacts which are not characteristics of true fractals. Modeled fractals may be sounds, digital images, electrochemical patterns, circadian rhythms, etc. Fractal patterns have been reconstructed in physical 3-dimensional space and virtually, often called "in silico" modeling. Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above. As one illustration, trees, ferns, cells of the nervous system, blood and lung vasculature, and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques. The recursive nature of some patterns is obvious in certain examples—a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects, such as coastlines and mountains. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms.
Natural phenomena with fractal features Approximate fractals found in nature display self-similarity over extended, but finite, scale ranges. The connection between fractals and leaves, for instance, is currently being used to determine how much carbon is contained in trees. Phenomena known to have fractal features include the actin cytoskeleton, algae, animal coloration patterns, blood vessels and pulmonary vessels, Brownian motion (generated by a one-dimensional Wiener process), clouds and rainfall areas, coastlines, craters, crystals, DNA, dust grains, earthquakes, fault lines, geometrical optics, heart rates, heart sounds, lake shorelines and areas, lightning bolts, mountain-goat horns, neurons, polymers, percolation, mountain ranges, ocean waves, pineapples, proteins, psychedelic experience, Purkinje cells, the rings of Saturn, river networks, Romanesco broccoli, snowflakes, soil pores, surfaces in turbulent flows, and trees.
Fractals in cell biology Fractals often appear in the realm of living organisms, where they arise through branching processes and other complex pattern formation. Ian Wong and co-workers have shown that migrating cells can form fractals by clustering and branching. Nerve cells function through processes at the cell surface, and such phenomena are enhanced by a large increase in the surface-to-volume ratio.
As a consequence, nerve cells are often found to form fractal patterns. These processes are crucial in cell physiology and different pathologies. Multiple subcellular structures also are found to assemble into fractals. Diego Krapf has shown that through branching processes the actin filaments in human cells assemble into fractal patterns. Similarly, Matthias Weiss showed that the endoplasmic reticulum displays fractal features. The current understanding is that fractals are ubiquitous in cell biology, from proteins, to organelles, to whole cells.
In creative works Since 1999, numerous scientific groups have performed fractal analysis on over 50 paintings created by Jackson Pollock by pouring paint directly onto horizontal canvasses. Recently, fractal analysis has been used to achieve a 93% success rate in distinguishing real from imitation Pollocks. Cognitive neuroscientists have shown that Pollock's fractals induce the same stress reduction in observers as computer-generated fractals and nature's fractals.
Decalcomania, a technique used by artists such as Max Ernst, can produce fractal-like patterns. It involves pressing paint between two surfaces and pulling them apart.
Cyberneticist Ron Eglash has suggested that fractal geometry and mathematics are prevalent in African art, games, divination, trade, and architecture. Circular houses appear in circles of circles, rectangular houses in rectangles of rectangles, and so on. Such scaling patterns can also be found in African textiles, sculpture, and even cornrow hairstyles. Hokky Situngkir also suggested similar properties in Indonesian traditional art, batik, and ornaments found in traditional houses. Eglash has also discussed the planned layout of Benin city using fractals as the basis, not only in the city itself and the villages but even in the rooms of houses. He commented that "When Europeans first came to Africa, they considered the architecture very disorganised and thus primitive. It never occurred to them that the Africans might have been using a form of mathematics that they hadn't even discovered yet."
In a 1996 interview with Michael Silverblatt, David Foster Wallace explained that the structure of the first draft of Infinite Jest he gave to his editor Michael Pietsch was inspired by fractals, specifically the Sierpinski triangle (a.k.a. Sierpinski gasket), but that the edited novel is "more like a lopsided Sierpinsky Gasket". Some works by the Dutch artist M. C. Escher, such as Circle Limit III, contain shapes repeated to infinity that become smaller and smaller as they get near to the edges, in a pattern that would always look the same if zoomed in.
Aesthetics and psychological effects of fractal-based design: Highly prevalent in nature, fractal patterns possess self-similar components that repeat at varying size scales. The perceptual experience of human-made environments can be impacted with inclusion of these natural patterns. Previous work has demonstrated consistent trends in preference for and complexity estimates of fractal patterns. However, limited information has been gathered on the impact of other visual judgments. Here we examine the aesthetic and perceptual experience of fractal 'global-forest' designs already installed in human-made spaces and demonstrate how fractal pattern components are associated with positive psychological experiences that can be utilized to promote occupant well-being.
These designs are composite fractal patterns consisting of individual fractal 'tree-seeds' which combine to create a 'global fractal forest.' The local 'tree-seed' patterns, global configuration of tree-seed locations, and overall resulting 'global-forest' patterns have fractal qualities. These designs span multiple mediums yet are all intended to lower occupant stress without detracting from the function and overall design of the space. In this series of studies, we first establish divergent relationships between various visual attributes, with pattern complexity, preference, and engagement ratings increasing with fractal complexity compared to ratings of refreshment and relaxation which stay the same or decrease with complexity. Subsequently, we determine that the local constituent fractal ('tree-seed') patterns contribute to the perception of the overall fractal design, and address how to balance aesthetic and psychological effects (such as individual experiences of perceived engagement and relaxation) in fractal design installations. This set of studies demonstrates that fractal preference is driven by a balance between increased arousal (desire for engagement and complexity) and decreased tension (desire for relaxation or refreshment). Installations of these composite mid-high complexity 'global-forest' patterns consisting of 'tree-seed' components balance these contrasting needs, and can serve as a practical implementation of biophilic patterns in human-made environments to promote occupant well-being.
Physiological responses Humans appear to be especially well-adapted to processing fractal patterns with fractal dimension between 1.3 and 1.5, and viewing patterns in this range tends to reduce physiological stress.
Applications in technology
- Fractal antennas
- Fractal transistor
- Fractal heat exchangers
- Digital imaging
- Architecture
- Urban growth
- Classification of histopathology slides
- Fractal landscape or coastline complexity
- Detecting 'life as we don't know it' by fractal analysis
- Enzymes (Michaelis–Menten kinetics)
- Generation of new music
- Signal and image compression
- Creation of digital photographic enlargements
- Fractals in soil mechanics
- Computer and video game design
- Computer graphics
- Organic environments
- Procedural generation
- Fractography and fracture mechanics
- Small angle scattering theory of fractally rough systems
- T-shirts and other fashion
- Generation of patterns for camouflage, such as MARPAT
- Digital sundial
- Technical analysis of price series
- Fractals in networks
- Medicine
- Neuroscience
- Diagnostic imaging
- Pathology
- Geology
- Geography
- Archaeology
- Soil mechanics
- Seismology
- Search and rescue
- Morton order space filling curves for GPU cache coherency in texture mapping, rasterisation and indexing of turbulence data
See also Notes References Further reading
- Stanley, H. Eugene; and Ostrowsky, N. (editors); On Growth and Form: Fractal and Non-Fractal Patterns in Physics, Martinus Nijhoff Publishers, 1986. ISBN 0-89838-850-3
- Barnsley, Michael F.; and Rising, Hawley; Fractals Everywhere. Boston: Academic Press Professional, 1993.
- Duarte, German A.; Fractal Narrative. About the Relationship Between Geometries and Technology and Its Impact on Narrative Spaces. Bielefeld: Transcript, 2014.
- Falconer, Kenneth; Techniques in Fractal Geometry. John Wiley and Sons, 1997.
- Jürgens, Hartmut; Peitgen, Heinz-Otto; and Saupe, Dietmar; Chaos and Fractals: New Frontiers of Science. New York: Springer-Verlag, 1992.
- Mandelbrot, Benoit B.; The Fractal Geometry of Nature. New York: W. H. Freeman and Co., 1982.
- Peitgen, Heinz-Otto; and Saupe, Dietmar (eds.); The Science of Fractal Images. New York: Springer-Verlag, 1988.
- Pickover, Clifford A. (ed.); Chaos and Fractals: A Computer Graphical Journey – A 10 Year Compilation of Advanced Research. Elsevier, 1998.
- Jones, Jesse; Fractals for the Macintosh, Waite Group Press, Corte Madera, CA, 1993.
- Lauwerier, Hans; Fractals: Endlessly Repeated Geometrical Figures, translated by Sophia Gill-Hoffstadt, Princeton University Press, Princeton, NJ, 1991. "This book has been written for a wide audience..." Includes sample BASIC programs in an appendix.
- Wahl, Bernt; Van Roy, Peter; Larsen, Michael; and Kampman, Eric; Exploring Fractals on the Macintosh, Addison Wesley, 1995.
- Lesmoir-Gordon, Nigel; The Colours of Infinity: The Beauty, The Power and the Sense of Fractals. 2004. (The book comes with a related DVD of the Arthur C. Clarke documentary introduction to the fractal concept and the Mandelbrot set.)
- Liu, Huajie; Fractal Art, Changsha: Hunan Science and Technology Press, 1997.
- Gouyet, Jean-François; Physics and Fractal Structures (foreword by B. Mandelbrot); Masson, 1996, and New York: Springer-Verlag, 1996. Out of print; available in PDF version.
External links
- "Hunting the Hidden Dimension", PBS NOVA, first aired August 24, 2011
- Benoit Mandelbrot: Fractals and the Art of Roughness, TED, February 2010
- Equations of self-similar fractal measure based on the fractional-order calculus (2007)
Fractal
[ "Physics", "Mathematics", "Technology" ]
6,071
[ "Functions and mappings", "Mathematical analysis", "Mathematical structures", "Computational fields of study", "Mathematical objects", "Fractals", "Topology", "Space", "Mathematical relations", "Geometry", "Computing and society", "Spacetime" ]
10,915
https://en.wikipedia.org/wiki/Fluid
In physics, a fluid is a liquid, gas, or other material that may continuously move and deform (flow) under an applied shear stress, or external force. They have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them. Although the term fluid generally includes both the liquid and gas phases, its definition varies among branches of science. Definitions of solid vary as well, and depending on field, some substances can have both fluid and solid properties. Non-Newtonian fluids like Silly Putty appear to behave similar to a solid when a sudden force is applied. Substances with a very high viscosity such as pitch appear to behave like a solid (see pitch drop experiment) as well. In particle physics, the concept is extended to include fluidic matters other than liquids or gases. A fluid in medicine or biology refers to any liquid constituent of the body (body fluid), whereas "liquid" is not used in this sense. Sometimes liquids given for fluid replacement, either by drinking or by injection, are also called fluids (e.g. "drink plenty of fluids"). In hydraulics, fluid is a term which refers to liquids with certain properties, and is broader than (hydraulic) oils. Physics Fluids display properties such as: lack of resistance to permanent deformation, resisting only relative rates of deformation in a dissipative, frictional manner, and the ability to flow (also described as the ability to take on the shape of the container). These properties are typically a function of their inability to support a shear stress in static equilibrium. By contrast, solids respond to shear either with a spring-like restoring force—meaning that deformations are reversible—or they require a certain initial stress before they deform (see plasticity). Solids respond with restoring forces to both shear stresses and to normal stresses, both compressive and tensile. By contrast, ideal fluids only respond with restoring forces to normal stresses, called pressure: fluids can be subjected both to compressive stress—corresponding to positive pressure—and to tensile stress, corresponding to negative pressure. Solids and liquids both have tensile strengths, which when exceeded in solids creates irreversible deformation and fracture, and in liquids cause the onset of cavitation. Both solids and liquids have free surfaces, which cost some amount of free energy to form. In the case of solids, the amount of free energy to form a given unit of surface area is called surface energy, whereas for liquids the same quantity is called surface tension. In response to surface tension, the ability of liquids to flow results in behaviour differing from that of solids, though at equilibrium both tend to minimise their surface energy: liquids tend to form rounded droplets, whereas pure solids tend to form crystals. Gases, lacking free surfaces, freely diffuse. Modelling In a solid, shear stress is a function of strain, but in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law which describes the role of pressure in characterizing a fluid's state. The behavior of fluids can be described by the Navier–Stokes equations—a set of partial differential equations which are based on: continuity (conservation of mass), conservation of linear momentum, conservation of angular momentum, conservation of energy. The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics depending on whether the fluid is in motion. 
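To make the modelling discussion concrete, the Newtonian constitutive relation and the incompressible Navier–Stokes momentum equation can be written as follows (standard textbook forms, quoted here for illustration; the notation is ours, not specific to this article):

```latex
% Newton's law of viscosity: shear stress is proportional to the strain rate
% (mu is the dynamic viscosity, u the velocity, y the direction normal to the flow)
\tau = \mu \frac{\partial u}{\partial y}

% Incompressible Navier--Stokes momentum equation for a fluid of density rho,
% pressure p, and body force f, together with the continuity constraint:
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
          + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0
```

The left-hand side expresses conservation of linear momentum for a moving fluid parcel; the divergence-free condition is the incompressible form of conservation of mass mentioned above.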
Classification of fluids Depending on the relationship between shear stress and the rate of strain and its derivatives, fluids can be characterized as one of the following:
- Newtonian fluids: where stress is directly proportional to the rate of strain.
- Non-Newtonian fluids: where stress is not proportional to the rate of strain, its higher powers, or its derivatives.
Newtonian fluids follow Newton's law of viscosity and may be called viscous fluids. Fluids may also be classified by their compressibility:
- Compressible fluid: a fluid whose volume or density changes when pressure is applied to it, or when the fluid becomes supersonic.
- Incompressible fluid: a fluid that does not vary in volume with changes in pressure or flow velocity (i.e., ρ = constant), such as water or oil.
Newtonian and incompressible fluids do not actually exist, but are assumed to be so for the purposes of theoretical treatment. Virtual fluids that completely ignore the effects of viscosity and compressibility are called perfect fluids.
See also Matter, Liquid, Gas, Supercritical fluid References
Fluid
[ "Chemistry", "Engineering" ]
920
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
10,918
https://en.wikipedia.org/wiki/Fibonacci%20sequence
In mathematics, the Fibonacci sequence is a sequence in which each element is the sum of the two elements that precede it. Numbers that are part of the Fibonacci sequence are known as Fibonacci numbers, commonly denoted $F_n$. Many writers begin the sequence with 0 and 1, although some authors start it from 1 and 1 and some (as did Fibonacci) from 1 and 2. Starting from 0 and 1, the sequence begins 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ... The Fibonacci numbers were first described in Indian mathematics as early as 200 BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from syllables of two lengths. They are named after the Italian mathematician Leonardo of Pisa, also known as Fibonacci, who introduced the sequence to Western European mathematics in his 1202 book Liber Abaci. Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's bracts, though they do not occur in all species. Fibonacci numbers are also strongly related to the golden ratio: Binet's formula expresses the $n$-th Fibonacci number in terms of $n$ and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as $n$ increases. Fibonacci numbers are also closely related to Lucas numbers, which obey the same recurrence relation and with the Fibonacci numbers form a complementary pair of Lucas sequences.
Definition The Fibonacci numbers may be defined by the recurrence relation $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for $n > 1$. Under some older definitions, the value $F_0 = 0$ is omitted, so that the sequence starts with $F_1 = F_2 = 1$, and the recurrence $F_n = F_{n-1} + F_{n-2}$ is valid for $n > 2$. The first 20 Fibonacci numbers, $F_0$ through $F_{19}$, are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, and 4181.
History India The Fibonacci sequence appears in Indian mathematics, in connection with Sanskrit prosody. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration. Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration $m$ units is $F_{m+1}$. Knowledge of the Fibonacci sequence was expressed as early as Pingala (c. 450 BC–200 BC). Singh cites Pingala's cryptic formula misrau cha ("the two are mixed") and scholars who interpret it in context as saying that the number of patterns for $m$ beats ($F_{m+1}$) is obtained by adding one [S] to the $F_m$ cases and one [L] to the $F_{m-1}$ cases. Bharata Muni also expresses knowledge of the sequence in the Natya Shastra (c. 100 BC–c. 350 AD). However, the clearest exposition of the sequence arises in the work of Virahanka (c. 700 AD), whose own work is lost, but is available in a quotation by Gopala (c. 1135): Variations of two earlier meters [is the variation] ... For example, for [a meter of length] four, variations of meters of two [and] three being mixed, five happens.
[works out examples 8, 13, 21] ... In this way, the process should be followed in all mātrā-vṛttas [prosodic combinations]. Hemachandra (c. 1150) is credited with knowledge of the sequence as well, writing that "the sum of the last and the one before the last is the number ... of the next mātrā-vṛtta."
Europe The Fibonacci sequence first appears in the book Liber Abaci (The Book of Calculation, 1202) by Fibonacci, where it is used to calculate the growth of rabbit populations. Fibonacci considers the growth of an idealized (biologically unrealistic) rabbit population, assuming that: a newly born breeding pair of rabbits are put in a field; each breeding pair mates at the age of one month, and at the end of their second month they always produce another pair of rabbits; and rabbits never die, but continue breeding forever. Fibonacci posed the rabbit math problem: how many pairs will there be in one year? At the end of the first month, they mate, but there is still only 1 pair. At the end of the second month they produce a new pair, so there are 2 pairs in the field. At the end of the third month, the original pair produce a second pair, but the second pair only mate to gestate for a month, so there are 3 pairs in all. At the end of the fourth month, the original pair has produced yet another new pair, and the pair born two months ago also produces their first pair, making 5 pairs. At the end of the $n$-th month, the number of pairs of rabbits is equal to the number of mature pairs (that is, the number of pairs in month $n-2$) plus the number of pairs alive last month (month $n-1$). The number in the $n$-th month is the $n$-th Fibonacci number. The name "Fibonacci sequence" was first used by the 19th-century number theorist Édouard Lucas.
Relation to the golden ratio Closed-form expression Like every sequence defined by a homogeneous linear recurrence with constant coefficients, the Fibonacci numbers have a closed-form expression. It has become known as Binet's formula, named after French mathematician Jacques Philippe Marie Binet, though it was already known by Abraham de Moivre and Daniel Bernoulli: $F_n = \frac{\varphi^n - \psi^n}{\varphi - \psi} = \frac{\varphi^n - \psi^n}{\sqrt 5}$, where $\varphi = \frac{1 + \sqrt 5}{2} \approx 1.61803$ is the golden ratio, and $\psi = \frac{1 - \sqrt 5}{2} = 1 - \varphi = -\frac{1}{\varphi} \approx -0.61803$ is its conjugate. Since $\psi = -\varphi^{-1}$, this formula can also be written as $F_n = \frac{\varphi^n - (-\varphi)^{-n}}{\sqrt 5}$. To see the relation between the sequence and these constants, note that $\varphi$ and $\psi$ are both solutions of the equation $x^2 = x + 1$, and thus $x^n = x^{n-1} + x^{n-2}$, so the powers of $\varphi$ and $\psi$ satisfy the Fibonacci recursion. In other words, $\varphi^n = \varphi^{n-1} + \varphi^{n-2}$ and $\psi^n = \psi^{n-1} + \psi^{n-2}$. It follows that for any values $a$ and $b$, the sequence defined by $U_n = a\varphi^n + b\psi^n$ satisfies the same recurrence, $U_n = U_{n-1} + U_{n-2}$. If $a$ and $b$ are chosen so that $U_0 = 0$ and $U_1 = 1$, then the resulting sequence must be the Fibonacci sequence. This is the same as requiring $a$ and $b$ satisfy the system of equations $a + b = 0$ and $\varphi a + \psi b = 1$, which has solution $a = \frac{1}{\sqrt 5} = -b$, producing the required formula. Taking the starting values $U_0$ and $U_1$ to be arbitrary constants, a more general solution is $U_n = a\varphi^n + b\psi^n$, where $a = \frac{U_1 - U_0 \psi}{\sqrt 5}$ and $b = \frac{U_0 \varphi - U_1}{\sqrt 5}$.
Computation by rounding Since $\left| \psi^n \right| / \sqrt 5 < 1/2$ for all $n \ge 0$, the number $F_n$ is the closest integer to $\varphi^n / \sqrt 5$. Therefore, it can be found by rounding, using the nearest integer function: $F_n = \left\lfloor \varphi^n / \sqrt 5 \right\rceil$ for $n \ge 0$. In fact, the rounding error quickly becomes very small as $n$ grows, being less than 0.1 for $n \ge 4$, and less than 0.01 for $n \ge 8$. This formula is easily inverted to find an index of a Fibonacci number $F$: $n(F) = \left\lfloor \log_\varphi \left( F \sqrt 5 \right) \right\rceil$. Instead using the floor function gives the largest index of a Fibonacci number that is not greater than $F$: $n_{\mathrm{largest}}(F) = \left\lfloor \log_\varphi \left( F \sqrt 5 + \tfrac{1}{2} \right) \right\rfloor$, where $\log_\varphi(x) = \ln(x) / \ln(\varphi) = \log_{10}(x) / \log_{10}(\varphi)$, $\ln(\varphi) = 0.481211\ldots$, and $\log_{10}(\varphi) = 0.208987\ldots$
Magnitude Since $F_n$ is asymptotic to $\varphi^n / \sqrt 5$, the number of digits in $F_n$ is asymptotic to $n \log_{10} \varphi \approx 0.2090\,n$. As a consequence, for every integer $d > 1$ there are either 4 or 5 Fibonacci numbers with $d$ decimal digits.
More generally, in the base representation, the number of digits in is asymptotic to Limit of consecutive quotients Johannes Kepler observed that the ratio of consecutive Fibonacci numbers converges. He wrote that "as 5 is to 8 so is 8 to 13, practically, and as 8 is to 13, so is 13 to 21 almost", and concluded that these ratios approach the golden ratio This convergence holds regardless of the starting values and , unless . This can be verified using Binet's formula. For example, the initial values 3 and 2 generate the sequence 3, 2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555, ... . The ratio of consecutive elements in this sequence shows the same convergence towards the golden ratio. In general, , because the ratios between consecutive Fibonacci numbers approaches . Decomposition of powers Since the golden ratio satisfies the equation this expression can be used to decompose higher powers as a linear function of lower powers, which in turn can be decomposed all the way down to a linear combination of and 1. The resulting recurrence relationships yield Fibonacci numbers as the linear coefficients: This equation can be proved by induction on : For , it is also the case that and it is also the case that These expressions are also true for if the Fibonacci sequence Fn is extended to negative integers using the Fibonacci rule Identification Binet's formula provides a proof that a positive integer is a Fibonacci number if and only if at least one of or is a perfect square. This is because Binet's formula, which can be written as , can be multiplied by and solved as a quadratic equation in via the quadratic formula: Comparing this to , it follows that In particular, the left-hand side is a perfect square. Matrix form A 2-dimensional system of linear difference equations that describes the Fibonacci sequence is alternatively denoted which yields . The eigenvalues of the matrix are and corresponding to the respective eigenvectors As the initial value is it follows that the th element is From this, the th element in the Fibonacci series may be read off directly as a closed-form expression: Equivalently, the same computation may be performed by diagonalization of through use of its eigendecomposition: where The closed-form expression for the th element in the Fibonacci series is therefore given by which again yields The matrix has a determinant of −1, and thus it is a 2 × 2 unimodular matrix. This property can be understood in terms of the continued fraction representation for the golden ratio : The convergents of the continued fraction for are ratios of successive Fibonacci numbers: is the -th convergent, and the -st convergent can be found from the recurrence relation . The matrix formed from successive convergents of any continued fraction has a determinant of +1 or −1. The matrix representation gives the following closed-form expression for the Fibonacci numbers: For a given , this matrix can be computed in arithmetic operations, using the exponentiation by squaring method. Taking the determinant of both sides of this equation yields Cassini's identity, Moreover, since for any square matrix , the following identities can be derived (they are obtained from two different coefficients of the matrix product, and one may easily deduce the second one from the first one by changing into ), In particular, with , These last two identities provide a way to compute Fibonacci numbers recursively in arithmetic operations. 
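The identities in question are the standard doubling formulas $F_{2n-1} = F_n^2 + F_{n-1}^2$ and $F_{2n} = (2F_{n-1} + F_n) F_n$. An equivalent "fast doubling" computation is sketched below in Python; this is our own illustration of the technique, not code from any source cited in this article, and it uses the shifted-but-equivalent forms $F_{2k} = F_k (2F_{k+1} - F_k)$ and $F_{2k+1} = F_k^2 + F_{k+1}^2$:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) by fast doubling: O(log n) arithmetic steps."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # a = F(k), b = F(k+1), with k = n // 2
    c = a * (2 * b - a)          # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = a * a + b * b            # F(2k+1) = F(k)^2 + F(k+1)^2
    if n % 2 == 0:
        return (c, d)            # n = 2k
    return (d, c + d)            # n = 2k+1: shift the pair up by one index

def fib(n):
    return fib_pair(n)[0]

assert [fib(i) for i in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Because each call halves the index, the number of recursive steps is logarithmic in $n$, which is what makes the method competitive with the matrix-power approach described above.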
This matches the time for computing the -th Fibonacci number from the closed-form matrix formula, but with fewer redundant steps if one avoids recomputing an already computed Fibonacci number (recursion with memoization). Combinatorial identities Combinatorial proofs Most identities involving Fibonacci numbers can be proved using combinatorial arguments using the fact that can be interpreted as the number of (possibly empty) sequences of 1s and 2s whose sum is . This can be taken as the definition of with the conventions , meaning no such sequence exists whose sum is −1, and , meaning the empty sequence "adds up" to 0. In the following, is the cardinality of a set: In this manner the recurrence relation may be understood by dividing the sequences into two non-overlapping sets where all sequences either begin with 1 or 2: Excluding the first element, the remaining terms in each sequence sum to or and the cardinality of each set is or giving a total of sequences, showing this is equal to . In a similar manner it may be shown that the sum of the first Fibonacci numbers up to the -th is equal to the -th Fibonacci number minus 1. In symbols: This may be seen by dividing all sequences summing to based on the location of the first 2. Specifically, each set consists of those sequences that start until the last two sets each with cardinality 1. Following the same logic as before, by summing the cardinality of each set we see that ... where the last two terms have the value . From this it follows that . A similar argument, grouping the sums by the position of the first 1 rather than the first 2 gives two more identities: and In words, the sum of the first Fibonacci numbers with odd index up to is the -th Fibonacci number, and the sum of the first Fibonacci numbers with even index up to is the -th Fibonacci number minus 1. A different trick may be used to prove or in words, the sum of the squares of the first Fibonacci numbers up to is the product of the -th and -th Fibonacci numbers. To see this, begin with a Fibonacci rectangle of size and decompose it into squares of size ; from this the identity follows by comparing areas: Symbolic method The sequence is also considered using the symbolic method. More precisely, this sequence corresponds to a specifiable combinatorial class. The specification of this sequence is . Indeed, as stated above, the -th Fibonacci number equals the number of combinatorial compositions (ordered partitions) of using terms 1 and 2. It follows that the ordinary generating function of the Fibonacci sequence, , is the rational function Induction proofs Fibonacci identities often can be easily proved using mathematical induction. For example, reconsider Adding to both sides gives and so we have the formula for Similarly, add to both sides of to give Binet formula proofs The Binet formula is This can be used to prove Fibonacci identities. For example, to prove that note that the left hand side multiplied by becomes as required, using the facts and to simplify the equations. Other identities Numerous other identities can be derived using various methods. Here are some of them: Cassini's and Catalan's identities Cassini's identity states that Catalan's identity is a generalization: d'Ocagne's identity where is the -th Lucas number. The last is an identity for doubling ; other identities of this type are by Cassini's identity. These can be found experimentally using lattice reduction, and are useful in setting up the special number field sieve to factorize a Fibonacci number. 
More generally, $F_{kn+c} = \sum_{i=0}^{k} \binom{k}{i} F_{c-i} F_n^i F_{n+1}^{k-i}$, or alternatively $F_{kn+c} = \sum_{i=0}^{k} \binom{k}{i} F_{c+i} F_n^i F_{n-1}^{k-i}$. Putting $k = 2$ in this formula, one gets again the formulas of the end of the above section Matrix form.
Generating function The generating function of the Fibonacci sequence is the power series $s(z) = \sum_{k=0}^{\infty} F_k z^k = z + z^2 + 2z^3 + 3z^4 + 5z^5 + \cdots$ This series is convergent for any complex number $z$ satisfying $|z| < 1/\varphi \approx 0.618$, and its sum has a simple closed form: $s(z) = \frac{z}{1 - z - z^2}$. This can be proved by multiplying by $1 - z - z^2$, where all terms involving $z^k$ for $k \ge 2$ cancel out because of the defining Fibonacci recurrence relation. The partial fraction decomposition is given by $s(z) = \frac{1}{\sqrt 5} \left( \frac{1}{1 - \varphi z} - \frac{1}{1 - \psi z} \right)$, where $\varphi$ is the golden ratio and $\psi$ is its conjugate. The related function $-s(-z)$ is the generating function for the negafibonacci numbers, and $s(z)$ satisfies the functional equation $s(z) = s(-1/z)$. Using $z$ equal to any of 0.01, 0.001, 0.0001, etc. lays out the first Fibonacci numbers in the decimal expansion of $s(z)$. For example, $s(0.001) = \frac{1000}{998999} = 0.001001002003005008013021\ldots$
Reciprocal sums Infinite sums over reciprocal Fibonacci numbers can sometimes be evaluated in terms of theta functions; closed forms of this kind are known, for example, for the sum of every odd-indexed reciprocal Fibonacci number, for the sum of squared reciprocal Fibonacci numbers, for the variant of the first sum in which 1 is added to each Fibonacci number, and for a nested sum of squared Fibonacci numbers giving the reciprocal of the golden ratio. The sum of all even-indexed reciprocal Fibonacci numbers can be expressed in terms of a Lambert series. The reciprocal Fibonacci constant $\sum_{k=1}^{\infty} \frac{1}{F_k} = 3.359885\ldots$ has been proved irrational by Richard André-Jeannin. Millin's series gives the identity $\sum_{n=0}^{\infty} \frac{1}{F_{2^n}} = \frac{7 - \sqrt 5}{2}$, which follows from the closed form for its partial sums, $\sum_{n=0}^{N} \frac{1}{F_{2^n}} = 3 - \frac{F_{2^N - 1}}{F_{2^N}}$, as $N$ tends to infinity.
Primes and divisibility Divisibility properties Every third number of the sequence is even (a multiple of $F_3 = 2$) and, more generally, every $k$-th number of the sequence is a multiple of $F_k$. Thus the Fibonacci sequence is an example of a divisibility sequence. In fact, the Fibonacci sequence satisfies the stronger divisibility property $\gcd(F_m, F_n) = F_{\gcd(m, n)}$, where $\gcd$ is the greatest common divisor function. (This relation is different if a different indexing convention is used, such as the one that starts the sequence with 1 and 2.) In particular, any three consecutive Fibonacci numbers are pairwise coprime because both $F_1 = 1$ and $F_2 = 1$. That is, $\gcd(F_n, F_{n+1}) = \gcd(F_n, F_{n+2}) = 1$ for every $n$. Every prime number $p$ divides a Fibonacci number that can be determined by the value of $p$ modulo 5. If $p$ is congruent to 1 or 4 modulo 5, then $p$ divides $F_{p-1}$, and if $p$ is congruent to 2 or 3 modulo 5, then $p$ divides $F_{p+1}$. The remaining case is $p = 5$, and in this case $p$ divides $F_p$. These cases can be combined into a single, non-piecewise formula, using the Legendre symbol: $p \mid F_{p - \left(\frac{5}{p}\right)}$.
Primality testing The above formula can be used as a primality test in the sense that if $n \mid F_{n - \left(\frac{5}{n}\right)}$, where the Legendre symbol has been replaced by the Jacobi symbol, then this is evidence that $n$ is a prime, and if it fails to hold, then $n$ is definitely not a prime. If $n$ is composite and satisfies the formula, then $n$ is a Fibonacci pseudoprime. When $n$ is large, say a 500-bit number, then we can calculate $F_m \pmod{n}$ efficiently using the matrix form. Thus $\begin{pmatrix} F_{m+1} & F_m \\ F_m & F_{m-1} \end{pmatrix} \equiv \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^m \pmod{n}$. Here the matrix power is calculated using modular exponentiation, which can be adapted to matrices.
Fibonacci primes A Fibonacci prime is a Fibonacci number that is prime. The first few are: 2, 3, 5, 13, 89, 233, 1597, 28657, 514229, ... Fibonacci primes with thousands of digits have been found, but it is not known whether there are infinitely many. $F_{kn}$ is divisible by $F_n$, so, apart from $F_4 = 3$, any Fibonacci prime must have a prime index. As there are arbitrarily long runs of composite numbers, there are therefore also arbitrarily long runs of composite Fibonacci numbers.
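The divisibility properties above are easy to check numerically. A small sketch in Python (our own illustration, assuming nothing beyond the standard library):

```python
from math import gcd

def fibs(n):
    """First n+1 Fibonacci numbers F(0)..F(n), by the defining recurrence."""
    seq = [0, 1]
    while len(seq) <= n:
        seq.append(seq[-1] + seq[-2])
    return seq

F = fibs(30)

# Strong divisibility property: gcd(F(m), F(n)) == F(gcd(m, n))
for m in range(1, 31):
    for n in range(1, 31):
        assert gcd(F[m], F[n]) == F[gcd(m, n)]

# Every prime p divides F(p - (5|p)):
# e.g., p = 7 is congruent to 2 mod 5, and indeed 7 divides F(8) = 21.
assert F[8] % 7 == 0
```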
No Fibonacci number greater than is one greater or one less than a prime number. The only nontrivial square Fibonacci number is 144. Attila Pethő proved in 2001 that there is only a finite number of perfect power Fibonacci numbers. In 2006, Y. Bugeaud, M. Mignotte, and S. Siksek proved that 8 and 144 are the only such non-trivial perfect powers. 1, 3, 21, and 55 are the only triangular Fibonacci numbers, which was conjectured by Vern Hoggatt and proved by Luo Ming. No Fibonacci number can be a perfect number. More generally, no Fibonacci number other than 1 can be multiply perfect, and no ratio of two Fibonacci numbers can be perfect. Prime divisors With the exceptions of 1, 8 and 144 (, and ) every Fibonacci number has a prime factor that is not a factor of any smaller Fibonacci number (Carmichael's theorem). As a result, 8 and 144 ( and ) are the only Fibonacci numbers that are the product of other Fibonacci numbers. The divisibility of Fibonacci numbers by a prime is related to the Legendre symbol which is evaluated as follows: If is a prime number then For example, It is not known whether there exists a prime such that Such primes (if there are any) would be called Wall–Sun–Sun primes. Also, if is an odd prime number then: Example 1. , in this case and we have: Example 2. , in this case and we have: Example 3. , in this case and we have: Example 4. , in this case and we have: For odd , all odd prime divisors of are congruent to 1 modulo 4, implying that all odd divisors of (as the products of odd prime divisors) are congruent to 1 modulo 4. For example, All known factors of Fibonacci numbers for all are collected at the relevant repositories. Periodicity modulo n If the members of the Fibonacci sequence are taken mod , the resulting sequence is periodic with period at most . The lengths of the periods for various form the so-called Pisano periods. Determining a general formula for the Pisano periods is an open problem, which includes as a subproblem a special instance of the problem of finding the multiplicative order of a modular integer or of an element in a finite field. However, for any particular , the Pisano period may be found as an instance of cycle detection. Generalizations The Fibonacci sequence is one of the simplest and earliest known sequences defined by a recurrence relation, and specifically by a linear difference equation. All these sequences may be viewed as generalizations of the Fibonacci sequence. In particular, Binet's formula may be generalized to any sequence that is a solution of a homogeneous linear difference equation with constant coefficients. Some specific examples that are close, in some sense, to the Fibonacci sequence include: Generalizing the index to negative integers to produce the negafibonacci numbers. Generalizing the index to real numbers using a modification of Binet's formula. Starting with other integers. Lucas numbers have , , and . Primefree sequences use the Fibonacci recursion with other starting points to generate sequences in which all numbers are composite. Letting a number be a linear function (other than the sum) of the 2 preceding numbers. The Pell numbers have . If the coefficient of the preceding value is assigned a variable value , the result is the sequence of Fibonacci polynomials. Not adding the immediately preceding numbers. The Padovan sequence and Perrin numbers have . Generating the next number by adding 3 numbers (tribonacci numbers), 4 numbers (tetranacci numbers), or more. 
The resulting sequences are known as n-step Fibonacci numbers.
Applications Mathematics The Fibonacci numbers occur as the sums of binomial coefficients in the "shallow" diagonals of Pascal's triangle: $F_{n+1} = \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n-k}{k}$. This can be proved by expanding the generating function and collecting like terms of $z^n$. To see how the formula is used, we can arrange the sums by the number of terms present: for example, 5 can be written as an ordered sum of 1s and 2s in one way with five terms (1+1+1+1+1), in four ways with four terms (2+1+1+1, 1+2+1+1, 1+1+2+1, 1+1+1+2), and in three ways with three terms (2+2+1, 2+1+2, 1+2+2), for a total of $1 + 4 + 3 = 8 = F_6$; the number of sums containing exactly $k$ twos is $\binom{n-k}{k}$, where we are choosing the positions of the $k$ twos from the $n-k$ terms. These numbers also give the solution to certain enumerative problems, the most common of which is that of counting the number of ways of writing a given number $n$ as an ordered sum of 1s and 2s (called compositions); there are $F_{n+1}$ ways to do this (equivalently, it's also the number of domino tilings of the $2 \times n$ rectangle). For example, there are $F_6 = 8$ ways one can climb a staircase of 5 steps, taking one or two steps at a time: these are exactly the eight compositions listed above. This count of 8 can be decomposed into 5 (the number of ways to climb 4 steps, followed by a single step) plus 3 (the number of ways to climb 3 steps, followed by a double step). The same reasoning is applied recursively until a single step, of which there is only one way to climb.
The Fibonacci numbers can be found in different ways among the set of binary strings, or equivalently, among the subsets of a given set. The number of binary strings of length $n$ without consecutive 1s is the Fibonacci number $F_{n+2}$. For example, out of the 16 binary strings of length 4, there are $F_6 = 8$ without consecutive 1s—they are 0000, 0001, 0010, 0100, 0101, 1000, 1001, and 1010. Such strings are the binary representations of Fibbinary numbers. Equivalently, $F_{n+2}$ is the number of subsets $S$ of $\{1, \dots, n\}$ without consecutive integers, that is, those $S$ that contain no pair $\{i, i+1\}$. A bijection with the sums to $n+1$ is to replace 1 with 0 and 2 with 10, and drop the last zero. The number of binary strings of length $n$ without an odd number of consecutive 1s is the Fibonacci number $F_{n+1}$. For example, out of the 16 binary strings of length 4, there are $F_5 = 5$ without an odd number of consecutive 1s—they are 0000, 0011, 0110, 1100, 1111. Equivalently, the number of subsets of $\{1, \dots, n\}$ without an odd number of consecutive integers is $F_{n+1}$. A bijection with the sums to $n$ is to replace 1 with 0 and 2 with 11. The number of binary strings of length $n$ without an even number of consecutive 0s or 1s is $2F_n$. For example, out of the 16 binary strings of length 4, there are $2F_4 = 6$ without an even number of consecutive 0s or 1s—they are 0001, 0111, 0101, 1000, 1010, 1110. There is an equivalent statement about subsets.
Yuri Matiyasevich was able to show that the Fibonacci numbers can be defined by a Diophantine equation, which led to his solving Hilbert's tenth problem. The Fibonacci numbers are also an example of a complete sequence. This means that every positive integer can be written as a sum of Fibonacci numbers, where any one number is used once at most. Moreover, every positive integer can be written in a unique way as the sum of one or more distinct Fibonacci numbers in such a way that the sum does not include any two consecutive Fibonacci numbers. This is known as Zeckendorf's theorem, and a sum of Fibonacci numbers that satisfies these conditions is called a Zeckendorf representation. The Zeckendorf representation of a number can be used to derive its Fibonacci coding.
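Zeckendorf representations can be computed greedily: repeatedly take the largest Fibonacci number not exceeding what remains. A short sketch in Python (our own illustration of the greedy algorithm, not code from a cited source):

```python
def zeckendorf(n):
    """Zeckendorf representation of n > 0: distinct, non-consecutive Fibonacci numbers."""
    fibs = [1, 2]                        # F(2), F(3), ... (0 and the duplicate 1 are skipped)
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f                       # the remainder is smaller than the next Fibonacci
    return parts                         # number down, so no two consecutive ones are used

assert zeckendorf(100) == [89, 8, 3]     # 100 = 89 + 8 + 3, no two consecutive Fibonacci numbers
```

The greedy choice works because after subtracting a Fibonacci number $F_k \le n < F_{k+1}$, the remainder is less than $F_{k-1}$, which automatically prevents two consecutive Fibonacci numbers from appearing.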
Starting with 5, every second Fibonacci number is the length of the hypotenuse of a right triangle with integer sides, or in other words, the largest number in a Pythagorean triple, obtained from the formula $(F_n F_{n+3})^2 + (2 F_{n+1} F_{n+2})^2 = F_{2n+3}^2$. The sequence of Pythagorean triangles obtained from this formula has sides of lengths (3,4,5), (5,12,13), (16,30,34), (39,80,89), ... The middle side of each of these triangles is the sum of the three sides of the preceding triangle. The Fibonacci cube is an undirected graph with a Fibonacci number of nodes that has been proposed as a network topology for parallel computing. Fibonacci numbers appear in the ring lemma, used to prove connections between the circle packing theorem and conformal maps.
Computer science The Fibonacci numbers are important in computational run-time analysis of Euclid's algorithm to determine the greatest common divisor of two integers: the worst case input for this algorithm is a pair of consecutive Fibonacci numbers. Fibonacci numbers are used in a polyphase version of the merge sort algorithm in which an unsorted list is divided into two lists whose lengths correspond to sequential Fibonacci numbers—by dividing the list so that the two parts have lengths in the approximate proportion $\varphi$. A tape-drive implementation of the polyphase merge sort was described in The Art of Computer Programming. A Fibonacci tree is a binary tree whose child trees (recursively) differ in height by exactly 1. So it is an AVL tree, and one with the fewest nodes for a given height—the "thinnest" AVL tree. These trees have a number of vertices that is a Fibonacci number minus one, an important fact in the analysis of AVL trees. Fibonacci numbers are used by some pseudorandom number generators. Fibonacci numbers arise in the analysis of the Fibonacci heap data structure. A one-dimensional optimization method, called the Fibonacci search technique, uses Fibonacci numbers. The Fibonacci number series is used for optional lossy compression in the IFF 8SVX audio file format used on Amiga computers. The number series compands the original audio wave similar to logarithmic methods such as μ-law. Some Agile teams use a modified series called the "Modified Fibonacci Series" in planning poker, as an estimation tool. Planning Poker is a formal part of the Scaled Agile Framework. Related number representations include Fibonacci coding and negafibonacci coding.
Nature Fibonacci sequences appear in biological settings, such as branching in trees, arrangement of leaves on a stem, the fruitlets of a pineapple, the flowering of artichoke, the arrangement of a pine cone, and the family tree of honeybees. Kepler pointed out the presence of the Fibonacci sequence in nature, using it to explain the (golden ratio-related) pentagonal form of some flowers. Field daisies most often have petals in counts of Fibonacci numbers. In 1830, Karl Friedrich Schimper and Alexander Braun discovered that the parastichies (spiral phyllotaxis) of plants were frequently expressed as fractions involving Fibonacci numbers. Przemysław Prusinkiewicz advanced the idea that real instances can in part be understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars. A model for the pattern of florets in the head of a sunflower was proposed by Helmut Vogel in 1979. This has the form $\theta = \frac{2\pi}{\varphi^2} n$, $r = c \sqrt{n}$, where $n$ is the index number of the floret and $c$ is a constant scaling factor; the florets thus lie on Fermat's spiral. The divergence angle, approximately 137.51°, is the golden angle, dividing the circle in the golden ratio.
Because this ratio is irrational, no floret has a neighbor at exactly the same angle from the center, so the florets pack efficiently. Because the rational approximations to the golden ratio are of the form $F_{j+1} / F_j$, the nearest neighbors of floret number $n$ are those at $n \pm F_j$ for some index $j$, which depends on $r$, the distance from the center. Sunflowers and similar flowers most commonly have spirals of florets in clockwise and counter-clockwise directions in the amount of adjacent Fibonacci numbers, typically counted by the outermost range of radii.
Fibonacci numbers also appear in the ancestral pedigrees of bees (which are haplodiploids), according to the following rules: If an egg is laid but not fertilized, it produces a male (or drone bee in honeybees). If, however, an egg is fertilized, it produces a female. Thus, a male bee always has one parent, and a female bee has two. If one traces the pedigree of any male bee (1 bee), he has 1 parent (1 bee), 2 grandparents, 3 great-grandparents, 5 great-great-grandparents, and so on. This sequence of numbers of parents is the Fibonacci sequence. The number of ancestors at each level, $F_n$, is the number of female ancestors, which is $F_{n-1}$, plus the number of male ancestors, which is $F_{n-2}$. This is under the unrealistic assumption that the ancestors at each level are otherwise unrelated.
It has similarly been noticed that the number of possible ancestors on the human X chromosome inheritance line at a given ancestral generation also follows the Fibonacci sequence. A male individual has an X chromosome, which he received from his mother, and a Y chromosome, which he received from his father. The male counts as the "origin" of his own X chromosome ($F_1 = 1$), and at his parents' generation, his X chromosome came from a single parent ($F_2 = 1$). The male's mother received one X chromosome from her mother (the son's maternal grandmother), and one from her father (the son's maternal grandfather), so two grandparents contributed to the male descendant's X chromosome ($F_3 = 2$). The maternal grandfather received his X chromosome from his mother, and the maternal grandmother received X chromosomes from both of her parents, so three great-grandparents contributed to the male descendant's X chromosome ($F_4 = 3$). Five great-great-grandparents contributed to the male descendant's X chromosome ($F_5 = 5$), etc. (This assumes that all ancestors of a given descendant are independent, but if any genealogy is traced far enough back in time, ancestors begin to appear on multiple lines of the genealogy, until eventually a population founder appears on all lines of the genealogy.)
Other In optics, when a beam of light shines at an angle through two stacked transparent plates of different materials of different refractive indexes, it may reflect off three surfaces: the top, middle, and bottom surfaces of the two plates. The number of different beam paths that have $k$ reflections, for $k > 1$, is the $k$-th Fibonacci number. (However, when $k = 1$, there are three reflection paths, not two, one for each of the three surfaces.) Fibonacci retracement levels are widely used in technical analysis for financial market trading. Since the conversion factor 1.609344 for miles to kilometers is close to the golden ratio, the decomposition of distance in miles into a sum of Fibonacci numbers becomes nearly the kilometer sum when the Fibonacci numbers are replaced by their successors. This method amounts to a radix 2 number register in golden ratio base being shifted. To convert from kilometers to miles, shift the register down the Fibonacci sequence instead.
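The mile–kilometer shift just described can be sketched in a few lines of Python (an illustration of the technique under our own naming, not code from a cited source):

```python
FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # F(2)..F(13)

def miles_to_km_estimate(miles):
    """Approximate miles -> km (for whole-number distances below 233 miles)
    by writing the distance as a sum of Fibonacci numbers and then
    replacing each F(k) with its successor F(k+1)."""
    km = 0
    for i in range(len(FIB) - 2, -1, -1):   # greedy Zeckendorf-style decomposition
        if FIB[i] <= miles:
            miles -= FIB[i]
            km += FIB[i + 1]                # shift one step up the sequence
    return km

# 50 miles = 34 + 13 + 3  ->  55 + 21 + 5 = 81 km (exact value: about 80.5 km)
assert miles_to_km_estimate(50) == 81
```

The approximation works because each replacement multiplies a term by roughly $\varphi \approx 1.618$, which is close to the true conversion factor 1.609344.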
The measured values of voltages and currents in the infinite resistor chain circuit (also called the resistor ladder or infinite series-parallel circuit) follow the Fibonacci sequence. The intermediate results of adding the alternating series and parallel resistances yield fractions composed of consecutive Fibonacci numbers. The equivalent resistance of the entire circuit equals the golden ratio. Brasch et al. (2012) show how a generalized Fibonacci sequence also can be connected to the field of economics. In particular, it is shown how a generalized Fibonacci sequence enters the control function of finite-horizon dynamic optimisation problems with one state and one control variable. The procedure is illustrated in an example often referred to as the Brock–Mirman economic growth model. Mario Merz included the Fibonacci sequence in some of his artworks beginning in 1970. Joseph Schillinger (1895–1943) developed a system of composition which uses Fibonacci intervals in some of its melodies; he viewed these as the musical counterpart to the elaborate harmony evident within nature.
See also References External links
- An animation of the sequence, spiral, golden ratio, and rabbit pair growth, with examples in art, music, architecture, nature, and astronomy
- Periods of Fibonacci Sequences Mod m at MathPages
- Scientists find clues to the formation of Fibonacci spirals in nature
Fibonacci sequence
[ "Mathematics" ]
7,502
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recurrence relations", "Recreational mathematics", "Fibonacci numbers", "Mathematical objects", "Golden ratio", "Combinatorics", "Mathematical relations", "Articles containing proofs", "Numbers", "Number theory" ]
10,933
https://en.wikipedia.org/wiki/Functional%20programming
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification. Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8). History The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s. Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming. The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). 
Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced. Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features. Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language. APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q. In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language. John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming. The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene recursion equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML. In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming. In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types.
This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages. The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. Because Miranda was proprietary, a consensus formed in 1987 to create Haskell as an open standard for functional programming research; implementations have been released since 1990. More recently, functional programming has found use in niches such as parametric CAD via the OpenSCAD language, built on the CGAL framework, although OpenSCAD's restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept. Functional programming continues to be used in commercial settings. Concepts A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts. First-class and higher-order functions Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f. Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values). Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one. Pure functions Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: If the result of a pure expression is not used, it can be removed without affecting other expressions. If a pure function is called with arguments that cause no side effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.) If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe). If the entire language does not allow side effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation). 
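As a concrete illustration of partial application and of the caching that purity permits, here is a minimal Python sketch; the function names are illustrative only, not taken from any particular library:

from functools import lru_cache, partial

def add(x, y):
    return x + y

# Partial application: the successor function expressed as the
# addition operator partially applied to the number one.
successor = partial(add, 1)
assert successor(41) == 42

# A pure function's result depends only on its arguments, so
# results may be cached (memoized) without changing behavior.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(30) == 832040
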
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added the constexpr keyword with similar semantics. Recursion Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space linear in the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches. The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop, and that doing so is safe for space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will reclaim space, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop. Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages. Most general-purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally introduces inconsistency into the logic expressed by the language's type system. Some special-purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming. 
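To make the tail-recursion discussion concrete, here is a hedged Python sketch: the first function is tail-recursive, and the second shows the kind of loop a compiler with proper tail calls could, in effect, turn it into (Python itself performs no such optimization, so the recursive version can overflow the call stack for large n):

def sum_to(n, acc=0):
    # Tail-recursive: the recursive call is the last thing evaluated.
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)

def sum_to_loop(n, acc=0):
    # The iterative equivalent a tail-call-optimizing compiler can produce.
    while n != 0:
        n, acc = n - 1, acc + n
    return acc

assert sum_to(100) == sum_to_loop(100) == 5050
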
Strict versus non-strict evaluation Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression: print length([2+1, 3*2, 1/0, 5-4]) fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell. Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them. Type systems Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time, at the risk of false positive errors (rejecting some valid programs). This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time, at the risk of false negative errors (accepting some invalid programs, which are then rejected at runtime, once enough information is available). The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases. Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified. 
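As a small illustration of the Curry–Howard correspondence mentioned above, here is a sketch in Lean 4, assuming only its core library; the theorem is a program whose type is the proposition it proves, and Vec is a simple dependent type whose length index is checked by the compiler:

-- A proof as a program: the type `m + n = n + m` is the proposition,
-- and the term `Nat.add_comm m n` is its proof.
theorem addComm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- A dependent type: the length of a vector is part of its type,
-- so a length mismatch is a compile-time type error.
inductive Vec (α : Type) : Nat → Type where
  | nil : Vec α 0
  | cons (a : α) {n : Nat} (v : Vec α n) : Vec α (n + 1)
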
A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADTs are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#. Referential transparency Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent. Consider the C assignment statement x = x * 10; this changes the value assigned to the variable x. Say the initial value of x is 1; then two consecutive evaluations of x = x * 10 yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. By contrast, a function such as int plusone(int x) { return x + 1; } is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. Data structures Purely functional data structures are often represented differently from their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method results in only some nodes being created, while the rest are shared with the previous version. Comparison to imperative programming Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side effects and provides referential transparency. Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item. Imperative vs. functional programming The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable "result". 
Traditional imperative loop:

const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let result = 0;
for (let i = 0; i < numList.length; i++) {
  if (numList[i] % 2 === 0) {
    result += numList[i] * 10;
  }
}

Functional programming with higher-order functions:

const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
  .filter(n => n % 2 === 0)
  .map(a => a * 10)
  .reduce((a, b) => a + b, 0);

Sometimes the abstractions offered by functional programming might lead to development of more robust code that avoids certain issues that might arise when building upon a large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule). Simulating state There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries). Functional languages also simulate state by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged. Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations. Alternative methods such as Hoare logic and uniqueness typing have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit. Efficiency issues Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations. 
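The efficiency of immutable structures often rests on structural sharing. The following minimal Python sketch uses tuples as immutable cons cells; this encoding is for illustration only and is not how Clojure's tree-based persistent vectors actually work:

def cons(head, tail):
    # An immutable cons cell; None stands for the empty list.
    return (head, tail)

xs = cons(1, cons(2, cons(3, None)))   # the list [1, 2, 3]
ys = cons(0, xs[1])                    # "update" the head: [0, 2, 3]

# Both versions coexist (persistence), and the unchanged tail
# is shared rather than copied.
assert xs[1] is ys[1]
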
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Although the copying involved in updating persistent immutable data structures may seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability, which involves immutable references and a concept called lifetimes. Immutable data with separation of identity and state and shared-nothing schemes can also potentially be better suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations on immutable data are usually atomic, which can eliminate the need for locks. Some java.util.concurrent classes, for example, are implemented this way, as immutable variants of corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir and Akka. Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation, making extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles). Abstraction cost Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure:

(even? 5)
(.equals (mod 5 2) 0)

When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as:

(defn even?
  "Returns true if n is even, throws an exception if n is not an integer"
  {:added "1.0"
   :static true}
  [n]
  (if (integer? n)
    (zero? (bit-and (clojure.lang.RT/uncheckedLongCast n) 1))
    (throw (IllegalArgumentException. (str "Argument must be an integer: " n)))))

has a mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that can be attributed to the type checking and exception handling involved in the implementation of even?. Consider, for another instance, the lo library for Go, which implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining. 
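In the same spirit as the benchmarks above, the relative cost of higher-order abstractions can be probed in any language. This Python sketch is illustrative only, since the outcome depends heavily on the implementation, optimizer, and workload:

import timeit

data = list(range(1000))

def pipeline():
    # Higher-order style: filter and map fused into a generator, then sum.
    return sum(x * 10 for x in data if x % 2 == 0)

def loop():
    # The equivalent imperative loop.
    total = 0
    for x in data:
        if x % 2 == 0:
            total += x * 10
    return total

assert pipeline() == loop()
print(timeit.timeit(pipeline, number=10_000))
print(timeit.timeit(loop, number=10_000))
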
One distinguishing feature of Rust is zero-cost abstractions. This means that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into standalone assembly instructions, without the overhead of the loop-controlling code. If an iterative operation writes to an array, the resulting array's elements may be stored in specific CPU registers, allowing for constant-time access at runtime. Functional programming in non-functional languages It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions. JavaScript, Lua, Python and Go had first-class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin. In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style. In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements for closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes. In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#. Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript. Comparison to logic programming Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program:

mother(charles, elizabeth).
mother(harry, diana).

The program can be queried, like a functional program, to generate mothers from children:

?- mother(harry, X).
X = diana.
?- mother(charles, X).
X = elizabeth.

But it can also be queried backwards, to generate children:

?- mother(X, elizabeth).
X = charles.
?- mother(X, diana).
X = harry.

It can even be used to generate all instances of the mother relation:

?- mother(X, Y).
X = charles, Y = elizabeth.
X = harry, Y = diana. 
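A hedged Python analogue of the queries above: a plain dict encodes the function (child-to-mother only), while a set of pairs encodes the relation, which can be queried with any pattern of inputs and outputs; the encoding is invented for illustration, not taken from a library:

mother_fn = {"charles": "elizabeth", "harry": "diana"}   # function: one direction only

pairs = {("charles", "elizabeth"), ("harry", "diana")}   # relation: any direction

def query(child=None, parent=None):
    # Enumerate all (child, mother) pairs matching the given pattern;
    # None acts like an unbound logic variable.
    return [(c, m) for (c, m) in sorted(pairs)
            if child in (None, c) and parent in (None, m)]

print(query(child="harry"))       # [('harry', 'diana')]
print(query(parent="elizabeth"))  # [('charles', 'elizabeth')]
print(query())                    # all instances of the relation
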
Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form:

maternal_grandmother(X) = mother(mother(X)).

The same definition in relational notation needs to be written in the unnested form:

maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).

Here ":-" means "if" and "," means "and". However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:

grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).
mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.
?- grandparent(X,Y).
X = harry, Y = elizabeth.
X = harry, Y = phillip.

Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy. Applications Text editors Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. Richard Stallman, the original author of the most popular Emacs implementation, GNU Emacs, and of Emacs Lisp, considers Lisp one of his favorite programming languages. Helix, since version 24.03, supports previewing the AST as S-expressions, which are also a core feature of the Lisp programming language family. Spreadsheets Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far they remain primarily academic in nature. Microservices Due to their composability, functional programming paradigms can be suitable for microservices-based architectures. Academia Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming. Industry Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming. Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. 
Scala has been widely used in data science, while ClojureScript, Elm and PureScript are functional frontend programming languages used in production. Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and Allegro Lokalnie, the classified-ads platform of Allegro (one of the biggest e-commerce platforms in Poland). Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory. Education Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods. Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. In particular, Scheme has been a relatively popular choice for teaching programming for years. See also Eager evaluation Functional reactive programming Inductive functional programming List of functional programming languages List of functional programming topics Nested function Purely functional programming Notes and references Further reading Cousineau, Guy and Michel Mauny. The Functional Approach to Programming. Cambridge, UK: Cambridge University Press, 1998. Curry, Haskell Brooks and Feys, Robert and Craig, William. Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958. Dominus, Mark Jason. Higher-Order Perl. Morgan Kaufmann. 2005. Graham, Paul. ANSI Common LISP. Englewood Cliffs, New Jersey: Prentice Hall, 1996. MacLennan, Bruce J. Functional Programming: Practice and Theory. Addison-Wesley, 1990. Pratt, Terrence W. and Marvin Victor Zelkowitz. Programming Languages: Design and Implementation. 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1996. Salus, Peter H. Functional and Logic Programming Languages. Vol. 4 of Handbook of Programming Languages. Indianapolis, Indiana: Macmillan Technical Publishing, 1998. Thompson, Simon. Haskell: The Craft of Functional Programming. Harlow, England: Addison-Wesley Longman Limited, 1996. External links An introduction Functional programming in Python (by David Mertz): part 1, part 2, part 3
Functional programming
[ "Technology" ]
7,472
[ "Programming language comparisons", "Computing comparisons" ]
10,939
https://en.wikipedia.org/wiki/Formal%20language
In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar. The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules. In computer science, formal languages are used, among others, as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way. The field of formal language theory studies primarily the purely syntactic aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages. History In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs. Later, Carl Friedrich Gauss investigated the problem of Gauss codes. Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in Begriffsschrift (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903). This described a "formal language of pure thought." In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed 'Thue Systems', and gave an early example of an undecidable problem. Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble", and later devised the canonical system for the creation of formal languages. In 1907, Leonardo Torres Quevedo introduced in Vienna a formal language for the description of mechanical drawings (mechanical devices). He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines"). Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools. Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy. In 1959 John Backus developed the Backus–Naur form to describe the syntax of a high-level programming language, following his work in the creation of FORTRAN. Peter Naur was the secretary/editor for the ALGOL60 Report in which he used Backus–Naur form to describe the formal part of ALGOL60. 
Words over an alphabet An alphabet, in the context of formal languages, can be any set; its elements are called letters. An alphabet may contain an infinite number of elements; however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode. A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word. In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor. Definition A formal language L over an alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of 'well-formed expressions'. In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant. While formal language theory usually concerns itself with formal languages that are described by some syntactic rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being accompanied with a formal grammar that describes it. Examples The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}: Every nonempty string that does not contain "+" or "=" and does not start with "0" is in L. The string "0" is in L. A string containing "=" is in L if and only if there is exactly one "=", and it separates two valid strings of L. A string containing "+" but not "=" is in L if and only if every "+" in the string separates two valid strings of L. No string is in L other than those implied by the previous rules. Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, "+" means addition, "23+4=555" is false, etc. Constructions For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅). 
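Before moving on from finite descriptions, here is a hedged Python sketch of a recognizer for the example language L defined above; the regular-expression encoding is one possible implementation choice, not part of the definition:

import re

NUM = r"(0|[1-9][0-9]*)"               # "0", or a digit string not starting with "0"
SUM = rf"{NUM}(\+{NUM})*"              # numerals separated by "+"
LANG = re.compile(rf"{SUM}(={SUM})?")  # at most one "=", separating two valid strings

def in_L(s: str) -> bool:
    return LANG.fullmatch(s) is not None

assert in_L("23+4=555")
assert not in_L("=234=+")
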
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {a, b, ab, cba}. Here are some examples of formal languages: L = Σ*, the set of all words over Σ; L = {a}* = {aⁿ}, where n ranges over the natural numbers and "aⁿ" means "a" repeated n times (this is the set of words consisting only of the symbol "a"); the set of syntactically correct programs in a given programming language (the syntax of which is usually defined by a context-free grammar); the set of inputs upon which a certain Turing machine halts; or the set of maximal strings of alphanumeric ASCII characters on this line, i.e., the set {the, set, of, maximal, strings, alphanumeric, ASCII, characters, on, this, line, i, e}. Language-specification formalisms Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as those strings generated by some formal grammar; those strings described or matched by a particular regular expression; those strings accepted by some automaton, such as a Turing machine or finite-state automaton; those strings for which some decision procedure (an algorithm that asks a sequence of related YES/NO questions) produces the answer YES. Typical questions asked about such formalisms include: What is their expressive power? (Can formalism X describe every language that formalism Y can describe? Can it describe other languages?) What is their recognizability? (How difficult is it to decide whether a given word belongs to a language described by formalism X?) What is their comparability? (How difficult is it to decide whether two languages, one described in formalism X and one in formalism Y, or in X again, are actually the same language?). Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications. Operations on languages Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations. Examples: suppose L₁ and L₂ are languages over some common alphabet Σ. The concatenation L₁L₂ consists of all strings of the form vw where v is a string from L₁ and w is a string from L₂. The intersection L₁ ∩ L₂ of L₁ and L₂ consists of all strings that are contained in both languages. The complement ¬L of a language L with respect to a given alphabet Σ consists of all strings over Σ that are not in L. 
The Kleene star: the language consisting of all words that are concatenations of zero or more words in the original language; Reversal: let ε be the empty word; then ε^R = ε, and for each non-empty word w = σ₁…σₙ (where σ₁, …, σₙ are elements of some alphabet), let w^R = σₙ…σ₁; then for a formal language L, L^R = {w^R : w ∈ L}; String homomorphism. Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right. [Table: closure properties of the regular, DCFL, CFL, IND, CSL, recursive and RE language families under union, intersection, complement, concatenation, Kleene star, (ε-free) string homomorphism, substitution, inverse homomorphism, reverse, and intersection with a regular language; after Hopcroft and Ullman.] Applications Programming languages A compiler usually has two distinct components. A lexical analyzer, sometimes generated by a tool like lex, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like yacc, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built. Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute. Formal theories, systems, and proofs In mathematical logic, a formal theory is a set of sentences expressed in a formal language. A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems FS₁ and FS₂ may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not in the other, for instance). 
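To make the notion of a deductive apparatus concrete, here is a toy sketch in Python; the axioms and the string encoding of formulas are invented for illustration only:

# A toy formal system: formulas are strings; the deductive apparatus
# consists of a set of axioms and one inference rule (modus ponens).
axioms = {"p", "p->q", "q->r"}

def modus_ponens(theorems):
    # One round of forward chaining: from A and "A->B", derive B.
    derived = set(theorems)
    for f in theorems:
        if "->" in f:
            antecedent, consequent = f.split("->", 1)
            if antecedent in theorems:
                derived.add(consequent)
    return derived

# Close the axioms under the rule to enumerate the theorems.
theorems = axioms
while True:
    new = modus_ponens(theorems)
    if new == theorems:
        break
    theorems = new

print(theorems)  # {'p', 'p->q', 'q->r', 'q', 'r'}
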
A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions) each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of the formal system. Formal proofs are useful because their theorems can be interpreted as true propositions. Interpretations and models Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas—usually, a truth value. The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes true. See also Combinatorics on words Formal method Free monoid Grammar framework Mathematical notation String (computer science) Notes References Citations Sources Works cited General references A. G. Hamilton, Logic for Mathematicians, Cambridge University Press, 1978. Seymour Ginsburg, Algebraic and automata theoretic properties of formal languages, North-Holland, 1975. Michael A. Harrison, Introduction to Formal Language Theory, Addison-Wesley, 1978. Grzegorz Rozenberg, Arto Salomaa, Handbook of Formal Languages: Volume I-III, Springer, 1997. Patrick Suppes, Introduction to Logic, D. Van Nostrand, 1957. External links University of Maryland, Formal Language Definitions James Power, "Notes on Formal Language Theory and Parsing", 29 November 2002. Drafts of some chapters in the "Handbook of Formal Language Theory", Vol. 1–3, G. Rozenberg and A. Salomaa (eds.), Springer Verlag, (1997): Alexandru Mateescu and Arto Salomaa, "Preface" in Vol.1, pp. v–viii, and "Formal Languages: An Introduction and a Synopsis", Chapter 1 in Vol. 1, pp. 1–39 Sheng Yu, "Regular Languages", Chapter 2 in Vol. 1 Jean-Michel Autebert, Jean Berstel, Luc Boasson, "Context-Free Languages and Push-Down Automata", Chapter 3 in Vol. 1 Christian Choffrut and Juhani Karhumäki, "Combinatorics of Words", Chapter 6 in Vol. 1 Tero Harju and Juhani Karhumäki, "Morphisms", Chapter 7 in Vol. 1, pp. 439–510 Jean-Eric Pin, "Syntactic semigroups", Chapter 10 in Vol. 1, pp. 679–746 M. Crochemore and C. Hancart, "Automata for matching patterns", Chapter 9 in Vol. 2 Dora Giammarresi, Antonio Restivo, "Two-dimensional Languages", Chapter 4 in Vol. 3, pp. 215–267
Formal language
[ "Mathematics" ]
3,721
[ "Mathematical linguistics", "Theoretical computer science", "Applied mathematics", "Mathematical logic", "Formal languages", "Combinatorics", "Combinatorics on words" ]
10,949
https://en.wikipedia.org/wiki/Four%20color%20theorem
In mathematics, the four color theorem, or the four color map theorem, states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. Adjacent means that two regions share a common boundary of non-zero length (i.e., not merely a corner where three or more regions meet). It was the first major theorem to be proved using a computer. Initially, this proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. The proof has gained wide acceptance since then, although some doubts remain. The theorem is a stronger version of the five color theorem, which can be shown using a significantly simpler argument. Although the weaker five color theorem was proven already in the 1800s, the four color theorem resisted until 1976, when it was proven by Kenneth Appel and Wolfgang Haken. This came after many false proofs and mistaken counterexamples in the preceding decades. The Appel–Haken proof proceeds by analyzing a very large number of reducible configurations. This was improved upon in 1997 by Robertson, Sanders, Seymour, and Thomas, who managed to decrease the number of such configurations to 633 – still an extremely long case analysis. In 2005, the theorem was verified by Georges Gonthier using general-purpose theorem-proving software. Formulation In graph-theoretic terms, the theorem states that for every loopless planar graph G, its chromatic number satisfies χ(G) ≤ 4. The intuitive statement of the four color theorem – "given any separation of a plane into contiguous regions, the regions can be colored using at most four colors so that no two adjacent regions have the same color" – needs to be interpreted appropriately to be correct. First, regions are adjacent if they share a boundary segment; two regions that share only isolated boundary points are not considered adjacent. (Otherwise, a map in the shape of a pie chart would make an arbitrarily large number of regions 'adjacent' to each other at a common corner, and require an arbitrarily large number of colors as a result.) Second, bizarre regions, such as those with finite area but infinitely long perimeter, are not allowed; maps with such regions can require more than four colors. (To be safe, we can restrict to regions whose boundaries consist of finitely many straight line segments. A region is allowed to have enclaves, that is, to entirely surround one or more other regions.) Note that the notion of "contiguous region" (technically: connected open subset of the plane) is not the same as that of a "country" on regular maps, since countries need not be contiguous (they may have exclaves; e.g., the Cabinda Province as part of Angola, Nakhchivan as part of Azerbaijan, Kaliningrad as part of Russia, France with its overseas territories, and Alaska as part of the United States are not contiguous). If we required the entire territory of a country to receive the same color, then four colors are not always sufficient. For instance, consider a simplified map: In this map, the two regions labeled A belong to the same country. If we wanted those regions to receive the same color, then five colors would be required, since the two A regions together are adjacent to four other regions, each of which is adjacent to all the others. A simpler statement of the theorem uses graph theory. 
The set of regions of a map can be represented more abstractly as an undirected graph that has a vertex for each region and an edge for every pair of regions that share a boundary segment. This graph is planar: it can be drawn in the plane without crossings by placing each vertex at an arbitrarily chosen location within the region to which it corresponds, and by drawing the edges as curves without crossings that lead from one region's vertex, across a shared boundary segment, to an adjacent region's vertex. Conversely any planar graph can be formed from a map in this way. In graph-theoretic terminology, the four-color theorem states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color, or for short: every planar graph is four-colorable. History Early proof attempts As far as is known, the conjecture was first proposed on October 23, 1852, when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. At the time, Guthrie's brother, Frederick, was a student of Augustus De Morgan (the former advisor of Francis) at University College London. Francis asked Frederick about it, and Frederick took the question to De Morgan. (Francis Guthrie graduated later in 1852, and later became a professor of mathematics in South Africa.) According to De Morgan: A student of mine [Guthrie] asked me to day to give him a reason for a fact which I did not know was a fact—and do not yet. He says that if a figure be any how divided and the compartments differently colored so that figures with any portion of common boundary line are differently colored—four colors may be wanted but not more—the following is his case in which four colors are wanted. Query cannot a necessity for five or more be invented... "F.G.", perhaps one of the two Guthries, published the question in The Athenaeum in 1854, and De Morgan posed the question again in the same magazine in 1860. Another early published reference, by Arthur Cayley (1879), in turn credits the conjecture to De Morgan. There were several early failed attempts at proving the theorem. De Morgan believed that it followed from a simple fact about four regions, though he didn't believe that fact could be derived from more elementary facts. This arises in the following way. We never need four colours in a neighborhood unless there be four counties, each of which has boundary lines in common with each of the other three. Such a thing cannot happen with four areas unless one or more of them be inclosed by the rest; and the colour used for the inclosed county is thus set free to go on with. Now this principle, that four areas cannot each have common boundary with all the other three without inclosure, is not, we fully believe, capable of demonstration upon anything more evident and more elementary; it must stand as a postulate. One proposed proof was given by Alfred Kempe in 1879, which was widely acclaimed; another was given by Peter Guthrie Tait in 1880. It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood, and in 1891, Tait's proof was shown incorrect by Julius Petersen—each false proof stood unchallenged for 11 years. In 1890, in addition to exposing the flaw in Kempe's proof, Heawood proved the five color theorem and generalized the four color conjecture to surfaces of arbitrary genus. 
Tait, in 1880, showed that the four color theorem is equivalent to the statement that a certain type of graph (called a snark in modern terminology) must be non-planar. In 1943, Hugo Hadwiger formulated the Hadwiger conjecture, a far-reaching generalization of the four-color problem that still remains unsolved. Proof by computer During the 1960s and 1970s, German mathematician Heinrich Heesch developed methods of using computers to search for a proof. Notably he was the first to use discharging for proving the theorem, which turned out to be important in the unavoidability portion of the subsequent Appel–Haken proof. He also expanded on the concept of reducibility and, along with Ken Durre, developed a computer test for it. Unfortunately, at this critical juncture, he was unable to procure the necessary supercomputer time to continue his work. Others took up his methods, including his computer-assisted approach. While other teams of mathematicians were racing to complete proofs, Kenneth Appel and Wolfgang Haken at the University of Illinois announced, on June 21, 1976, that they had proved the theorem. They were assisted in some algorithmic work by John A. Koch. If the four-color conjecture were false, there would be at least one map with the smallest possible number of regions that requires five colors. The proof showed that such a minimal counterexample cannot exist, through the use of two technical concepts: An unavoidable set is a set of configurations such that every map that satisfies some necessary conditions for being a minimal non-4-colorable triangulation (such as having minimum degree 5) must have at least one configuration from this set. A reducible configuration is an arrangement of countries that cannot occur in a minimal counterexample. If a map contains a reducible configuration, the map can be reduced to a smaller map. This smaller map has the condition that if it can be colored with four colors, this also applies to the original map. This implies that if the original map cannot be colored with four colors the smaller map cannot either and so the original map is not minimal. Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,834 reducible configurations (later reduced to 1,482) which had to be checked one by one by computer and took over a thousand hours. This reducibility part of the work was independently double checked with different programs and computers. However, the unavoidability part of the proof was verified in over 400 pages of microfiche, which had to be checked by hand with the assistance of Haken's daughter Dorothea Blostein. Appel and Haken's announcement was widely reported by the news media around the world, and the math department at the University of Illinois used a postmark stating "Four colors suffice." At the same time the unusual nature of the proof—it was the first major theorem to be proved with extensive computer assistance—and the complexity of the human-verifiable portion aroused considerable controversy. In the early 1980s, rumors spread of a flaw in the Appel–Haken proof. Ulrich Schmidt at RWTH Aachen had examined Appel and Haken's proof for his master's thesis that was published in 1981. 
He had checked about 40% of the unavoidability portion and found a significant error in the discharging procedure. In 1986, Appel and Haken were asked by the editor of Mathematical Intelligencer to write an article addressing the rumors of flaws in their proof. They replied that the rumors were due to a "misinterpretation of [Schmidt's] results" and obliged with a detailed article. Their magnum opus, Every Planar Map is Four-Colorable, a book claiming a complete and detailed proof (with a microfiche supplement of over 400 pages), appeared in 1989; it explained and corrected the error discovered by Schmidt as well as several further errors found by others. Simplification and verification Since the theorem was proved, a new approach has led to both a shorter proof and a more efficient algorithm for 4-coloring maps. In 1996, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas created a quadratic-time algorithm (requiring only O(n²) time, where n is the number of vertices), improving on a quartic-time algorithm based on Appel and Haken's proof. The new proof, based on the same ideas, is similar to Appel and Haken's but more efficient because it reduces the complexity of the problem and requires checking only 633 reducible configurations. Both the unavoidability and reducibility parts of this new proof must be executed by a computer and are impractical to check by hand. In 2001, the same authors announced an alternative proof, by proving the snark conjecture. This proof remains unpublished, however. In 2005, Benjamin Werner and Georges Gonthier formalized a proof of the theorem inside the Coq proof assistant. This removed the need to trust the various computer programs used to verify particular cases; it is only necessary to trust the Coq kernel. Summary of proof ideas The following discussion is a summary based on the introduction to Every Planar Map is Four Colorable. Although flawed, Kempe's original purported proof of the four color theorem provided some of the basic tools later used to prove it. The explanation here is reworded in terms of the modern graph theory formulation above. Kempe's argument goes as follows. First, if planar regions separated by the graph are not triangulated (i.e., do not have exactly three edges in their boundaries), we can add edges without introducing new vertices in order to make every region triangular, including the unbounded outer region. If this triangulated graph is colorable using four colors or fewer, so is the original graph since the same coloring is valid if edges are removed. So it suffices to prove the four color theorem for triangulated graphs to prove it for all planar graphs, and without loss of generality we assume the graph is triangulated. Suppose v, e, and f are the number of vertices, edges, and regions (faces). Since each region is triangular and each edge is shared by two regions, we have that 2e = 3f. This together with Euler's formula, v − e + f = 2, can be used to show that 6v − 2e = 12. Now, the degree of a vertex is the number of edges abutting it. If vₙ is the number of vertices of degree n and D is the maximum degree of any vertex, then, since ∑vᵢ = v and ∑i·vᵢ = 2e (both sums taken over i from 1 to D), it follows that ∑(6 − i)vᵢ = 6v − 2e = 12. But since 12 > 0 and 6 − i ≤ 0 for all i ≥ 6, this demonstrates that there is at least one vertex of degree 5 or less. If there is a graph requiring 5 colors, then there is a minimal such graph, where removing any vertex makes it four-colorable. Call this graph G. 
Then G cannot have a vertex of degree 3 or less, because if d(v) ≤ 3, we can remove v from G, four-color the smaller graph, then add back v and extend the four-coloring to it by choosing a color different from its neighbors. Kempe also showed correctly that G can have no vertex of degree 4. As before, we remove the vertex v and four-color the remaining vertices. If all four neighbors of v are different colors, say red, green, blue, and yellow in clockwise order, we look for an alternating path of vertices colored red and blue joining the red and blue neighbors. Such a path is called a Kempe chain. There may be a Kempe chain joining the red and blue neighbors, and there may be a Kempe chain joining the green and yellow neighbors, but not both, since these two paths would necessarily intersect, and the vertex where they intersect cannot be colored. Suppose it is the red and blue neighbors that are not chained together. Explore all vertices attached to the red neighbor by red-blue alternating paths, and then reverse the colors red and blue on all these vertices. The result is still a valid four-coloring, and v can now be added back and colored red. This leaves only the case where G has a vertex of degree 5; but Kempe's argument was flawed for this case. Heawood noticed Kempe's mistake and also observed that if one were satisfied with proving only five colors are needed, one could run through the above argument (changing only that the minimal counterexample requires 6 colors) and use Kempe chains in the degree 5 situation to prove the five color theorem. In any case, to deal with this degree 5 vertex case requires a more complicated notion than removing a vertex. Rather, the form of the argument is generalized to considering configurations, which are connected subgraphs of G with the degree of each vertex (in G) specified. For example, the case described in the degree 4 vertex situation above is the configuration consisting of a single vertex labelled as having degree 4 in G. As above, it suffices to demonstrate that if the configuration is removed and the remaining graph four-colored, then the coloring can be modified in such a way that when the configuration is re-added, the four-coloring can be extended to it as well. A configuration for which this is possible is called a reducible configuration. If at least one of a set of configurations must occur somewhere in G, that set is called unavoidable. The argument above began by giving an unavoidable set of five configurations (a single vertex with degree 1, a single vertex with degree 2, ..., a single vertex with degree 5) and then proceeded to show that the first 4 are reducible; to exhibit an unavoidable set of configurations where every configuration in the set is reducible would prove the theorem. Because G is triangular, and because the degree of each vertex in a configuration and all edges internal to the configuration are known, the number of vertices in G adjacent to a given configuration is fixed, and they are joined in a cycle. These vertices form the ring of the configuration; a configuration with k vertices in its ring is a k-ring configuration, and the configuration together with its ring is called the ringed configuration. As in the simple cases above, one may enumerate all distinct four-colorings of the ring; any coloring that can be extended without modification to a coloring of the configuration is called initially good. For example, the single-vertex configurations above with three or fewer neighbors were initially good. 
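Kempe's chain-swap step is mechanical enough to state in code. The sketch below is purely illustrative and not taken from any published proof: it assumes the graph is given as a Python adjacency dictionary, and all names are invented for this example. It finds the connected component of two chosen colors containing a given vertex and swaps those colors throughout it; edges leaving the component cannot be violated, since their far endpoints carry other colors.

```python
from collections import deque

def kempe_swap(adj, coloring, start, color_a, color_b):
    """Swap color_a and color_b on the Kempe chain containing start.

    adj maps each vertex to an iterable of its neighbors, and coloring
    maps each vertex to its current color.  Only vertices colored
    color_a or color_b are explored, so the rest of the coloring is
    left untouched and remains proper.
    """
    if coloring[start] not in (color_a, color_b):
        return
    chain, queue = {start}, deque([start])
    while queue:  # breadth-first search over the two-colored subgraph
        v = queue.popleft()
        for w in adj[v]:
            if w not in chain and coloring[w] in (color_a, color_b):
                chain.add(w)
                queue.append(w)
    swap = {color_a: color_b, color_b: color_a}
    for v in chain:
        coloring[v] = swap[coloring[v]]

def is_proper(adj, coloring):
    """Check that no edge joins two vertices of the same color."""
    return all(coloring[v] != coloring[w] for v in adj for w in adj[v])
```

In the degree 4 case above, if the red and blue neighbors of the removed vertex are not joined by a red–blue chain, swapping the two colors on the chain containing the red neighbor leaves the removed vertex with no red neighbor, so it can be colored red.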
In general, the surrounding graph must be systematically recolored to turn the ring's coloring into a good one, as was done in the case above where there were 4 neighbors; for a general configuration with a larger ring, this requires more complex techniques. Because of the large number of distinct four-colorings of the ring, this is the primary step requiring computer assistance. Finally, it remains to identify an unavoidable set of configurations amenable to reduction by this procedure. The primary method used to discover such a set is the method of discharging. The intuitive idea underlying discharging is to consider the planar graph as an electrical network. Initially, positive and negative "electrical charge" is distributed amongst the vertices so that the total is positive. Recall the formula above: each vertex is assigned an initial charge of 6 − deg(v), so that the total charge over all vertices is 12. Then one "flows" the charge by systematically redistributing the charge from a vertex to its neighboring vertices according to a set of rules, the discharging procedure. Since total charge is preserved, some vertices still have positive charge after the redistribution. The rules restrict the possibilities for configurations of positively charged vertices, so enumerating all such possible configurations gives an unavoidable set. As long as some member of the unavoidable set is not reducible, the discharging procedure is modified to eliminate it (while introducing other configurations). Appel and Haken's final discharging procedure was extremely complex and, together with a description of the resulting unavoidable configuration set, filled a 400-page volume, but the configurations it generated could be checked mechanically to be reducible. Verifying the volume describing the unavoidable configuration set itself was done by peer review over a period of several years. A technical detail not discussed here but required to complete the proof is immersion reducibility. False disproofs The four color theorem has been notorious for attracting a large number of false proofs and disproofs in its long history. At first, The New York Times refused, as a matter of policy, to report on the Appel–Haken proof, fearing that the proof would be shown false like the ones before it. Some alleged proofs, like Kempe's and Tait's mentioned above, stood under public scrutiny for over a decade before they were refuted. But many more, authored by amateurs, were never published at all. Generally, the simplest, though invalid, counterexamples attempt to create one region which touches all other regions. This forces the remaining regions to be colored with only three colors. Because the four color theorem is true, this is always possible; however, because the person drawing the map is focused on the one large region, they fail to notice that the remaining regions can in fact be colored with three colors. This trick can be generalized: there are many maps where if the colors of some regions are selected beforehand, it becomes impossible to color the remaining regions without exceeding four colors. A casual verifier of the counterexample may not think to change the colors of these regions, so that the counterexample will appear as though it is valid. Perhaps one effect underlying this common misconception is the fact that the color restriction is not transitive: a region only has to be colored differently from regions it touches directly, not regions touching regions that it touches. If this were the restriction, planar graphs would require arbitrarily large numbers of colors. 
Other false disproofs violate the assumptions of the theorem, such as using a region that consists of multiple disconnected parts, or disallowing regions of the same color from touching at a point. Three-coloring While every planar map can be colored with four colors, it is NP-complete to decide whether an arbitrary planar map can be colored with just three colors. A cubic map can be colored with only three colors if and only if each interior region has an even number of neighboring regions. In the US states map example, landlocked Missouri (MO) has eight neighbors (an even number): it must be differently colored from all of them, but the neighbors can alternate colors, thus this part of the map needs only three colors. However, landlocked Nevada (NV) has five neighbors (an odd number): these neighbors require three colors, and it must be differently colored from them, thus four colors are needed here. Generalizations Infinite graphs The four color theorem applies not only to finite planar graphs, but also to infinite graphs that can be drawn without crossings in the plane, and even more generally to infinite graphs (possibly with an uncountable number of vertices) for which every finite subgraph is planar. To prove this, one can combine a proof of the theorem for finite planar graphs with the De Bruijn–Erdős theorem stating that, if every finite subgraph of an infinite graph is k-colorable, then the whole graph is also k-colorable. This can also be seen as an immediate consequence of Kurt Gödel's compactness theorem for first-order logic, simply by expressing the colorability of an infinite graph with a set of logical formulae. Higher surfaces One can also consider the coloring problem on surfaces other than the plane. The problem on the sphere or cylinder is equivalent to that on the plane. For closed (orientable or non-orientable) surfaces with positive genus, the maximum number p of colors needed depends on the surface's Euler characteristic χ according to the formula p = ⌊(7 + √(49 − 24χ))/2⌋, where the outermost brackets denote the floor function. Alternatively, for an orientable surface the formula can be given in terms of the genus of the surface, g: p = ⌊(7 + √(1 + 48g))/2⌋. This formula, the Heawood conjecture, was proposed by P. J. Heawood in 1890 and, after contributions by several people, proved by Gerhard Ringel and J. W. T. Youngs in 1968. The only exception to the formula is the Klein bottle, which has Euler characteristic 0 (hence the formula gives p = 7) but requires only 6 colors, as shown by Philip Franklin in 1934. For example, the torus has Euler characteristic χ = 0 (and genus g = 1) and thus p = 7, so no more than 7 colors are required to color any map on a torus. This upper bound of 7 is sharp: certain toroidal polyhedra such as the Szilassi polyhedron require seven colors. A Möbius strip requires six colors, as do 1-planar graphs (graphs drawn with at most one simple crossing per edge). If both the vertices and the faces of a planar graph are colored, in such a way that no two adjacent vertices, no two adjacent faces, and no incident vertex–face pair have the same color, then again at most six colors are needed. For graphs whose vertices are represented as pairs of points on two distinct surfaces, with edges drawn as non-crossing curves on one of the two surfaces, the chromatic number can be at least 9 and is at most 12, but more precise bounds are not known; this is Gerhard Ringel's Earth–Moon problem. Solid regions There is no obvious extension of the coloring result to three-dimensional solid regions. 
By using a set of n flexible rods, one can arrange that every rod touches every other rod. The set would then require n colors, or n+1 including the empty space that also touches every rod. The number n can be taken to be any integer, as large as desired. Such examples were known to Frederick Guthrie in 1880. Even for axis-parallel cuboids (considered to be adjacent when two cuboids share a two-dimensional boundary area), an unbounded number of colors may be necessary. Relation to other areas of mathematics Dror Bar-Natan gave a statement concerning Lie algebras and Vassiliev invariants which is equivalent to the four color theorem. Use outside of mathematics Despite the motivation from coloring political maps of countries, the theorem is not of particular interest to cartographers. According to an article by the math historian Kenneth May, "Maps utilizing only four colors are rare, and those that do usually require only three. Books on cartography and the history of mapmaking do not mention the four-color property". The theorem also does not guarantee the usual cartographic requirement that non-contiguous regions of the same country (such as the exclave Alaska and the rest of the United States) be colored identically. Because the four-color theorem does not apply when the regions on the map are not contiguous, it also does not apply to the world map. On the world map, the ocean, Belgium, Germany, the Netherlands, and France all border each other because the Netherlands borders France on the island of Saint Martin. This is the only counterexample. See also Apollonian network Five color theorem Graph coloring Grötzsch's theorem: triangle-free planar graphs are 3-colorable. Hadwiger–Nelson problem: how many colors are needed to color the plane so that no two points at unit distance apart have the same color? Notes References External links List of generalizations of the four color theorem on MathOverflow Computer-assisted proofs Graph coloring Planar graphs Theorems in graph theory
Four color theorem
[ "Mathematics" ]
5,566
[ "Graph coloring", "Statements about planar graphs", "Computer-assisted proofs", "Planar graphs", "Graph theory", "Theorems in discrete mathematics", "Mathematical relations", "Planes (geometry)", "Theorems in graph theory" ]
10,969
https://en.wikipedia.org/wiki/Field-programmable%20gate%20array
A field-programmable gate array (FPGA) is a type of configurable integrated circuit that can be repeatedly programmed after manufacturing. FPGAs are a subset of logic devices referred to as programmable logic devices (PLDs). They consist of an array of programmable logic blocks with a connecting grid that can be configured "in the field" to interconnect with other logic blocks to perform various digital functions. FPGAs are often used in limited (low) quantity production of custom-made products, and in research and development, where the higher cost of individual FPGAs is not as important, and where creating and manufacturing a custom circuit would not be feasible. Other applications for FPGAs include the telecommunications, automotive, aerospace, and industrial sectors, which benefit from their flexibility, high signal processing speed, and parallel processing abilities. An FPGA configuration is generally written using a hardware description language (HDL), e.g. VHDL, similar to the ones used for application-specific integrated circuits (ASICs). Circuit diagrams were formerly used to write the configuration. The logic blocks of an FPGA can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more sophisticated blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software. FPGAs also have a role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture. FPGAs are also commonly used during the development of ASICs to speed up the simulation process. History The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable). Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultraviolet lamp on the die to erase the EPROM cells that held the device configuration. Xilinx produced the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs). In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992. Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s, when competitors sprouted up, eroding a significant portion of their market share. By 1993, Actel (later Microsemi, now Microchip) was serving about 18 percent of the market. The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. 
By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications. By 2013, Altera (31 percent), Xilinx (36 percent) and Actel (10 percent) together represented approximately 77 percent of the FPGA market. Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver. Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform. Growth The following timelines indicate progress in different aspects of FPGA design. Gates 1987: 9,000 gates, Xilinx 1992: 600,000, Naval Surface Warfare Center Early 2000s: millions 2013: 50 million, Xilinx Market size 1985: first commercial FPGA: Xilinx XC2064 1987: $14 million : >$385 million 2005: $1.9 billion 2010 estimates: $2.75 billion 2013: $5.4 billion 2020 estimate: $9.8 billion 2030 estimate: $23.34 billion Design starts A design start is a new custom design for implementation on an FPGA. 2005: 80,000 2008: 90,000 Design Contemporary FPGAs have ample logic gates and RAM blocks to implement complex digital computations. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design, and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost) offer advantages for many applications. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning helps resource allocation within FPGAs to meet these timing constraints. Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin. This allows the user to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillator driver circuitry, on-chip RC oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few mixed signal FPGAs have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks, allowing them to operate as a system on a chip (SoC). Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric. Logic blocks The most common FPGA architecture consists of an array of logic blocks called configurable logic blocks (CLBs) or logic array blocks (LABs) (depending on vendor), I/O pads, and routing channels. Generally, all the routing channels have the same width (number of signals). Multiple I/O pads may fit into the height of one row or the width of one column in the array. 
"An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing channels needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing channels increase the cost (and decrease the performance) of the FPGA without providing any benefit, FPGA manufacturers try to provide just enough channels so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs." In general, a logic block consists of a few logical cells. A typical cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop. The LUT might be split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the first multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be either synchronous or asynchronous, depending on the programming of the third mux. In practice, the entire adder or parts of it are stored as functions into the LUTs in order to save space. Hard blocks Modern FPGA families expand upon the above capabilities to include higher-level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased performance compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high-speed I/O logic and embedded memories. Higher-end FPGAs can contain high-speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI or PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high-performance signal conditioning circuitry along with high-speed serializers and deserializers, components that cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA. Soft core An alternate approach to using hard macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at run time, which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip. 
Integration In 2012, the coarse-grained architectural approach was taken a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete system on a programmable chip. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-to-digital converters and digital-to-analog converters in their flash memory-based FPGA fabric. Clocking Most of the logic inside an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset, typically implemented as an H tree, so they can be delivered with minimal skew. FPGAs may contain analog phase-locked loop or delay-locked loop components to synthesize new clock frequencies and manage jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separate clock domains. These clock signals can be generated locally by an oscillator or they can be recovered from a data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. Some FPGAs contain dual-port RAM blocks that are capable of working with different clocks, aiding in the construction of FIFOs and dual-port buffers that bridge clock domains. 3D architectures To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures. Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines would be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies. Xilinx's approach stacks several (three or four) active FPGA dies side by side on a silicon interposer – a single piece of silicon that carries passive interconnect. The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high-speed 28 Gbit/s serial transceivers. An FPGA built in this way is called a heterogeneous FPGA. Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other dies and technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB) technology. Programming To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is more suited to work with large structures because it is possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules. Using an electronic design automation tool, a technology-mapped netlist is generated. 
The netlist can then be fit to the actual FPGA architecture using a process called place and route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the results using timing analysis, simulation, and other verification and validation techniques. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA via a serial interface (JTAG) or to an external memory device such as an EEPROM. The most common HDLs are VHDL and Verilog. National Instruments' LabVIEW graphical programming language (sometimes referred to as G) has an FPGA add-in module available to target and program FPGA hardware. Verilog was created to simplify the process, making HDL more robust and flexible. Verilog has a C-like syntax, unlike VHDL. To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license). Such designs are known as open-source hardware. In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially, the RTL description in VHDL or Verilog is simulated by creating test benches that exercise the system and allow the results to be observed. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally, the design is laid out in the FPGA, at which point propagation delay values can be back-annotated onto the netlist, and the simulation can be run again with these values. More recently, OpenCL (Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in the C programming language. For further information, see high-level synthesis and C to HDL. Most FPGAs rely on an SRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices. For example, flash memory or EEPROM devices may load contents into internal SRAM that controls routing and logic. The SRAM approach is based on CMOS. Rarer alternatives to the SRAM approach include: Fuse: one-time programmable. Bipolar. Obsolete. Antifuse: one-time programmable. CMOS. Examples: Actel SX and Axcelerator families; Quicklogic Eclipse II family. PROM: programmable read-only memory technology. One-time programmable because of plastic packaging. Obsolete. EPROM: erasable programmable read-only memory technology. One-time programmable but with a window; can be erased with ultraviolet (UV) light. CMOS. Obsolete. EEPROM: electrically erasable programmable read-only memory technology. Can be erased, even in plastic packages. Some but not all EEPROM devices can be in-system programmed. CMOS. Flash: flash-erase EPROM technology. Can be erased, even in plastic packages. 
Some but not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an equivalent EEPROM cell and is, therefore, less expensive to manufacture. CMOS. Example: Actel ProASIC family. Manufacturers In 2016, long-time industry rivals Xilinx (now part of AMD) and Altera (now part of Intel) were the FPGA market leaders. At that time, they controlled nearly 90 percent of the market. Both Xilinx (now AMD) and Altera (now Intel) provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus), which enables engineers to design, analyze, simulate, and synthesize (compile) their designs. In March 2010, Tabula announced their FPGA technology, which uses time-multiplexed logic and interconnect and claims potential cost savings for high-density applications. On March 24, 2015, Tabula officially shut down. On June 1, 2015, Intel announced it would acquire Altera for approximately US$16.7 billion and completed the acquisition on December 30, 2015. On October 27, 2020, AMD announced it would acquire Xilinx and completed the acquisition, valued at about US$50 billion, in February 2022. In February 2024, Altera became independent of Intel again. Other manufacturers include: Achronix, manufacturing SRAM-based FPGAs with 1.5 GHz fabric speed Altium, providing a system-on-FPGA hardware-software design environment Cologne Chip, a German government-backed designer and producer of FPGAs Efinix, offering small to medium-sized FPGAs that combine logic and routing interconnects into a configurable XLR cell GOWIN Semiconductors, manufacturing small and medium-sized SRAM- and flash-based FPGAs. They also offer pin-compatible replacements for a few Xilinx, Altera and Lattice products. Lattice Semiconductor manufactures low-power SRAM-based FPGAs featuring integrated configuration flash, instant-on, and live reconfiguration SiliconBlue Technologies provides extremely low-power SRAM-based FPGAs with optional integrated nonvolatile configuration memory; acquired by Lattice in 2011 Microchip: Microsemi (previously Actel), producing antifuse, flash-based, mixed-signal FPGAs; acquired by Microchip in 2018 Atmel, a second source of some Altera-compatible devices; also FPSLIC mentioned above; acquired by Microchip in 2016 QuickLogic manufactures ultra-low-power sensor hubs and extremely low-powered, low-density SRAM-based FPGAs, with display bridges, MIPI and RGB inputs, and MIPI, RGB, and LVDS outputs. Applications An FPGA can be used to solve any problem which is computable. FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. Their advantage lies in the fact that they are significantly faster for some applications because of their parallel nature and optimality in terms of the number of gates used for certain processes. FPGAs were originally introduced as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications that had traditionally been the sole reserve of digital signal processors (DSPs) began to use FPGAs instead. 
The evolution of FPGAs has motivated an increase in the use of these devices, whose architecture allows the development of hardware solutions optimized for complex tasks, such as 3D MRI image segmentation, 3D discrete wavelet transform, tomographic image reconstruction, or PET/MRI systems. The developed solutions can perform intensive computation tasks with parallel processing, are dynamically reprogrammable, and have a low cost, all while meeting the hard real-time requirements associated with medical imaging. Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a general-purpose processor. The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014. FPGAs are also seeing increased use as AI accelerators, including in Microsoft's Project Catapult, and for accelerating artificial neural networks for machine learning applications. Originally, FPGAs were reserved for specific vertical applications where the volume of production was small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. Often a custom-made chip would be cheaper if made in larger quantities, but FPGAs may be chosen to quickly bring a product to market. By 2017, new cost and performance dynamics broadened the range of viable applications. Other uses for FPGAs include: Space (with radiation hardening) Hardware security modules High-speed financial transactions Retrocomputing (e.g. the MARS and MiSTer FPGA projects) Usage by United States military FPGAs play a crucial role in modern military communications, especially in systems like the Joint Tactical Radio System (JTRS) and in devices from companies such as Thales and Harris Corporation. Their flexibility and programmability make them ideal for military communications, offering customizable and secure signal processing. In the JTRS, used by the US military, FPGAs provide adaptability and real-time processing, crucial for meeting various communication standards and encryption methods. Thales uses FPGA technology in designing communication devices that fulfill the rigorous demands of military use, including rapid reconfiguration and robust security. Similarly, Harris Corporation, now part of L3Harris Technologies, incorporates FPGAs in its defense and commercial communication solutions, enhancing signal processing and system security. L3Harris Rapidly adaptable standards-compliant radio (RASOR): A modular open system approach (MOSA) solution supporting over 50 data links and waveforms. ASPEN technology platform: Consists of proven hardware modules with programmable software and FPGA options for advanced, configurable data links. AN/PRC-117F(C) radios: Supported the U.S. Air Force Electronic Systems Command, strengthening Harris' role as a full-spectrum communications system supplier. Thales SYNAPS radio family: Utilizes software-defined radio (SDR) technology, typically involving FPGAs for enhanced flexibility and performance. AN/PRC-148 (multiband inter/intra team radio - MBITR): A small-form-factor, multiband, multi-mode SDR used in Afghanistan and Iraq. JTRS Cluster 2 handheld radio: Currently in development, recently completed a successful early operational assessment. 
Security FPGAs have both advantages and disadvantages compared to ASICs or secure microprocessors where hardware security is concerned. FPGAs' flexibility makes malicious modifications during fabrication a lower risk. Previously, for many FPGAs, the design bitstream was exposed while the FPGA loaded it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers, such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an external flash memory. Physical unclonable functions (PUFs) are integrated circuits that have their own unique signatures, due to processing, and can also be used to secure FPGAs while taking up very little hardware space. FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications. Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi. With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physical unclonable functions to provide high levels of protection against physical attacks. In 2012, researchers Sergei Skorobogatov and Christopher Woods demonstrated that some FPGAs can be vulnerable to hostile intent. They discovered that a critical backdoor vulnerability had been manufactured in silicon as part of the Actel/Microsemi ProAsic 3, making it vulnerable on many levels, such as reprogramming crypto and access keys, accessing the unencrypted bitstream, modifying low-level silicon features, and extracting configuration data. In 2020, a critical vulnerability (named "Starbleed") was discovered in all Xilinx 7-series FPGAs that rendered bitstream encryption useless. There is no workaround, and Xilinx did not produce a hardware revision. UltraScale and later devices, already on the market at the time, were not affected. Similar technologies Historically, FPGAs have been slower, less energy-efficient, and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations. Advantages of FPGAs include the ability to re-program when already deployed (i.e. "in the field") to fix bugs, as well as often shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing prototype hardware on FPGAs, but manufacturing the final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs. Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running. The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural. A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. 
As a result, CPLDs are less flexible but have the advantage of more predictable timing delays. FPGA architectures, on the other hand, are dominated by interconnect. This makes FPGAs far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size, as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializer/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration, while FPGAs usually (but not always) require external non-volatile memory. When a design requires simple instant-on (logic is already configured at power-up), CPLDs are generally preferred. For most other applications, FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally perform glue logic functions and are responsible for "booting" the FPGA as well as controlling the reset and boot sequence of the complete circuit board. Therefore, depending on the application, it may be judicious to use both FPGAs and CPLDs in a single design. See also FPGA Mezzanine Card CRUVI FPGA daughtercard standard List of HDL simulators References Further reading Mencer, Oskar et al. (2020). "The history, status, and future of FPGAs". Communications of the ACM. ACM. Vol. 63, No. 10. doi:10.1145/3410669 External links Migrating from MCU to FPGA Integrated circuits Semiconductor devices American inventions Hardware acceleration
Field-programmable gate array
[ "Technology", "Engineering" ]
6,345
[ "Hardware acceleration", "Computer engineering", "Gate arrays", "Computer systems", "Integrated circuits" ]
10,971
https://en.wikipedia.org/wiki/Free-running%20sleep
Free-running sleep is a rare sleep pattern whereby the sleep schedule of a person shifts later every day. It occurs as the sleep disorder non-24-hour sleep–wake disorder or artificially as part of experiments used in the study of circadian and other rhythms in biology. Study subjects are shielded from all time cues, often by a constant light protocol, by a constant dark protocol, or by the use of light/dark conditions to which the organism cannot entrain, such as the ultrashort protocol of one hour dark and two hours light. Also, limited amounts of food may be made available at short intervals so as to avoid entrainment to mealtimes. Subjects are thus forced to live by their internal circadian "clocks". Background The individual's or animal's circadian phase can be known only by the monitoring of some kind of output of the circadian system, the internal "body clock". The researcher can precisely determine, for example, the daily cycles of gene activity, body temperature, blood pressure, hormone secretion and/or sleep and activity/alertness. Alertness in humans can be determined by many kinds of verbal and non-verbal tests, whereas alertness in animals can usually be assessed by observing physical activity (for example, wheel-running in rodents). When animals or people free-run, experiments can be done to see what sort of signals, known as zeitgebers, are effective in entrainment. Also, much work has been done to determine how long or short a circadian cycle various organisms can be entrained to. For example, some animals can be entrained to a 22-hour day, but they cannot be entrained to a 20-hour day. In recent studies funded by the U.S. space industry, it has been shown that most humans can be entrained to a 23.5-hour day and to a 24.65-hour day. The effect of unintended time cues is called masking and can totally confound experimental results. Examples of masking are morning rush traffic audible to the subjects, or researchers or maintenance staff visiting subjects on a regular schedule. In humans Non-24-hour sleep–wake disorder, also referred to as free-running disorder (FRD) or Non-24, is one of the circadian rhythm sleep disorders in humans. It affects more than half of people who are totally blind and a smaller number of sighted individuals. Among blind people, the cause is the inability to register, and therefore to entrain to, light cues. The many blind people who do entrain to the 24-hour light/dark cycle have eyes with functioning retinas including operative non-visual light-sensitive cells, ipRGCs. These ganglion cells, which contain melanopsin, convey their signals to the "circadian clock" via the retinohypothalamic tract (branching off from the optic nerve), which links the retina to the circadian pacemaker in the hypothalamus. Among sighted individuals, Non-24 usually first appears in the teens or early twenties. As with delayed sleep phase disorder (DSPS or DSPD), in the absence of neurological damage due to trauma or stroke, cases almost never appear after the age of 30. Non-24 affects more sighted males than sighted females. A quarter of sighted individuals with Non-24 also have an associated psychiatric condition, and a quarter of them have previously shown symptoms of DSPS. See also Circadian rhythm Circadian rhythm sleep disorder References External links A collection of articles about sleep by Piotr A. Wozniak, July 2000 Sleep Circadian rhythm
Free-running sleep
[ "Biology" ]
745
[ "Behavior", "Sleep", "Circadian rhythm" ]
10,975
https://en.wikipedia.org/wiki/Fatty%20acid
In chemistry, particularly in biochemistry, a fatty acid is a carboxylic acid with an aliphatic chain, which is either saturated or unsaturated. Most naturally occurring fatty acids have an unbranched chain of an even number of carbon atoms, from 4 to 28. Fatty acids are a major component of the lipids (up to 70% by weight) in some species such as microalgae, but in some other organisms they are not found in standalone form, instead existing as three main classes of esters: triglycerides, phospholipids, and cholesteryl esters. In any of these forms, fatty acids are both important dietary sources of fuel for animals and important structural components for cells. History The concept of fatty acid (acide gras) was introduced in 1813 by Michel Eugène Chevreul, though he initially used some variant terms: graisse acide and acide huileux ("acid fat" and "oily acid"). Types of fatty acids Fatty acids are classified in many ways: by length, by saturation vs unsaturation, by even vs odd carbon content, and by linear vs branched. Length of fatty acids Short-chain fatty acids (SCFAs) are fatty acids with aliphatic tails of five or fewer carbons (e.g. butyric acid). Medium-chain fatty acids (MCFAs) are fatty acids with aliphatic tails of 6 to 12 carbons, which can form medium-chain triglycerides. Long-chain fatty acids (LCFAs) are fatty acids with aliphatic tails of 13 to 21 carbons. Very long chain fatty acids (VLCFAs) are fatty acids with aliphatic tails of 22 or more carbons. Saturated fatty acids Saturated fatty acids have no C=C double bonds. They have the formula CH3(CH2)nCOOH, where n is some positive integer. An important saturated fatty acid is stearic acid (n = 16); when neutralized with sodium hydroxide, it forms sodium stearate, the most common form of soap. Unsaturated fatty acids Unsaturated fatty acids have one or more C=C double bonds. The C=C double bonds can give either cis or trans isomers. cis A cis configuration means that the two hydrogen atoms adjacent to the double bond stick out on the same side of the chain. The rigidity of the double bond freezes its conformation and, in the case of the cis isomer, causes the chain to bend and restricts the conformational freedom of the fatty acid. The more double bonds the chain has in the cis configuration, the less flexibility it has. When a chain has many cis bonds, it becomes quite curved in its most accessible conformations. For example, oleic acid, with one double bond, has a "kink" in it, whereas linoleic acid, with two double bonds, has a more pronounced bend. α-Linolenic acid, with three double bonds, favors a hooked shape. The effect of this is that, in restricted environments, such as when fatty acids are part of a phospholipid in a lipid bilayer or triglycerides in lipid droplets, cis bonds limit the ability of fatty acids to be closely packed, and therefore can affect the melting temperature of the membrane or of the fat. Cis unsaturated fatty acids, however, increase cellular membrane fluidity, whereas trans unsaturated fatty acids do not. trans A trans configuration, by contrast, means that the adjacent two hydrogen atoms lie on opposite sides of the chain. As a result, they do not cause the chain to bend much, and their shape is similar to straight saturated fatty acids. In most naturally occurring unsaturated fatty acids, each double bond has three (n−3), six (n−6), or nine (n−9) carbon atoms after it, and all double bonds have a cis configuration. 
Most fatty acids in the trans configuration (trans fats) are not found in nature and are the result of human processing (e.g., hydrogenation). Some trans fatty acids also occur naturally in the milk and meat of ruminants (such as cattle and sheep). They are produced by fermentation in the rumen of these animals. They are also found in dairy products from the milk of ruminants, and may also be found in the breast milk of women who obtained them from their diet. The geometric differences between the various types of unsaturated fatty acids, as well as between saturated and unsaturated fatty acids, play an important role in biological processes, and in the construction of biological structures (such as cell membranes). Even- vs odd-chained fatty acids Most fatty acids are even-chained, e.g. stearic (C18) and oleic (C18), meaning they are composed of an even number of carbon atoms. Some fatty acids have odd numbers of carbon atoms; they are referred to as odd-chained fatty acids (OCFA). The most common OCFA are the saturated C15 and C17 derivatives, pentadecanoic acid and heptadecanoic acid respectively, which are found in dairy products. On a molecular level, OCFAs are biosynthesized and metabolized slightly differently from the even-chained relatives. Branching Most common fatty acids are straight-chain compounds, with no additional carbon atoms bonded as side groups to the main hydrocarbon chain. Branched-chain fatty acids contain one or more methyl groups bonded to the hydrocarbon chain. Nomenclature Carbon atom numbering Most naturally occurring fatty acids have an unbranched chain of carbon atoms, with a carboxyl group (–COOH) at one end, and a methyl group (–CH3) at the other end. The position of each carbon atom in the backbone of a fatty acid is usually indicated by counting from 1 at the −COOH end. Carbon number x is often abbreviated C-x (or sometimes Cx), with x = 1, 2, 3, etc. This is the numbering scheme recommended by the IUPAC. Another convention uses letters of the Greek alphabet in sequence, starting with the first carbon after the carboxyl group. Thus carbon α (alpha) is C-2, carbon β (beta) is C-3, and so forth. Although fatty acids can be of diverse lengths, in this second convention the last carbon in the chain is always labelled as ω (omega), which is the last letter in the Greek alphabet. A third numbering convention counts the carbons from that end, using the labels "ω", "ω−1", "ω−2". Alternatively, the label "ω−x" is written "n−x", where the "n" is meant to represent the number of carbons in the chain. In either numbering scheme, the position of a double bond in a fatty acid chain is always specified by giving the label of the carbon closest to the carboxyl end. Thus, in an 18-carbon fatty acid, a double bond between C-12 (or ω−6) and C-13 (or ω−5) is said to be "at" position C-12 or ω−6. The IUPAC naming of the acid, such as "octadec-12-enoic acid" (or the more pronounceable variant "12-octadecenoic acid"), is always based on the "C" numbering. The notation Δx,y,... is traditionally used to specify a fatty acid with double bonds at positions x,y,.... (The capital Greek letter "Δ" (delta) corresponds to Roman "D", for Double bond). Thus, for example, the 20-carbon arachidonic acid is Δ5,8,11,14, meaning that it has double bonds between carbons 5 and 6, 8 and 9, 11 and 12, and 14 and 15. 
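The arithmetic relating the two numbering conventions is simple: in an n-carbon chain, carbon C-x carries the label ω−(n−x), so a double bond at Δx lies at ω−(n−x). A minimal sketch of this conversion follows; the function names are invented for illustration, and the "omega class" computed here is the dietary classification discussed next, set by the double bond closest to the methyl end.

```python
def omega_label(chain_length, delta_position):
    """Convert a Delta (carboxyl-end) carbon number to its omega (methyl-end)
    label: carbon C-x is omega-(n - x) in an n-carbon chain."""
    return chain_length - delta_position

def omega_class(chain_length, delta_positions):
    """The dietary omega class is set by the double bond closest to the
    methyl (omega) end, i.e. the smallest omega label."""
    return min(omega_label(chain_length, x) for x in delta_positions)

# Linoleic acid: 18 carbons, double bonds at Delta-9 and Delta-12 -> omega-6.
assert omega_class(18, [9, 12]) == 6
# Arachidonic acid: 20 carbons, Delta-5,8,11,14 -> also omega-6.
assert omega_class(20, [5, 8, 11, 14]) == 6
```

Run on the examples above, linoleic and arachidonic acids both come out as ω−6, matching the classification described below.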
In the context of human diet and fat metabolism, unsaturated fatty acids are often classified by the position of the double bond closest to the ω carbon (only), even in the case of multiple double bonds such as the essential fatty acids. Thus linoleic acid (18 carbons, Δ9,12), γ-linolenic acid (18-carbon, Δ6,9,12), and arachidonic acid (20-carbon, Δ5,8,11,14) are all classified as "ω−6" fatty acids, meaning that their formula ends with –CH=CH–CH2–CH2–CH2–CH2–CH3. Fatty acids with an odd number of carbon atoms are called odd-chain fatty acids, whereas the rest are even-chain fatty acids. The difference is relevant to gluconeogenesis. Naming of fatty acids The following table describes the most common systems of naming fatty acids. Free fatty acids When circulating in the plasma (plasma fatty acids), not in their ester form, fatty acids are known as non-esterified fatty acids (NEFAs) or free fatty acids (FFAs). FFAs are always bound to a transport protein, such as albumin. FFAs also form from triglyceride food oils and fats by hydrolysis, contributing to the characteristic rancid odor. An analogous process happens in biodiesel, with a risk of corroding parts. Production Industrial Fatty acids are usually produced industrially by the hydrolysis of triglycerides, with the removal of glycerol (see oleochemicals). Phospholipids represent another source. Some fatty acids are produced synthetically by hydrocarboxylation of alkenes. By animals In animals, fatty acids are formed from carbohydrates predominantly in the liver, adipose tissue, and the mammary glands during lactation. Carbohydrates are converted into pyruvate by glycolysis as the first important step in the conversion of carbohydrates into fatty acids. Pyruvate is then decarboxylated to form acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate. The cytosolic acetyl-CoA is carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids. Malonyl-CoA is then involved in a repeating series of reactions that lengthens the growing fatty acid chain by two carbons at a time. Almost all natural fatty acids, therefore, have even numbers of carbon atoms. When synthesis is complete, the free fatty acids are nearly always combined with glycerol (three fatty acids to one glycerol molecule) to form triglycerides, the main storage form of fatty acids, and thus of energy in animals. However, fatty acids are also important components of the phospholipids that form the phospholipid bilayers out of which all the membranes of the cell are constructed (the plasma membrane, and the membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus). The "uncombined fatty acids" or "free fatty acids" found in the circulation of animals come from the breakdown (or lipolysis) of stored triglycerides. Because they are insoluble in water, these fatty acids are transported bound to plasma albumin. The levels of "free fatty acids" in the blood are limited by the availability of albumin binding sites. 
They can be taken up from the blood by all cells that have mitochondria (with the exception of the cells of the central nervous system). Fatty acids can only be broken down in mitochondria, by means of beta-oxidation followed by further combustion in the citric acid cycle to CO2 and water. Cells in the central nervous system, although they possess mitochondria, cannot take free fatty acids up from the blood, as the blood–brain barrier is impervious to most free fatty acids, excluding short-chain fatty acids and medium-chain fatty acids. These cells have to manufacture their own fatty acids from carbohydrates, as described above, in order to produce and maintain the phospholipids of their cell membranes, and those of their organelles. Variation between animal species Studies on the cell membranes of mammals and reptiles discovered that mammalian cell membranes are composed of a higher proportion of polyunsaturated fatty acids (such as DHA, an omega−3 fatty acid) than reptilian cell membranes. Studies on bird fatty acid composition have noted similar proportions to mammals, but with about one-third fewer omega−3 fatty acids relative to omega−6 for a given body size. This fatty acid composition results in a more fluid cell membrane, but also one that is more permeable to various ions (such as Na+ and K+), resulting in cell membranes that are more costly to maintain. This maintenance cost has been argued to be one of the key causes for the high metabolic rates and concomitant warm-bloodedness of mammals and birds. However, polyunsaturation of cell membranes may also occur in response to chronic cold temperatures. In fish, increasingly cold environments lead to increasingly high cell membrane content of both monounsaturated and polyunsaturated fatty acids, to maintain greater membrane fluidity (and functionality) at the lower temperatures. Fatty acids in dietary fats The following table gives the fatty acid, vitamin E and cholesterol composition of some common dietary fats. Reactions of fatty acids Fatty acids exhibit reactions like other carboxylic acids, i.e. they undergo esterification and acid-base reactions. Transesterification All fatty acids transesterify. Typically, transesterification is practiced in the conversion of fats to fatty acid methyl esters. These esters are used for biodiesel. They are also hydrogenated to give fatty alcohols. Even vinyl esters can be made by transesterification using vinyl acetate. Acid-base reactions Fatty acids do not show a great variation in their acidities, as indicated by their respective pKa values. Nonanoic acid, for example, has a pKa of 4.96, being only slightly weaker than acetic acid (4.76). As the chain length increases, the solubility of the fatty acids in water decreases, so that the longer-chain fatty acids have minimal effect on the pH of an aqueous solution. Near neutral pH, fatty acids exist as their conjugate bases, i.e. oleate, etc. Solutions of fatty acids in ethanol can be titrated with sodium hydroxide solution using phenolphthalein as an indicator. This analysis is used to determine the free fatty acid content of fats; i.e., the proportion of the triglycerides that have been hydrolyzed. Neutralization of fatty acids, a form of saponification, is a widely practiced route to metallic soaps. Hydrogenation and hardening Hydrogenation of unsaturated fatty acids is widely practiced. Typical conditions involve 2.0–3.0 MPa of H2 pressure, 150 °C, and nickel supported on silica as a catalyst. This treatment affords saturated fatty acids. The extent of hydrogenation is indicated by the iodine number.
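The iodine number just mentioned can be estimated from composition: each C=C double bond adds one equivalent of I2 (molar mass about 253.81 g/mol), and the result is conventionally expressed as grams of iodine absorbed per 100 g of fat. A rough sketch under that textbook stoichiometry (using molecular weights of the pure acids, which is a simplification):

```python
I2_MOLAR_MASS = 253.81  # g/mol

def iodine_value(molar_mass: float, double_bonds: int) -> float:
    """Approximate iodine value: grams of I2 absorbed per 100 g of sample,
    assuming one mole of I2 adds across each C=C double bond."""
    return 100.0 * double_bonds * I2_MOLAR_MASS / molar_mass

# Pure oleic acid (C18H34O2, ~282.5 g/mol, one double bond):
print(round(iodine_value(282.5, 1)))   # ~90
# Pure linoleic acid (~280.4 g/mol, two double bonds):
print(round(iodine_value(280.4, 2)))   # ~181
# Fully hydrogenated (stearic) acid has no double bonds, so its value is 0:
print(iodine_value(284.5, 0))          # 0.0
```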
Hydrogenated fatty acids are less prone toward rancidification. Since the saturated fatty acids are higher melting than the unsaturated precursors, the process is called hardening. Related technology is used to convert vegetable oils into margarine. The hydrogenation of triglycerides (vs fatty acids) is advantageous because the carboxylic acids degrade the nickel catalysts, affording nickel soaps. During partial hydrogenation, unsaturated fatty acids can be isomerized from the cis to the trans configuration. More forcing hydrogenation, i.e. using higher pressures of H2 and higher temperatures, converts fatty acids into fatty alcohols. Fatty alcohols are, however, more easily produced from simpler fatty acid esters, like the fatty acid methyl esters ("FAME"s). Chemistry of saturated vs unsaturated acids The reactivity of saturated fatty acids is usually associated with the carboxylic acid group or the adjacent methylene group. By conversion to their acid chlorides, they can be converted to symmetrical fatty ketones such as laurone ((C11H23)2CO). Treatment with sulfur trioxide gives the α-sulfonic acids. The reactivity of unsaturated fatty acids is often dominated by the site of unsaturation. These reactions are the basis of ozonolysis, hydrogenation, and the iodine number. Ozonolysis (degradation by ozone) is practiced in the production of azelaic acid (HO2C(CH2)7CO2H) from oleic acid. Circulation Digestion and intake Short- and medium-chain fatty acids are absorbed directly into the blood via intestinal capillaries and travel through the portal vein just as other absorbed nutrients do. However, long-chain fatty acids are not directly released into the intestinal capillaries. Instead they are absorbed into the fatty walls of the intestinal villi and reassembled into triglycerides. The triglycerides are coated with cholesterol and protein (protein coat) into a compound called a chylomicron. From within the cell, the chylomicron is released into a lymphatic capillary called a lacteal, which merges into larger lymphatic vessels. It is transported via the lymphatic system and the thoracic duct up to a location near the heart (where the arteries and veins are larger). The thoracic duct empties the chylomicrons into the bloodstream via the left subclavian vein. At this point the chylomicrons can transport the triglycerides to tissues where they are stored or metabolized for energy. Metabolism Fatty acids are broken down to CO2 and water by the intracellular mitochondria through beta oxidation and the citric acid cycle. In the final step (oxidative phosphorylation), reactions with oxygen release a lot of energy, captured in the form of large quantities of ATP. Many cell types can use either glucose or fatty acids for this purpose, but fatty acids release more energy per gram. Fatty acids (provided either by ingestion or by drawing on triglycerides stored in fatty tissues) are distributed to cells to serve as a fuel for muscular contraction and general metabolism. Essential fatty acids Fatty acids that are required for good health but cannot be made in sufficient quantity from other substrates, and therefore must be obtained from food, are called essential fatty acids. There are two series of essential fatty acids: one has a double bond three carbon atoms away from the methyl end; the other has a double bond six carbon atoms away from the methyl end. Humans lack the ability to introduce double bonds in fatty acids beyond carbons 9 and 10, as counted from the carboxylic acid side.
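The carbon-9/10 rule in the last sentence gives a quick way to flag double bonds that cannot be introduced de novo by human enzymes. A toy check of that rule only (it ignores elongation and desaturation of dietary precursors, and is purely an illustration of the stated constraint):

```python
def diet_required_bonds(delta_positions: list[int]) -> list[int]:
    """Return the double-bond positions (delta numbering) that human
    desaturases cannot create, per the carbon-9/10 rule stated above."""
    return [d for d in delta_positions if d > 9]

# Linoleic acid (Delta-9,12): the Delta-12 bond cannot be made de novo,
# which is why linoleic acid is an essential fatty acid.
print(diet_required_bonds([9, 12]))   # [12]
# Oleic acid (Delta-9) can be synthesized endogenously:
print(diet_required_bonds([9]))       # []
```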
Two essential fatty acids are linoleic acid (LA) and alpha-linolenic acid (ALA). These fatty acids are widely distributed in plant oils. The human body has a limited ability to convert ALA into the longer-chain omega−3 fatty acids — eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) — which can also be obtained from fish. Omega−3 and omega−6 fatty acids are biosynthetic precursors to endocannabinoids with antinociceptive, anxiolytic, and neurogenic properties. Distribution Blood fatty acids adopt distinct forms in different stages in the blood circulation. They are taken in through the intestine in chylomicrons, but also exist in very low density lipoproteins (VLDL) and low density lipoproteins (LDL) after processing in the liver. In addition, when released from adipocytes, fatty acids exist in the blood as free fatty acids. It is proposed that the blend of fatty acids exuded by mammalian skin, together with lactic acid and pyruvic acid, is distinctive and enables animals with a keen sense of smell to differentiate individuals. Skin The stratum corneum, the outermost layer of the epidermis, is composed of terminally differentiated and enucleated corneocytes within a lipid matrix. Together with cholesterol and ceramides, free fatty acids form a water-impermeable barrier that prevents evaporative water loss. Generally, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (about 50% by weight), cholesterol (25%), and free fatty acids (15%). Saturated fatty acids 16 and 18 carbons in length are the dominant types in the epidermis, while unsaturated fatty acids and saturated fatty acids of various other lengths are also present. The relative abundance of the different fatty acids in the epidermis depends on the body site that the skin covers. There are also characteristic epidermal fatty acid alterations that occur in psoriasis, atopic dermatitis, and other inflammatory conditions. Analysis The chemical analysis of fatty acids in lipids typically begins with an interesterification step that breaks down their original esters (triglycerides, waxes, phospholipids etc.) and converts them to methyl esters, which are then separated by gas chromatography or analyzed by gas chromatography and mid-infrared spectroscopy. Separation of unsaturated isomers is possible by silver ion complexed (argentation) thin-layer chromatography. Other separation techniques include high-performance liquid chromatography (with short columns packed with silica gel with bonded phenylsulfonic acid groups whose hydrogen atoms have been exchanged for silver ions). The role of silver lies in its ability to form complexes with unsaturated compounds. Industrial uses Fatty acids are mainly used in the production of soap, both for cosmetic purposes and, in the case of metallic soaps, as lubricants. Fatty acids are also converted, via their methyl esters, to fatty alcohols and fatty amines, which are precursors to surfactants, detergents, and lubricants. Other applications include their use as emulsifiers, texturizing agents, wetting agents, anti-foam agents, or stabilizing agents. Esters of fatty acids with simpler alcohols (such as methyl-, ethyl-, n-propyl-, isopropyl- and butyl esters) are used as emollients in cosmetics and other personal care products and as synthetic lubricants.
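The "equimolar" description can be sanity-checked against the stated weight percentages using rough average molecular weights. The values below (about 700 g/mol for a skin ceramide, 386.7 g/mol for cholesterol, about 270 g/mol for a C16–C18 fatty acid) are illustrative assumptions, not figures from the text:

```python
# (weight percent from the text, assumed average molecular weight in g/mol)
lipids = {
    "ceramides":        (50, 700.0),
    "cholesterol":      (25, 386.7),
    "free fatty acids": (15, 270.0),
}
moles = {name: wt / mw for name, (wt, mw) in lipids.items()}
base = moles["cholesterol"]
for name, n in moles.items():
    print(f"{name}: {n / base:.2f}")
# The three ratios all come out within ~15% of 1, consistent with the
# roughly equimolar mixture described in the text.
```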
Esters of fatty acids with more complex alcohols, such as sorbitol, ethylene glycol, diethylene glycol, and polyethylene glycol are consumed in food, or used for personal care and water treatment, or used as synthetic lubricants or fluids for metal working. See also Fatty acid synthase Fatty acid synthesis Fatty aldehyde List of saturated fatty acids List of unsaturated fatty acids List of carboxylic acids Vegetable oil Lactobacillic acid References External links Lipid Library Prostaglandins, Leukotrienes & Essential Fatty Acids journal Fatty blood acids Commodity chemicals E-number additives Edible oil chemistry
Fatty acid
[ "Chemistry" ]
5,002
[ "Edible oil chemistry", "Commodity chemicals", "Food chemistry", "Products of chemical industry" ]
10,983
https://en.wikipedia.org/wiki/First-order%20logic
First-order logic—also called predicate logic, predicate calculus, or quantificational logic—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all men are mortal", in first-order logic one can have expressions in the form "for all x, if x is a man, then x is mortal"; where "for all x" is a quantifier, x is a variable, and "... is a man" and "... is mortal" are predicates. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic. A theory about a topic, such as set theory, a theory for groups, or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic. The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, is permitted. In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound, i.e. all provable statements are true in all models, and complete, i.e. all statements which are true in all models are provable. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem. First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures, i.e. categorical axiom systems, can be obtained in stronger logics such as second-order logic. The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001). Introduction While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q.
They are not viewed as an application of a predicate, such as "is a philosopher", to any particular objects in the domain of discourse; instead, they are viewed as purely an utterance which is either true or false. However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form "x is a philosopher" for some individual x: in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic. The truth of a formula such as "x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment. Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if x is a philosopher, then x is a scholar" is a conditional statement with "x is a philosopher" as its hypothesis, and "x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value. Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x. The negation of the sentence "For every x, if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x. The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables. An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, consider the sentence "There exists x such that x is a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and the predicate "is a philosopher" is understood as "was the author of the Republic." It is true, as witnessed by Plato in that text. There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions. Syntax Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed. There are two key types of well-formed expressions: terms, which intuitively represent objects, and formulas, which intuitively express statements that can be true or false.
The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language. Alphabet As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols. It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨. However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate depending on the interpretation at hand. Logical symbols Logical symbols are a set of characters that vary by author, but usually include the following: Quantifier symbols: ∀ for universal quantification, and ∃ for existential quantification Logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Some authors use Cpq instead of → and Epq instead of ↔, especially in contexts where → is used for other purposes. Moreover, the horseshoe ⊃ may replace →; the triple-bar ≡ may replace ↔; a tilde (~), Np, or Fp may replace ¬; a double bar ∥ or Apq may replace ∨; and an ampersand &, Kpq, or the middle dot ⋅ may replace ∧, especially if these symbols are not available for technical reasons. (The aforementioned symbols Cpq, Epq, Np, Apq, and Kpq are used in Polish notation.) Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context. An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... . Subscripts are often used to distinguish variables: x0, x1, x2, ... . An equality symbol (sometimes, identity symbol) = (see below). Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices. Other logical symbols include the following: Truth constants: T, V, or ⊤ for "true" and F, O, or ⊥ for "false" (V and O are from Polish notation). Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers. Additional logical connectives such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq. Non-logical symbols Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes: For every integer n ≥ 0, there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n, there is an infinite supply of them: Pn0, Pn1, Pn2, Pn3, ... For every integer n ≥ 0, there are infinitely many n-ary function symbols: f n0, f n1, f n2, f n3, ... When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted. In this traditional approach, there is only one language of first-order logic. This approach is still common, especially in philosophically oriented books. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.
Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem. Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics. In this approach, every non-logical symbol is of one of the following types: A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters such as P, Q and R. Examples: In P(x), P is a predicate symbol of valence 1. One possible interpretation is "x is a man". In Q(x,y), Q is a predicate symbol of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y". Relations of valence 0 can be identified with propositional variables, which can stand for any statement. One possible interpretation of R is "Socrates is a man". A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase roman letters such as f, g and h. Examples: f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "-x". In set theory, it may stand for "the power set of x". In arithmetic, g(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y". Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet such as a, b and c. The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, it may stand for the empty set. The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols. Formation rules The formation rules define the terms and formulas of first-order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms. Terms The set of terms is inductively defined by the following rules: Variables. Any variable symbol is a term. Functions. If f is an n-ary function symbol, and t1, ..., tn are terms, then f(t1,...,tn) is a term. In particular, symbols denoting individual constants are nullary function symbols, and thus are terms. Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term. Formulas The set of formulas (also called well-formed formulas or WFFs) is inductively defined by the following rules: Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms then P(t1,...,tn) is a formula. Equality. If the equality symbol is considered part of logic, and t1 and t2 are terms, then t1 = t2 is a formula. Negation. If is a formula, then is a formula. Binary connectives. If and are formulas, then () is a formula. Similar rules apply to other binary logical connectives. Quantifiers. 
If φ is a formula and x is a variable, then ∀x φ (for all x, φ holds) and ∃x φ (there exists x such that φ) are formulas. Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas. For example, ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))) is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀x x → is not a formula, although it is a string of symbols from the alphabet. The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability. Notational conventions For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is: ¬ is evaluated first ∧ and ∨ are evaluated next Quantifiers are evaluated next → is evaluated last. Moreover, extra punctuation not required by the definition may be inserted—to make formulas easier to read. Thus a fully parenthesized formula such as (∀x (∀y ((P(f(x)) → (¬(P(x) → Q(f(y), x, z))))))) might be written more simply as ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))). In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation, cf. also term structure vs. representation. The definitions above use infix notation for binary connectives such as →. A less common convention is Polish notation, in which one writes →, ∧, and so on in front of their arguments rather than between them. This convention is advantageous in that it allows all punctuation symbols to be discarded. As such, Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read. In Polish notation, the formula ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))) becomes ΠxΠyCPfxNCPxQfyxz. Free and bound variables In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears. Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∀x or ∃x. Finally, x is bound in φ if all occurrences of x in φ are bound. Intuitively, a variable symbol is free in a formula if at no point is it quantified: in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows. Atomic formulas If φ is an atomic formula, then x occurs free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula.
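The inductive formation rules translate directly into a recursive data type; representing formulas as trees rather than strings also makes unique readability automatic, since each object has exactly one parse. A minimal Python sketch (the class names are illustrative, and only one binary connective is included):

```python
from dataclasses import dataclass

# --- Terms: variables and function applications (constants are 0-ary) ---
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Func:
    name: str
    args: tuple  # tuple of terms

# --- Formulas, mirroring formation rules 1-5 ---
@dataclass(frozen=True)
class Pred:          # rule 1: P(t1, ..., tn)
    name: str
    args: tuple

@dataclass(frozen=True)
class Not:           # rule 3
    sub: object

@dataclass(frozen=True)
class Implies:       # rule 4 (one representative binary connective)
    left: object
    right: object

@dataclass(frozen=True)
class ForAll:        # rule 5
    var: str
    body: object

@dataclass(frozen=True)
class Exists:        # rule 5
    var: str
    body: object

# "for all x, if x is a philosopher, then x is a scholar":
phi = ForAll("x", Implies(Pred("Phil", (Var("x"),)),
                          Pred("Scholar", (Var("x"),))))
```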
Negation x occurs free in ¬φ if and only if x occurs free in φ. x occurs bound in ¬φ if and only if x occurs bound in φ. Binary connectives x occurs free in (φ → ψ) if and only if x occurs free in either φ or ψ. x occurs bound in (φ → ψ) if and only if x occurs bound in either φ or ψ. The same rule applies to any other binary connective in place of →. Quantifiers x occurs free in ∀y φ if and only if x occurs free in φ and x is a different symbol from y. Also, x occurs bound in ∀y φ if and only if x is y or x occurs bound in φ. The same rule holds with ∃ in place of ∀. For example, in ∀x ∀y (P(x) → Q(x, f(x), z)), x and y occur only bound, z occurs only free, and w is neither because it does not occur in the formula. Free and bound variables of a formula need not be disjoint sets: in the formula P(x) → ∀x Q(x), the first occurrence of x, as argument of P, is free while the second one, as argument of Q, is bound. A formula in first-order logic with no free variable occurrences is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence ∃x Phil(x) will be either true or false in a given interpretation. Example: ordered abelian groups In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then: The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z. The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z  ≤  x + y. The expression ∀x ∀y ≤(+(x, y), z) is a formula, which is usually written as ∀x ∀y (x + y ≤ z). This formula has one free variable, z. The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written ∀x ∀y (x + y = y + x).
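The inductive definition of free occurrences can be mirrored as a recursion over the tree type sketched above (again illustrative, not a library API):

```python
def free_vars(f) -> set:
    """Free variables of a term or formula, following the inductive
    definition: atomic case, connectives, and the quantifier case."""
    if isinstance(f, Var):
        return {f.name}
    if isinstance(f, (Func, Pred)):
        return set().union(*(free_vars(a) for a in f.args))
    if isinstance(f, Not):
        return free_vars(f.sub)
    if isinstance(f, Implies):
        return free_vars(f.left) | free_vars(f.right)
    if isinstance(f, (ForAll, Exists)):
        return free_vars(f.body) - {f.var}   # the quantifier binds its variable
    raise TypeError(f)

# P(x) -> forall x. Q(x): x occurs both free (in P) and bound (in Q);
# the free occurrence in P is what survives:
mixed = Implies(Pred("P", (Var("x"),)),
                ForAll("x", Pred("Q", (Var("x"),))))
print(free_vars(mixed))   # {'x'}
print(free_vars(phi))     # set(): phi from above is a sentence
```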
Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃x P(x) states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers. Non-logical symbols are interpreted as follows: The interpretation of an n-ary function symbol is a function from Dn to D. For example, if the domain of discourse is the set of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function which, in this interpretation, is addition. The interpretation of a constant symbol (a function symbol of arity 0) is a function from D0 (a set whose only member is the empty tuple) to D, which can be simply identified with an object in D. For example, an interpretation may assign a particular integer to the constant symbol c. The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of D, giving the arguments for which the predicate is true. For example, an interpretation of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than its second argument. Equivalently, predicate symbols may be assigned Boolean-valued functions from Dn to {true, false}. Evaluation of truth values A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as P(x, y). The truth value of this formula changes depending on the values that x and y denote. First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment: Variables. Each variable x evaluates to μ(x). Functions. Given terms t1, ..., tn that have been evaluated to elements d1, ..., dn of the domain of discourse, and an n-ary function symbol f, the term f(t1, ..., tn) evaluates to the value of the interpretation of f at (d1, ..., dn). Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema. Atomic formulas (1). A formula P(t1, ..., tn) is assigned the value true or false depending on whether (v1, ..., vn) is in the interpretation of P, where v1, ..., vn are the evaluations of the terms t1, ..., tn and the interpretation of P is, by assumption, a subset of Dn. Atomic formulas (2). A formula t1 = t2 is assigned true if t1 and t2 evaluate to the same object of the domain of discourse (see the section on equality below). Logical connectives. A formula in the form ¬φ, φ → ψ, etc. is evaluated according to the truth table for the connective in question, as in propositional logic. Existential quantifiers. A formula ∃x φ(x) is true according to M and μ if there exists an evaluation μ′ of the variables that differs from μ at most regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment μ′. This formal definition captures the idea that ∃x φ(x) is true if and only if there is a way to choose a value for x such that φ(x) is satisfied. Universal quantifiers. A formula ∀x φ(x) is true according to M and μ if φ(x) is true for every pair composed by the interpretation M and some variable assignment μ′ that differs from μ at most on the value of x.
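Over a finite domain the T-schema can be executed directly. A sketch continuing the toy classes above, with a structure modelled as a domain plus dictionaries interpreting function and predicate symbols (an assumed encoding, not a standard one):

```python
def eval_term(t, funcs, env):
    """Evaluate a term under the interpretation `funcs` and assignment `env`."""
    if isinstance(t, Var):
        return env[t.name]
    return funcs[t.name](*(eval_term(a, funcs, env) for a in t.args))

def holds(f, domain, funcs, preds, env):
    """T-schema over a finite domain: truth of formula f in the structure."""
    if isinstance(f, Pred):
        return tuple(eval_term(a, funcs, env) for a in f.args) in preds[f.name]
    if isinstance(f, Not):
        return not holds(f.sub, domain, funcs, preds, env)
    if isinstance(f, Implies):
        return (not holds(f.left, domain, funcs, preds, env)) or \
               holds(f.right, domain, funcs, preds, env)
    if isinstance(f, ForAll):   # vary the assignment only at f.var
        return all(holds(f.body, domain, funcs, preds, {**env, f.var: d})
                   for d in domain)
    if isinstance(f, Exists):
        return any(holds(f.body, domain, funcs, preds, {**env, f.var: d})
                   for d in domain)
    raise TypeError(f)

# Domain {0, 1, 2} with Phil = {0, 1} and Scholar = {0, 1, 2}:
domain = [0, 1, 2]
preds = {"Phil": {(0,), (1,)}, "Scholar": {(0,), (1,), (2,)}}
print(holds(phi, domain, {}, preds, {}))   # True: every Phil is a Scholar
```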
This captures the idea that ∀x φ(x) is true if every possible choice of a value for x causes φ(x) to be true. If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ if and only if it is true according to M and every other variable assignment μ′. There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol cd is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows: Existential quantifiers (alternate). A formula ∃x φ(x) is true according to M if there is some d in the domain of discourse such that φ(cd) holds. Here φ(cd) is the result of substituting cd for every free occurrence of x in φ. Universal quantifiers (alternate). A formula ∀x φ(x) is true according to M if, for every d in the domain of discourse, φ(cd) is true according to M. This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments. Validity, satisfiability, and logical consequence If a sentence φ evaluates to true under a given interpretation M, one says that M satisfies φ; this is denoted M ⊨ φ. A sentence is satisfiable if there is some interpretation under which it is true. This is a bit different from the symbol ⊨ from model theory, where M ⊨ φ denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M's domain to variable symbols of φ". Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x1, ..., xn is said to be satisfied by an interpretation if the formula φ remains true regardless which individuals from the domain of discourse are assigned to its free variables x1, ..., xn. This has the same effect as saying that a formula φ is satisfied if and only if its universal closure ∀x1 ... ∀xn φ is satisfied. A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic. A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ. Algebraizations An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators: Cylindric algebra, by Alfred Tarski and colleagues; Polyadic algebra, by Paul Halmos; Predicate functor logic, mainly due to Willard Quine. These algebras are all lattices that properly extend the two-element Boolean algebra. Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra.
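For a sentence whose non-logical symbols are a handful of predicate letters, satisfiability over a fixed finite domain can be decided by enumerating all interpretations with the evaluator above. This is only a finite sanity check: by the undecidability results discussed later, no such search can settle validity over all models.

```python
from itertools import product

def _subsets(items):
    """All subsets of a list of tuples, as frozensets."""
    for mask in range(2 ** len(items)):
        yield frozenset(t for i, t in enumerate(items) if mask >> i & 1)

def satisfiable_in(sentence, domain, pred_arities):
    """Enumerate every interpretation of the given predicate symbols over
    the finite domain; return a satisfying interpretation, or None."""
    names = list(pred_arities)
    all_tuples = [list(product(domain, repeat=pred_arities[p])) for p in names]
    for choice in product(*[list(_subsets(ts)) for ts in all_tuples]):
        preds = dict(zip(names, choice))
        if holds(sentence, domain, {}, preds, {}):
            return preds
    return None

# "Some Phil is not a Scholar" is satisfiable, already over a 1-element domain:
neg = Exists("x", Not(Implies(Pred("Phil", (Var("x"),)),
                              Pred("Scholar", (Var("x"),)))))
print(satisfiable_in(neg, [0], {"Phil": 1, "Scholar": 1}) is not None)  # True
```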
This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions. First-order theories, models, and elementary classes A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory, and from them other sentences that hold within the theory can be derived. A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory. Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models. A theory is consistent if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete. Empty domains The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class. There are several difficulties with empty domains, however: Many common rules of inference are valid only when the domain of discourse is required to be nonempty. One example is the rule stating that φ ∨ ∃x ψ implies ∃x (φ ∨ ψ) when x is not a free variable in φ. This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted. The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains. Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.
Deductive systems A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs but are completely formalized, unlike natural-language mathematical proofs. A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective. A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area. In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B. Rules of inference A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion. For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.) To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃x (x = y), in the signature of (0, 1, +, ×, =) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃x (x = x + 1), which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃z (z = x + 1), which is again logically valid. The substitution rule demonstrates several common aspects of rules of inference.
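The side condition of the substitution rule is exactly what a capture-avoiding substitution function enforces. A sketch over the earlier tree type, reusing free_vars (the fresh-name scheme is simplistic and purely illustrative):

```python
def subst(f, x, t):
    """Capture-avoiding substitution f[t/x]: replace free occurrences of
    variable x in formula/term f by term t, renaming any bound variable
    that would capture a free variable of t."""
    if isinstance(f, Var):
        return t if f.name == x else f
    if isinstance(f, (Func, Pred)):
        return type(f)(f.name, tuple(subst(a, x, t) for a in f.args))
    if isinstance(f, Not):
        return Not(subst(f.sub, x, t))
    if isinstance(f, Implies):
        return Implies(subst(f.left, x, t), subst(f.right, x, t))
    if isinstance(f, (ForAll, Exists)):
        if f.var == x:                    # x is bound here: nothing to replace
            return f
        if f.var in free_vars(t):         # would capture: rename the bound var
            fresh = f.var + "'"
            body = subst(f.body, f.var, Var(fresh))
            return type(f)(fresh, subst(body, x, t))
        return type(f)(f.var, subst(f.body, x, t))
    raise TypeError(f)

# phi = exists x (x = y); substituting "x + 1" for y renames the bound x,
# reproducing the renaming step described in the text:
phi2 = Exists("x", Pred("=", (Var("x"), Var("y"))))
plus1 = Func("+", (Var("x"), Func("1", ())))
print(subst(phi2, "y", plus1))
# Exists(var="x'", body=Pred('=', args=(Var("x'"), Func('+', ...))))
```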
It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule. Hilbert-style systems and natural deduction A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference. Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof. Sequent calculus The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form A1, ..., An ⊢ B1, ..., Bk, where A1, ..., An, B1, ..., Bk are formulas and the turnstile symbol ⊢ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that (A1 ∧ ... ∧ An) implies (B1 ∨ ... ∨ Bk). Tableaux method Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬A at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D and children C and D. Resolution The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving. The resolution method works only with formulas that are disjunctions of literals (atomic formulas and their negations); arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses A ∨ C and B ∨ ¬C, the conclusion A ∨ B can be obtained. Provable identities Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas in prenex normal form. Some provable identities include: φ ∧ ∃x ψ ⇔ ∃x (φ ∧ ψ) (where x must not occur free in φ) φ ∨ ∀x ψ ⇔ ∀x (φ ∨ ψ) (where x must not occur free in φ) Equality and its axioms There are several different conventions for using equality (or identity) in first-order logic.
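For ground (variable-free) clauses the resolution rule is a one-line set operation; full first-order resolution layers unification on top of this. An illustrative encoding with literals as (atom, negated) pairs:

```python
def resolve(clause1, clause2):
    """All ground resolvents of two clauses. A clause is a frozenset of
    literals; a literal is (atom, negated). From {A, C} and {B, not-C}
    the rule yields {A, B}."""
    resolvents = []
    for atom, neg in clause1:
        if (atom, not neg) in clause2:   # a complementary pair: cancel it
            resolvents.append((clause1 - {(atom, neg)}) |
                              (clause2 - {(atom, not neg)}))
    return resolvents

A, B, C = ("A", False), ("B", False), ("C", False)
notC = ("C", True)
print(resolve(frozenset({A, C}), frozenset({B, notC})))
# [frozenset({('A', False), ('B', False)})]  i.e. the clause "A or B"
```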
The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are: Reflexivity. For each variable x, x = x. Substitution for functions. For all variables x and y, and any function symbol f, x = y → f(..., x, ...) = f(..., y, ...). Substitution for formulas. For any variables x and y and any formula φ(z) with a free variable z, x = y → (φ(x) → φ(y)). These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula: φ(z): f(..., x, ...) = f(..., z, ...) Then x = y → (f(..., x, ...) = f(..., x, ...) → f(..., x, ...) = f(..., y, ...)). Since x = y is given, and f(..., x, ...) = f(..., x, ...) is true by reflexivity, we have f(..., x, ...) = f(..., y, ...). Many other properties of equality are consequences of the axioms above, for example: Symmetry. If x = y then y = x. Transitivity. If x = y and y = z then x = z. First-order logic without equality An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation. When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered. First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted. Defining equality within a theory If a theory has a binary formula A(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
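In a finite structure, the relational definition of equality in the last sentence — s and t are "equal" when no relation distinguishes them in any argument position — can be checked by brute force. A small sketch (interpretations encoded as in the evaluator above; purely illustrative):

```python
from itertools import product

def indiscernible(a, b, domain, preds, arities):
    """Relational definition of equality: a and b are 'equal' if replacing
    one by the other in any single argument of any relation never changes
    whether the relation holds."""
    for name, k in arities.items():
        rel = preds[name]
        for tup in product(domain, repeat=k):
            for i in range(k):
                for u, v in ((a, b), (b, a)):
                    if tup[i] == u:
                        swapped = tup[:i] + (v,) + tup[i + 1:]
                        if (tup in rel) != (swapped in rel):
                            return False
    return True

# Domain {0, 1, 2} with one relation E: 0 and 1 are indiscernible
# ("equal" in the defined sense), while 0 and 2 are not.
E = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}
print(indiscernible(0, 1, [0, 1, 2], {"E": E}, {"E": 2}))  # True
print(indiscernible(0, 2, [0, 1, 2], {"E": E}, {"E": 2}))  # False
```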
Some theories allow other ad hoc definitions of equality: In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s. In set theory with one relation ∈, one may define s = t to be an abbreviation for ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, which can be stated as ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → x = y), with an alternative formulation ∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → ∀z (x ∈ z ↔ y ∈ z)), which says that if sets x and y have the same elements, then they also belong to the same sets. Metalogical properties One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories. Completeness and undecidability Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ. Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem. There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of first-order logic, as well as two-variable logic. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics. The Löwenheim–Skolem theorem The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature.
That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable). The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox. The compactness theorem The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models. The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures). There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x,y) of first-order logic, in the logic of graphs, that expresses the idea that there is a path from x to y. Connectedness can be expressed in second-order logic, however, but not with only existential set quantifiers, as existential second-order logic also enjoys compactness. Lindström's theorem Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type: A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic. A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic. Limitations Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations.
These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe. For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and the counting quantifiers ∃≥n and ∃≤n. Expressiveness The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order. Formalizing natural languages First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages, such as FO(.). Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic". Restrictions, extensions, and variations There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity. Restricted languages First-order logic can be studied in languages with fewer logical symbols than were described above: Because ∃x φ(x) can be expressed as ¬∀x ¬φ(x), and ∀x φ(x) can be expressed as ¬∃x ¬φ(x), either of the two quantifiers ∃ and ∀ can be dropped. Since φ ∨ ψ can be expressed as ¬(¬φ ∧ ¬ψ), and φ ∧ ψ can be expressed as ¬(¬φ ∨ ¬ψ), either ∨ or ∧ can be dropped. In other words, it is sufficient to have ¬ and ∨, or ¬ and ∧, as the only logical connectives. Similarly, it is sufficient to have only ¬ and → as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator. It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol 0 one may use a predicate 0(x) (interpreted as x = 0) and replace every predicate such as P(0,y) with ∃x (0(x) ∧ P(x,y)). A function such as f(x) will similarly be replaced by a predicate F(x,y) interpreted as y = f(x). This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics (a sketch of these axioms follows below). Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. 
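To make the elimination of function symbols just described concrete, the following is a minimal illustration; the symbols f, F and P are hypothetical, chosen here only for exposition, and are not drawn from any particular theory discussed in this article. A unary function symbol f is replaced by a binary predicate F, read as the graph of f, and two auxiliary axioms make F total and single-valued:

```latex
% Rewriting, with F(x,y) read as "y = f(x)":
%   \forall x \, P(f(x))   becomes   \forall x \, \forall y \, ( F(x,y) \to P(y) )
% Auxiliary axioms ensuring F behaves as the graph of a total function:
\[ \forall x \, \exists y \, F(x,y) \]                                      % totality
\[ \forall x \, \forall y \, \forall z \, ( F(x,y) \land F(x,z) \to y = z ) \]  % single-valuedness
```

Under these two axioms the rewritten formula is satisfied in exactly the models where the original one is, which is the sense in which the restriction loses no expressive power.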
The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system. It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied. Many-sorted logic Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts are then called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic. When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic. One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P1 and P2 and the axiom: ∀x (P1(x) ∨ P2(x)) ∧ ¬∃x (P1(x) ∧ P2(x)). Then the elements satisfying P1 are thought of as elements of the first sort, and elements satisfying P2 as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes: ∃x (P1(x) ∧ φ(x)). Additional quantifiers Additional quantifiers can be added to first-order logic. Sometimes it is useful to say that "φ(x) holds for exactly one x", which can be expressed as ∃!x φ(x). This notation, called uniqueness quantification, may be taken to abbreviate a formula such as ∃x (φ(x) ∧ ∀y (φ(y) → y = x)). First-order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others. Bounded quantifiers are often used in the study of set theory or arithmetic. Infinitary logics Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory. Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. 
However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed. The most commonly studied infinitary logics are denoted Lαβ, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is Lωω. In the logic L∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known as Lκω. For example, Lω1ω permits countable conjunctions and disjunctions. The set of free variables in a formula of Lκω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another. In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in Lκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic Lκλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ. Non-classical and modal logics Intuitionistic first-order logic uses intuitionistic rather than classical reasoning; for example, ¬¬φ need not be equivalent to φ, and ¬∀x.φ is in general not equivalent to ∃x.¬φ. First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain, and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds. This allows us to model cases where, for example, Alex is a philosopher, but might have been a mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all. First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical propositional calculus. Fixpoint logic Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators. Higher-order logics The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus ∃x P(x) is a legal first-order formula, but ∃P P(x), which quantifies over the predicate P, is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. 
These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified. Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics. Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics. Automated theorem proving and formal methods Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search. The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct. Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write, results are often formalized as a series of lemmas, for which derivations can be constructed separately. Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. 
Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences. For the problem of model checking, efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, in addition to computational complexity bounds. See also ACL2 — A Computational Logic for Applicative Common Lisp Aristotelian logic Equiconsistency Ehrenfeucht–Fraïssé game Extension by definitions Extension (predicate logic) Herbrandization List of logic symbols Lojban Löwenheim number Nonfirstorderizability Prenex normal form Prior Analytics Prolog Relational algebra Relational model Skolem normal form Tarski's World Truth table Type (model theory) Notes References Andrews, Peter B. (2002); An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, 2nd ed., Berlin: Kluwer Academic Publishers. Available from Springer. Avigad, Jeremy; Donnelly, Kevin; Gray, David; and Raff, Paul (2007); "A formally verified proof of the prime number theorem", ACM Transactions on Computational Logic, vol. 9 no. 1 Barwise, Jon; and Etchemendy, John (2000); Language Proof and Logic, Stanford, CA: CSLI Publications (Distributed by the University of Chicago Press) Bocheński, Józef Maria (2007); A Précis of Mathematical Logic, Dordrecht, NL: D. Reidel, translated from the French and German editions by Otto Bird Ferreirós, José (2001); The Road to Modern Logic — An Interpretation, Bulletin of Symbolic Logic, Volume 7, Issue 4, 2001, pp. 441–484. Hilbert, David; and Ackermann, Wilhelm (1950); Principles of Mathematical Logic, Chelsea (English translation of Grundzüge der theoretischen Logik, 1928 German first edition) Hodges, Wilfrid (2001); "Classical Logic I: First-Order Logic", in Goble, Lou (ed.); The Blackwell Guide to Philosophical Logic, Blackwell Ebbinghaus, Heinz-Dieter; Flum, Jörg; and Thomas, Wolfgang (1994); Mathematical Logic, Undergraduate Texts in Mathematics, Berlin, DE/New York, NY: Springer-Verlag, Second Edition. Tarski, Alfred and Givant, Steven (1987); A Formalization of Set Theory without Variables. Vol. 41 of American Mathematical Society colloquium publications, Providence, RI: American Mathematical Society. External links Stanford Encyclopedia of Philosophy: Shapiro, Stewart; "Classical Logic". Covers syntax, model theory, and metatheory for first-order logic in the natural deduction style. Magnus, P. D.; forall x: an introduction to formal logic. Covers formal semantics and proof theory for first-order logic. Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order logic and the axiomatic set theory ZFC. Principia Mathematica modernized. Podnieks, Karl; Introduction to mathematical logic Cambridge Mathematical Tripos notes (typeset by John Fremlin). These notes cover part of a past Cambridge Mathematical Tripos course taught to undergraduate students (usually) within their third year. The course is entitled "Logic, Computation and Set Theory" and covers Ordinals and cardinals, Posets and Zorn's Lemma, Propositional logic, Predicate logic, Set theory and Consistency issues related to ZFC and other set theories. Tree Proof Generator can validate or invalidate formulas of first-order logic through the semantic tableaux method. Systems of formal logic Predicate logic Model theory
First-order logic
[ "Mathematics" ]
14,325
[ "Mathematical logic", "Predicate logic", "Model theory", "Basic concepts in set theory" ]
10,987
https://en.wikipedia.org/wiki/Functor
In mathematics, specifically category theory, a functor is a mapping between categories. Functors were first considered in algebraic topology, where algebraic objects (such as the fundamental group) are associated to topological spaces, and maps between these algebraic objects are associated to continuous maps between spaces. Nowadays, functors are used throughout modern mathematics to relate various categories. Thus, functors are important in all areas within mathematics to which category theory is applied. The words category and functor were borrowed by mathematicians from the philosophers Aristotle and Rudolf Carnap, respectively. The latter used functor in a linguistic context; see function word. Definition Let C and D be categories. A functor F from C to D is a mapping that associates each object X in C to an object F(X) in D, associates each morphism f : X → Y in C to a morphism F(f) : F(X) → F(Y) in D such that the following two conditions hold: F(idX) = idF(X) for every object X in C, F(g ∘ f) = F(g) ∘ F(f) for all morphisms f : X → Y and g : Y → Z in C. That is, functors must preserve identity morphisms and composition of morphisms. Covariance and contravariance There are many constructions in mathematics that would be functors but for the fact that they "turn morphisms around" and "reverse composition". We then define a contravariant functor F from C to D as a mapping that associates each object X in C with an object F(X) in D, associates each morphism f : X → Y in C with a morphism F(f) : F(Y) → F(X) in D such that the following two conditions hold: F(idX) = idF(X) for every object X in C, F(g ∘ f) = F(f) ∘ F(g) for all morphisms f : X → Y and g : Y → Z in C. Variance of functor (composite) The composite of two functors of the same variance is covariant: The composite of two functors of opposite variance is contravariant. Note that contravariant functors reverse the direction of composition. Ordinary functors are also called covariant functors in order to distinguish them from contravariant ones. Note that one can also define a contravariant functor as a covariant functor on the opposite category C^op. Some authors prefer to write all expressions covariantly. That is, instead of saying F : C → D is a contravariant functor, they simply write F : C^op → D (or sometimes F : C → D^op) and call it a functor. Contravariant functors are also occasionally called cofunctors. There is a convention which refers to "vectors"—i.e., vector fields, elements of the space of sections of a tangent bundle—as "contravariant" and to "covectors"—i.e., 1-forms, elements of the space of sections of a cotangent bundle—as "covariant". This terminology originates in physics, and its rationale has to do with the position of the indices ("upstairs" and "downstairs") in expressions such as x′ⁱ = Λⁱⱼ xʲ for vector coordinates or ω′ᵢ = Λᵢʲ ωⱼ for covector coordinates. In this formalism it is observed that the coordinate transformation symbol Λ acts on the "covector coordinates" "in the same way" as on the basis vectors—whereas it acts "in the opposite way" on the "vector coordinates" (but "in the same way" as on the basis covectors). This terminology is contrary to the one used in category theory because it is the covectors that have pullbacks in general and are thus contravariant, whereas vectors in general are covariant since they can be pushed forward. See also Covariance and contravariance of vectors. Opposite functor Every functor F : C → D induces the opposite functor F^op : C^op → D^op, where C^op and D^op are the opposite categories to C and D. By definition, F^op maps objects and morphisms in the identical way as F does. Since C^op does not coincide with C as a category, and similarly for D^op, F^op is distinguished from F. For example, when composing F : C0 → C1 with G : C1^op → C2, one should use either G ∘ F^op or G^op ∘ F. Note that, following the property of opposite category, (F^op)^op = F. 
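The two conditions in the definition above translate directly into code; the "Computer implementations" section below mentions Haskell's Functor class. As a minimal, hedged sketch (the class is renamed MyFunctor here only to avoid clashing with the Functor class that Haskell's Prelude already exports):

```haskell
-- A functor sends a morphism (a function a -> b) to a morphism
-- (a function f a -> f b), covariantly.
class MyFunctor f where
  fmapM :: (a -> b) -> f a -> f b

-- A standard instance: the Maybe type constructor.
instance MyFunctor Maybe where
  fmapM _ Nothing  = Nothing
  fmapM g (Just x) = Just (g x)

-- The two functor axioms, stated as laws the instance must satisfy
-- (verified by equational reasoning rather than by the compiler):
--   fmapM id      == id                  -- preserves identity morphisms
--   fmapM (g . h) == fmapM g . fmapM h   -- preserves composition
```

The laws are exactly the identity and composition conditions of the definition, with morphisms taken in the category of Haskell types and functions.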
Bifunctors and multifunctors A bifunctor (also known as a binary functor) is a functor whose domain is a product category. For example, the Hom functor is of the type C^op × C → Set. It can be seen as a functor in two arguments; it is contravariant in one argument, covariant in the other. A multifunctor is a generalization of the functor concept to n variables. So, for example, a bifunctor is a multifunctor with n = 2. Properties Two important consequences of the functor axioms are: F transforms each commutative diagram in C into a commutative diagram in D; if f is an isomorphism in C, then F(f) is an isomorphism in D. One can compose functors, i.e. if F is a functor from A to B and G is a functor from B to C then one can form the composite functor G ∘ F from A to C. Composition of functors is associative where defined. The identity for composition of functors is the identity functor. This shows that functors can be considered as morphisms in categories of categories, for example in the category of small categories. A small category with a single object is the same thing as a monoid: the morphisms of a one-object category can be thought of as elements of the monoid, and composition in the category is thought of as the monoid operation. Functors between one-object categories correspond to monoid homomorphisms. So in a sense, functors between arbitrary categories are a kind of generalization of monoid homomorphisms to categories with more than one object. Examples Diagram For categories C and J, a diagram of type J in C is a covariant functor D : J → C. (Category theoretical) presheaf For categories C and J, a J-presheaf on C is a contravariant functor D : C → J. In the special case when J is Set, the category of sets and functions, D is called a presheaf on C. Presheaves (over a topological space) If X is a topological space, then the open sets in X form a partially ordered set Open(X) under inclusion. Like every partially ordered set, Open(X) forms a small category by adding a single arrow U → V if and only if U ⊆ V. Contravariant functors on Open(X) are called presheaves on X. For instance, by assigning to every open set U the associative algebra of real-valued continuous functions on U, one obtains a presheaf of algebras on X. Constant functor The functor which maps every object of C to a fixed object X in D and every morphism in C to the identity morphism on X. Such a functor is called a constant or selection functor. Endofunctor A functor that maps a category to that same category; e.g., polynomial functor. Identity functor In category C, written 1C or idC, maps an object to itself and a morphism to itself. The identity functor is an endofunctor. Diagonal functor The diagonal functor is defined as the functor from D to the functor category D^C which sends each object in D to the constant functor at that object. Limit functor For a fixed index category J, if every functor J → C has a limit (for instance if C is complete), then the limit functor C^J → C assigns to each functor its limit. The existence of this functor can be proved by realizing that it is the right-adjoint to the diagonal functor and invoking the Freyd adjoint functor theorem. This requires a suitable version of the axiom of choice. Similar remarks apply to the colimit functor (which assigns to every functor its colimit, and is covariant). Power set functor The power set functor P : Set → Set maps each set to its power set and each function f : X → Y to the map which sends U ⊆ X to its image f(U) ⊆ Y. One can also consider the contravariant power set functor which sends f to the map which sends V ⊆ Y to its inverse image f⁻¹(V) ⊆ X. For example, if X = {0, 1} then P(X) = {∅, {0}, {1}, X}. Suppose f(0) = ∅ and f(1) = X. 
Then P(f) is the function which sends any subset U of X to its image f(U), which in this case means U ↦ f[U], where f[U] denotes the image of U under f, so this could also be written as (P(f))(U) = f[U]. For the other values, P(f)(∅) = ∅, P(f)({0}) = {f(0)} = {∅}, P(f)({1}) = {f(1)} = {X}, and P(f)({0, 1}) = {f(0), f(1)} = {∅, X}. Note that P(f)({0, 1}) consequently generates the trivial topology on X (see the code sketch after this list of examples). Also note that although the function f in this example mapped X to the power set of X, that need not be the case in general. The map which assigns to every vector space V its dual space V* and to every linear map f : V → W its dual or transpose f* : W* → V* is a contravariant functor from the category of all vector spaces over a fixed field to itself. Fundamental group Consider the category of pointed topological spaces, i.e. topological spaces with distinguished points. The objects are pairs (X, x0), where X is a topological space and x0 is a point in X. A morphism from (X, x0) to (Y, y0) is given by a continuous map f : X → Y with f(x0) = y0. To every topological space X with distinguished point x0, one can define the fundamental group based at x0, denoted π1(X, x0). This is the group of homotopy classes of loops based at x0, with the group operation of concatenation. If f : (X, x0) → (Y, y0) is a morphism of pointed spaces, then every loop in X with base point x0 can be composed with f to yield a loop in Y with base point y0. This operation is compatible with the homotopy equivalence relation and the composition of loops, and we get a group homomorphism from π1(X, x0) to π1(Y, y0). We thus obtain a functor from the category of pointed topological spaces to the category of groups. In the category of topological spaces (without distinguished point), one considers homotopy classes of generic curves, but they cannot be composed unless they share an endpoint. Thus one has the fundamental groupoid instead of the fundamental group, and this construction is functorial. Algebra of continuous functions A contravariant functor from the category of topological spaces (with continuous maps as morphisms) to the category of real associative algebras is given by assigning to every topological space X the algebra C(X) of all real-valued continuous functions on that space. Every continuous map f : X → Y induces an algebra homomorphism C(f) : C(Y) → C(X) by the rule C(f)(φ) = φ ∘ f for every φ in C(Y). Tangent and cotangent bundles The map which sends every differentiable manifold to its tangent bundle and every smooth map to its derivative is a covariant functor from the category of differentiable manifolds to the category of vector bundles. Doing this construction pointwise gives the tangent space, a covariant functor from the category of pointed differentiable manifolds to the category of real vector spaces. Likewise, cotangent space is a contravariant functor, essentially the composition of the tangent space with the dual space above. Group actions/representations Every group G can be considered as a category with a single object whose morphisms are the elements of G. A functor from G to Set is then nothing but a group action of G on a particular set, i.e. a G-set. Likewise, a functor from G to the category of vector spaces, VectK, is a linear representation of G. In general, a functor from G to a category C can be considered as an "action" of G on an object in the category C. If C is a group, then this action is a group homomorphism. Lie algebras Assigning to every real (complex) Lie group its real (complex) Lie algebra defines a functor. Tensor products If C denotes the category of vector spaces over a fixed field, with linear maps as morphisms, then the tensor product V ⊗ W defines a functor C × C → C which is covariant in both arguments. 
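Returning to the covariant power set functor described above, here is a small sketch of its action on morphisms, using finite sets from the standard containers library (the name powerMap is hypothetical, introduced only for this illustration):

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- The morphism part of the covariant power set functor on finite
-- sets: a function g : a -> b is sent to the map taking a subset u
-- to its image g[u].
powerMap :: Ord b => (a -> b) -> Set a -> Set b
powerMap g u = Set.map g u

main :: IO ()
main = do
  let u = Set.fromList [0, 1 :: Int]
  -- The image of {0,1} under (+1) is {1,2}.
  print (powerMap (+ 1) u)
```

Functoriality here amounts to powerMap id == id and powerMap (g . h) == powerMap g . powerMap h, mirroring the two axioms in the definition.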
Forgetful functors The functor which maps a group to its underlying set and a group homomorphism to its underlying function of sets is a functor. Functors like these, which "forget" some structure, are termed forgetful functors. Another example is the functor which maps a ring to its underlying additive abelian group. Morphisms in Rng (ring homomorphisms) become morphisms in Ab (abelian group homomorphisms). Free functors Going in the opposite direction of forgetful functors are free functors. The free functor sends every set X to the free group generated by X. Functions get mapped to group homomorphisms between free groups. Free constructions exist for many categories based on structured sets. See free object. Homomorphism groups To every pair A, B of abelian groups one can assign the abelian group Hom(A, B) consisting of all group homomorphisms from A to B. This is a functor which is contravariant in the first and covariant in the second argument, i.e. it is a functor Ab^op × Ab → Ab (where Ab denotes the category of abelian groups with group homomorphisms). If f : A1 → A2 and g : B1 → B2 are morphisms in Ab, then the group homomorphism Hom(f, g) : Hom(A2, B1) → Hom(A1, B2) is given by φ ↦ g ∘ φ ∘ f. See Hom functor. Representable functors We can generalize the previous example to any category C. To every pair X, Y of objects in C one can assign the set Hom(X, Y) of morphisms from X to Y. This defines a functor to Set which is contravariant in the first argument and covariant in the second, i.e. it is a functor C^op × C → Set. If f : X1 → X2 and g : Y1 → Y2 are morphisms in C, then the map Hom(f, g) : Hom(X2, Y1) → Hom(X1, Y2) is given by φ ↦ g ∘ φ ∘ f. Functors like these are called representable functors. An important goal in many settings is to determine whether a given functor is representable. Relation to other categorical concepts Let C and D be categories. The collection of all functors from C to D forms the objects of a category: the functor category. Morphisms in this category are natural transformations between functors. Functors are often defined by universal properties; examples are the tensor product, the direct sum and direct product of groups or vector spaces, construction of free groups and modules, direct and inverse limits. The concepts of limit and colimit generalize several of the above. Universal constructions often give rise to pairs of adjoint functors. Computer implementations Functors sometimes appear in functional programming. For instance, the programming language Haskell has a class Functor where fmap is a polytypic function used to map functions (morphisms on Hask, the category of Haskell types) between existing types to functions between some new types (a small sketch in this spirit appears after the Definition section above). See also Anafunctor Profunctor Functor category Kan extension Pseudofunctor Notes References External links André Joyal, CatLab, a wiki project dedicated to the exposition of categorical mathematics J. Adamek, H. Herrlich, G. Stecker, Abstract and Concrete Categories-The Joy of Cats Stanford Encyclopedia of Philosophy: "Category Theory" — by Jean-Pierre Marquis. Extensive bibliography. List of academic conferences on category theory Baez, John, 1996, "The Tale of n-categories." An informal introduction to higher order categories. WildCats is a category theory package for Mathematica. Manipulation and visualization of objects, morphisms, categories, functors, natural transformations, universal properties. The catsters, a YouTube channel about category theory. Video archive of recorded talks relevant to categories, logic and the foundations of physics. 
Interactive Web page which generates examples of categorical constructions in the category of finite sets.
Functor
[ "Mathematics" ]
3,160
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations", "Functors", "Category theory" ]
11,001
https://en.wikipedia.org/wiki/Fred%20Hoyle
Sir Fred Hoyle (24 June 1915 – 20 August 2001) was an English astronomer who formulated the theory of stellar nucleosynthesis and was one of the authors of the influential B2FH paper. He also held controversial stances on other scientific matters—in particular his rejection of the "Big Bang" theory (a term coined by him on BBC Radio) in favor of the "steady-state model", and his promotion of panspermia as the origin of life on Earth. He spent most of his working life at St John's College, Cambridge and served as the founding director of the Institute of Theoretical Astronomy at Cambridge. Hoyle also wrote science fiction novels, short stories and radio plays, co-created television serials, and co-authored twelve books with his son, Geoffrey Hoyle. Biography Early life Hoyle was born near Bingley in Gilstead, West Riding of Yorkshire, England. His father Ben Hoyle was a violinist and worked in the wool trade in Bradford, and served as a machine gunner in the First World War. His mother, Mabel Pickard, had studied music at the Royal College of Music in London and later worked as a cinema pianist. Hoyle was educated at Bingley Grammar School and read mathematics at Emmanuel College, Cambridge. As a youth, he sang in the choir at the local Anglican church. In 1936, Hoyle shared the Mayhew Prize with George Stanley Rushbrooke. Career In late 1940, Hoyle left Cambridge to go to Portsmouth to work for the Admiralty on radar research, for example devising a method to get the altitude of incoming aeroplanes. He was also put in charge of countermeasures against the radar-guided guns found on the Graf Spee after its scuttling in the River Plate. Britain's radar project was a large-scale operation, and was probably the inspiration for the large British project in Hoyle's novel The Black Cloud. Two colleagues in this war work were Hermann Bondi and Thomas Gold, and the three had many discussions on cosmology. The radar work involved several trips to North America, where he took the opportunity to visit astronomers. On one trip to the US, he learned about supernovae at Caltech and Mount Palomar and, in Canada, the nuclear physics of plutonium implosion and explosion; he noticed some similarity between the two and started thinking about supernova nucleosynthesis. He had an intuition at the time: "I will make a name for myself if this works out" (he published his prescient and groundbreaking paper in 1954). He also formed a group at Cambridge exploring stellar nucleosynthesis in ordinary stars and was bothered by the paucity of stellar carbon production in existing models. He noticed that one existing process would be made a billion times more productive if the carbon-12 nucleus had a resonance at 7.7 MeV, but nuclear physicists at the time had not observed such a resonance. On another trip, he visited the nuclear physics group at Caltech, spent a few months of sabbatical there and persuaded them against their scepticism to find the Hoyle state in carbon-12, from which a full theory of stellar nucleosynthesis was developed, co-authored by Hoyle and members of the Caltech group. In 1945, after the war ended, Hoyle returned to Cambridge University as a lecturer at St John's College, Cambridge (where he had been a Fellow since 1939). Hoyle's Cambridge years, 1945–1973, saw him rise to the top of world astrophysics theory, on the basis of a startling originality of ideas covering a wide range of topics. In 1958, Hoyle was appointed Plumian Professor of Astronomy and Experimental Philosophy in Cambridge University. 
In 1967, he became the founding director of the Institute of Theoretical Astronomy (subsequently renamed the Institute of Astronomy, Cambridge), where his innovative leadership quickly led to this institution becoming one of the premier groups in the world for theoretical astrophysics. In 1971, he was invited to deliver the MacMillan Memorial Lecture to the Institution of Engineers and Shipbuilders in Scotland. He chose the subject "Astronomical Instruments and their Construction". Hoyle was knighted in 1972. Although the occupant of two distinguished offices, by 1972 Hoyle had become unhappy with his life in Cambridge. A dispute over election to a professorial chair led to Hoyle resigning as Plumian professor in 1972. The following year he also resigned the directorship of the institute. Explaining his actions, he later wrote: "I do not see any sense in continuing to skirmish on a battlefield where I can never hope to win. The Cambridge system is effectively designed to prevent one ever establishing a directed policy - key decisions can be upset by ill-informed and politically motivated committees. To be effective in this system one must for ever be watching one's colleagues, almost like a Robespierre spy system. If one does so, then of course little time is left for any real science." After leaving Cambridge, Hoyle wrote several popular science and science fiction books, as well as presenting lectures around the world, partly to provide a means of support. Hoyle was still a member of the joint policy committee (since 1967), during the planning stage for the 150-inch Anglo-Australian Telescope at Siding Spring Observatory in New South Wales. He became chairman of the Anglo-Australian Telescope board in 1973, and presided at its inauguration in 1974 by Charles, Prince of Wales. Decline and death After his resignation from Cambridge, Hoyle moved to the Lake District and occupied his time with treks across the moors, writing books, visiting research centres around the world, and working on science ideas (that have been largely rejected). On 24 November 1997, while hiking across moorlands in west Yorkshire, near his childhood home in Gilstead, Hoyle fell into a steep ravine called Shipley Glen. He was located about 12 hours later by a party using search dogs. He was hospitalised for two months with a broken shoulder bone, and pneumonia and kidney problems, both resulting from hypothermia. Thereafter he entered a marked decline, suffering from memory and mental agility problems. In 2001, he suffered a series of strokes and died in Bournemouth on 20 August of that year. Views and contributions Origin of nucleosynthesis Hoyle authored the first two research papers ever published on synthesis of chemical elements heavier than helium by stellar nuclear reactions. The first of these in 1946 showed that cores of stars will evolve to temperatures of billions of degrees, much hotter than temperatures considered for thermonuclear origin of stellar power in main-sequence stars. Hoyle showed that at such high temperatures the element iron can become much more abundant than other heavy elements owing to thermal equilibrium among nuclear particles, explaining the high natural abundance of iron. This idea would later be called the e process. Hoyle's second foundational nucleosynthesis publication, published in 1954, showed that the elements between carbon and iron cannot be synthesized by such equilibrium processes. 
He attributed those elements to specific nuclear fusion reactions between abundant constituents in concentric shells of evolved massive, pre-supernova stars. This startlingly modern picture is the accepted paradigm today for the supernova nucleosynthesis of these primary elements. In the mid-1950s, Hoyle became the leader of a group of talented experimental and theoretical physicists who met in Cambridge: William Alfred Fowler, Margaret Burbidge, and Geoffrey Burbidge. This group systematized basic ideas of how all the chemical elements in our universe were created, with this now being a field called nucleosynthesis. Famously, in 1957, this group produced the B2FH paper (known for the initials of the four authors) in which the field of nucleosynthesis was organized into complementary nuclear processes. They added much new material on the synthesis of heavy elements by neutron-capture reactions, the so-called s process and the r process. So influential did the B2FH paper become that for the remainder of the twentieth century it became the default citation of almost all researchers wishing to cite an accepted origin for nucleosynthesis theory, and as a result, the path-breaking Hoyle 1954 paper fell into obscurity. Historical research in the 21st century has brought Hoyle's 1954 paper back to scientific prominence. Those historical arguments were first presented to a gathering of nucleosynthesis experts attending a 2007 conference at Caltech organized after the deaths of both Fowler and Hoyle to celebrate the 50th anniversary of the publication of B2FH. Ironically, the B2FH paper did not review Hoyle's 1954 supernova-shells attribution of the origin of elements between silicon and iron despite Hoyle's co-authorship of B2FH. Based on his many personal discussions with Hoyle, Donald D. Clayton has attributed this seemingly inexplicable oversight in B2FH to the lack of proofreading by Hoyle of the draft composed at Caltech in 1956 by G. R. Burbidge and E. M. Burbidge. The second of Hoyle's nucleosynthesis papers also introduced an interesting use of the anthropic principle, which was not then known by that name. In trying to work out the steps of stellar nucleosynthesis, Hoyle calculated that one particular nuclear reaction, the triple-alpha process, which generates carbon from helium, would require the carbon nucleus to have a very specific resonance energy and spin for it to work. The large amount of carbon in the universe, which makes it possible for carbon-based life-forms of any kind to exist, demonstrated to Hoyle that this nuclear reaction must work. Based on this notion, Hoyle therefore predicted the values of the energy, the nuclear spin and the parity of the compound state in the carbon nucleus formed by three alpha particles (helium nuclei), which was later borne out by experiment. This energy level, while needed to produce carbon in large quantities, was statistically very unlikely to fall where it does in the scheme of carbon energy levels, a point Hoyle himself later wrote about. His co-worker William Alfred Fowler eventually won the Nobel Prize for Physics in 1983 (with Subrahmanyan Chandrasekhar), but Hoyle's original contribution was overlooked by the electors, and many were surprised that such a notable astronomer missed out. Fowler himself, in an autobiographical sketch, affirmed Hoyle's pioneering efforts. Rejection of the Big Bang While having no argument with the Lemaître theory (later confirmed by Edwin Hubble's observations) that the universe was expanding, Hoyle disagreed on its interpretation. 
He found the idea that the universe had a beginning to be pseudoscience, resembling arguments for a creator, "for it's an irrational process, and can't be described in scientific terms" (see Kalam cosmological argument). Instead, Hoyle, along with Thomas Gold and Hermann Bondi (with whom he had worked on radar in the Second World War), in 1948 began to argue for the universe as being in a "steady state" and formulated their Steady State theory. The theory tried to explain how the universe could be eternal and essentially unchanging while still having the galaxies we observe moving away from each other. The theory hinged on the creation of matter between galaxies over time, so that even though galaxies get further apart, new ones that develop between them fill the space they leave. The resulting universe is in a "steady state" in the same manner that a flowing river is—the individual water molecules are moving away but the overall river remains the same. The theory was one alternative to the Big Bang which, like the Big Bang, agreed with key observations of the day, namely Hubble's red shift observations, and Hoyle was a strong critic of the Big Bang. He coined the term "Big Bang" on BBC radio's Third Programme broadcast on 28 March 1949. It was said by George Gamow and his opponents that Hoyle intended to be pejorative, and the script from which he read aloud was interpreted by his opponents to be "vain, one-sided, insulting, not worthy of the BBC". Hoyle explicitly denied that he was being insulting and said it was just a striking image meant to emphasize the difference between the two theories for the radio audience. In another BBC interview, he said, "The reason why scientists like the "Big Bang" is because they are overshadowed by the Book of Genesis. It is deep within the psyche of most scientists to believe in the first page of Genesis". Hoyle had a famously heated argument with Martin Ryle of the Cavendish Radio Astronomy Group about Hoyle's steady state theory, which somewhat restricted collaboration between the Cavendish group and the Cambridge Institute of Astronomy during the 1960s. Hoyle, unlike Gold and Bondi, offered an explanation for the appearance of new matter by postulating the existence of what he dubbed the "creation field", or just the "C-field", which had negative pressure in order to be consistent with the conservation of energy and drive the expansion of the universe. This C-field is the same as the later "de Sitter solution" for cosmic inflation, but the C-field model acts much more slowly than the de Sitter inflation model. They jointly argued that continuous creation was no more inexplicable than the appearance of the entire universe from nothing, although it had to be done on a regular basis. In the end, mounting observational evidence convinced most cosmologists that the steady-state model was incorrect and that the Big Bang theory agreed better with observations, although Hoyle continued to support and develop his theory. In 1993, in an attempt to explain some of the evidence against the steady-state theory, he presented a modified version called "quasi-steady state cosmology" (QSS), but the theory is not widely accepted. The evidence that resulted in the Big Bang's victory over the steady-state model included the discovery of the cosmic microwave background radiation in the 1960s and the distribution of "young galaxies" and quasars throughout the Universe in the 1980s, which indicated a more consistent age estimate of the universe. 
Hoyle died in 2001 having never accepted the validity of the Big Bang theory. Theory of gravity Together with Narlikar, Hoyle developed the Hoyle–Narlikar theory of gravity in the 1960s. It made predictions that were roughly the same as Einstein's general relativity, but it incorporated Mach's Principle, which Einstein had tried but failed to incorporate in his theory. The Hoyle–Narlikar theory fails several tests, including consistency with the microwave background. It was motivated by their belief in the steady-state model of the universe. Rejection of Earth-based abiogenesis In his later years, Hoyle became a staunch critic of theories of abiogenesis to explain the origin of life on Earth. With Chandra Wickramasinghe, Hoyle promoted the hypothesis that the first life on Earth began in space, spreading through the universe via panspermia, and that evolution on Earth is influenced by a steady influx of viruses arriving via comets. His belief that comets had a significant percentage of organic compounds was well ahead of his time, as the dominant views in the 1970s and 1980s were that comets largely consisted of water-ice, and the presence of organic compounds was then highly controversial. Wickramasinghe wrote in 2003: "In the highly polarized polemic between Darwinism and creationism, our position is unique. Although we do not align ourselves with either side, both sides treat us as opponents. Thus we are outsiders with an unusual perspective—and our suggestion for a way out of the crisis has not yet been considered." Hoyle and Wickramasinghe advanced several instances where they say outbreaks of illnesses on Earth are of extraterrestrial origins, including the 1918 flu pandemic, and certain outbreaks of polio and mad cow disease. For the 1918 flu pandemic, they hypothesized that cometary dust brought the virus to Earth simultaneously at multiple locations—a view almost universally dismissed by experts on this pandemic. In 1982, Hoyle presented Evolution from Space for the Royal Institution's Omni Lecture. After considering what he thought of as a very remote possibility of Earth-based abiogenesis, he concluded that it could effectively be ruled out. In his 1982/1984 books Evolution from Space (co-authored with Chandra Wickramasinghe), Hoyle calculated that the chance of obtaining the required set of enzymes for even the simplest living cell without panspermia was one in 10^40,000. Since the number of atoms in the known universe is infinitesimally tiny by comparison (about 10^80), he argued that Earth as life's place of origin could be ruled out. Though Hoyle declared himself an atheist, this apparent suggestion of a guiding hand led him to the conclusion that "a superintellect has monkeyed with physics, as well as with chemistry and biology, and ... there are no blind forces worth speaking about in nature." He would go on to compare the random emergence of even the simplest cell without panspermia to the likelihood that "a tornado sweeping through a junk-yard might assemble a Boeing 747 from the materials therein" and to compare the chance of obtaining even a single functioning protein by chance combination of amino acids to a solar system full of blind men solving Rubik's Cubes simultaneously. This is known as "the junkyard tornado", or "Hoyle's Fallacy". Those who advocate the intelligent design (ID) philosophy sometimes cite Hoyle's work in this area to support the claim that the universe was fine tuned to allow intelligent life to be possible. 
Other opinions While Hoyle was well-regarded for his works on nucleosynthesis and science popularization, he held positions on a wide range of scientific issues that were in direct opposition to the prevailing theories of the scientific community. Paul Davies describes how he "loved his maverick personality and contempt for orthodoxy", quoting Hoyle as saying "I don't care what they think" about his theories on discrepant redshift, and "it is better to be interesting and wrong than boring and right". Hoyle often expressed anger against the labyrinthine and petty politics at Cambridge and frequently feuded with members and institutions of all levels of the British astronomy community, leading to his resignation from Cambridge in September 1971 over the way he thought Donald Lynden-Bell was chosen to replace retiring professor Roderick Oliver Redman behind his back. According to biographer Simon Mitton, Hoyle was crestfallen because he felt that his colleagues at Cambridge were unsupportive. In addition to his views on steady state theory and panspermia, Hoyle also supported the following controversial hypotheses and speculations: The correlation of flu epidemics with the sunspot cycle, with epidemics occurring at the minimum of the cycle. The idea was that flu contagion was scattered in the interstellar medium and reached Earth only when the solar wind had minimum power. The claim that two fossil Archaeopteryx specimens were man-made fakes. The theory of abiogenic petroleum, held by Hoyle and by Thomas Gold, where natural hydrocarbons (oil and natural gas) are explained as the result of deep carbon deposits, instead of fossilized organic material. This theory is dismissed by the mainstream petroleum geochemistry community. In his 1977 book On Stonehenge, Hoyle supported Gerald Hawkins's proposal that the fifty-six Aubrey holes at Stonehenge were used as a system for neolithic Britons to predict eclipses, using them in the daily positioning of marker stones. Using the Aubrey holes for predicting lunar eclipses was originally proposed by Gerald Hawkins in his book on the subject, Stonehenge Decoded (1965). Nobel Prize for Physics Hoyle was also at the centre of two unrelated controversies involving the politics of selecting recipients of the Nobel Prize for Physics. The first arose when the 1974 prize went in part to Antony Hewish for his leading role in the discovery of pulsars. Hoyle made an off-the-cuff remark to a reporter in Montreal that "Yes, Jocelyn Bell was the actual discoverer, not Hewish, who was her supervisor, so she should have been included." This remark received widespread international coverage. Worried about being misunderstood, Hoyle carefully composed a letter of explanation to The Times. The 1983 prize went in part to William Alfred Fowler "for his theoretical and experimental studies of the nuclear reactions of importance in the formation of the chemical elements in the universe" despite Hoyle having been the inventor of the theory of nucleosynthesis in the stars with two research papers published shortly after WWII. So some suspicion arose that Hoyle was denied the third share of this prize because of his earlier public disagreement with the 1974 award. British scientist Harry Kroto later said that the Nobel Prize is not just an award for a piece of work, but a recognition of a scientist's overall reputation, and Hoyle's championing of many disreputable and disproven ideas may have counted against him. 
In his obituary, Nature editor and fellow Briton John Maddox called it "shameful" that Fowler had been rewarded with a Nobel prize and Hoyle had not. Media appearances Hoyle appeared in a series of radio talks on astronomy for the BBC in the 1950s; these were collected in the book The Nature of the Universe, and he went on to write a number of other popular science books. In the play Sur la route de Montalcino, the character of Fred Hoyle confronts Georges Lemaître on a fictional journey to the Vatican in 1957. Hoyle appeared in the 1973 short film Take the World From Another Point of View. In the 2004 television movie Hawking, Fred Hoyle is played by Peter Firth. In the movie, Stephen Hawking (played by Benedict Cumberbatch) publicly confronts Hoyle at a Royal Society lecture in summer 1964 about a mistake he found in his latest publication. Honours Awards Elected a member of the American Academy of Arts and Sciences (1964) Elected a Fellow of the Royal Society (FRS) in 1957 Gold Medal of the Royal Astronomical Society (1968) Bakerian Lecture (1968) Elected member of the United States National Academy of Sciences (1969) Bruce Medal (1970) Henry Norris Russell Lectureship (1971) Jansky Lectureship before the National Radio Astronomy Observatory Knighthood (1972) President of the Royal Astronomical Society (1971–1973) Honorary Fellow of St John's College, Cambridge (1973–2001) Royal Medal (1974) Klumpke-Roberts Award of the Astronomical Society of the Pacific (1977) Elected member of the American Philosophical Society (1980) Balzan Prize for Astrophysics: evolution of stars (1994, with Martin Schwarzschild) Crafoord Prize from the Royal Swedish Academy of Sciences, with Edwin Salpeter (1997) Named after him Hoyle Building, Institute of Astronomy, Cambridge Asteroid 8077 Hoyle Janibacter hoylei, species of bacteria discovered by ISRO scientists Sir Fred Hoyle Way, a stretch of the A650 dual carriageway in Bingley. Institute of Physics Fred Hoyle Medal and Prize Memorabilia The Fred Hoyle Collection at St John's College Library contains "a pair of walking boots, five boxes of photographs, two ice axes, some dental X-rays, a telescope, ten large film reels and an unpublished opera" in addition to 150 document boxes of papers. Bibliography Non-fiction The Nature of the Universe – a series of broadcast lectures, Basil Blackwell, Oxford 1950 (early use of the Big Bang phrase) Frontiers of Astronomy, Heinemann Educational Books Ltd, London, 1955. Burbidge, E. M., Burbidge, G. R., Fowler, W. A. and Hoyle, F., "Synthesis of the Elements in Stars", Revs. Mod. Physics 29:547–650, 1957, the famous B2FH paper after their initials, for which Hoyle is most famous among professional cosmologists. Astronomy, A history of man's investigation of the universe, Crescent Books, Inc., London 1962, Of men and galaxies, Seattle University of Washington, 1964, Galaxies, Nuclei, and Quasars, Harper & Row, Publishers, New York, 1965, Nicolaus Copernicus, Heinemann Educational Books Ltd., London, p. 78, 1973 Astronomy and Cosmology: A Modern Course, 1975, Energy or Extinction? The case for nuclear energy, 1977, Heinemann Educational Books Limited. In this provocative book Hoyle establishes the dependence of Western civilization on energy consumption and predicts that nuclear fission as a source of energy is essential for its survival. Ten Faces of the Universe, 1977, W.H. 
Freeman and Company (San Francisco), On Stonehenge, 1977, London: Heinemann Educational; San Francisco: W.H. Freeman and Company, pbk. Lifecloud – The Origin of Life in the Universe, Hoyle, F. and Wickramasinghe C., J.M. Dent & Sons, 1978. Diseases from Space (with Chandra Wickramasinghe) (J.M. Dent, London, 1979) Commonsense in Nuclear Energy, Fred Hoyle and Geoffrey Hoyle, 1980, Heinemann Educational Books Ltd., The big bang in astronomy, New Scientist 92(1280):527, 19 November 1981. Ice, the Ultimate Human Catastrophe, 1981, The Intelligent Universe, 1983 From Grains to Bacteria, Hoyle, F. and Wickramasinghe N. C., University College Cardiff Press, 1984 Evolution from space (the Omni lecture) and other papers on the origin of life, 1982, Evolution from Space: A Theory of Cosmic Creationism, 1984, Viruses from Space, 1986, With Jayant Narlikar and Chandra Wickramasinghe, The extragalactic universe: an alternative view, Nature 346:807–812, 30 August 1990. The Origin of the Universe and the Origin of Religion, 1993, Home Is Where the Wind Blows: Chapters from a Cosmologist's Life (autobiography) Oxford University Press 1994, Mathematics of Evolution, (1987) University College Cardiff Press, (1999) Acorn Enterprises LLC., With G. Burbidge and J. V. Narlikar, A Different Approach to Cosmology, Cambridge University Press, 2000, Science fiction Hoyle also wrote science fiction. In his first novel, The Black Cloud, most intelligent life in the universe takes the form of interstellar gas clouds; they are surprised to learn that intelligent life can also form on planets. He wrote a television series, A for Andromeda, which was also published as a novel. His play Rockets in Ursa Major had a professional production at the Mermaid Theatre in 1962. The Black Cloud, 1957 Ossian's Ride, 1959 A for Andromeda, 1962 (co-authored with John Elliot) Fifth Planet, 1963 (co-authored with Geoffrey Hoyle) Andromeda Breakthrough, 1965 (co-authored with John Elliot) October the First Is Too Late, 1966 Element 79 (collection of short stories), 1967 Rockets in Ursa Major, 1969 (co-authored with Geoffrey Hoyle) Seven Steps to the Sun, 1970 (co-authored with Geoffrey Hoyle) The Inferno, 1973 (co-authored with Geoffrey Hoyle) The Molecule Men and the Monster of Loch Ness, 1973 (co-authored with Geoffrey Hoyle) Into Deepest Space, 1974 (co-authored with Geoffrey Hoyle) The Incandescent Ones, 1977 (co-authored with Geoffrey Hoyle) The Westminster Disaster, 1978 (co-authored with Geoffrey Hoyle and Edited by Barbara Hoyle) The Frozen Planet of Azuron, 1982 (Ladybird Books, co-authored with Geoffrey Hoyle) The Energy Pirate, 1982 (Ladybird Books, co-authored with Geoffrey Hoyle) The Planet of Death, 1982 (Ladybird Books, co-authored with Geoffrey Hoyle) The Giants of Universal Park, 1982 (Ladybird Books, co-authored with Geoffrey Hoyle) Comet Halley, 1985 Most of these are independent of each other. Andromeda Breakthrough is a sequel to A for Andromeda and Into Deepest Space is a sequel to Rockets in Ursa Major. The four Ladybird Books are intended for children. Some stories of the collection Element 79 are fantasy, in particular "Welcome to Slippage City" and "The Judgement of Aphrodite". Both introduce mythological characters. The Telegraph (UK) called him a "masterful" science fiction writer. References Further reading Alan P. Lightman and Roberta Brawer, Origins: The Lives and Worlds of Modern Cosmologists, Harvard University Press, 1990. 
A collection of interviews, mostly with the generation (or two) of cosmologists after Hoyle, but also including an interview with Hoyle himself. Several interviewees testify to Hoyle's influence in popularizing astronomy and cosmology. Dennis Overbye, Lonely Hearts of the Cosmos: The Scientific Quest for the Secret of the Universe, HarperCollins, 1991; 2nd ed. (with new afterword), Back Bay, 1999. Gives a biographical account of modern cosmology in a novel-like fashion. Complementary to Origins. Simon Mitton, Fred Hoyle: A Life in Science, Cambridge University Press, 2011. Douglas Gough, editor, The Scientific Legacy of Fred Hoyle, Cambridge University Press, 2005. Chandra Wickramasinghe, A Journey with Fred Hoyle, World Scientific Publishing, 2005; 2nd ed., 2013. Jane Gregory, Fred Hoyle's Universe, Oxford University Press, 2005. External links Fred Hoyle Website Fred Hoyle and Chandra Wickramasinghe Website Obituary by Sir Martin Rees in Physics Today Obituary by Bernard Lovell in The Guardian Fred Hoyle: An Online Exhibition An Interview with Fred Hoyle, 5 July 1996 Fred Hoyle at the Notable Names Database 1915 births 2001 deaths 20th-century atheists 20th-century English astronomers 20th-century English novelists 20th-century English male writers Alumni of Emmanuel College, Cambridge British atheists British cosmologists English agnostics English male novelists English science fiction writers Fellows of St John's College, Cambridge Fellows of the Royal Society Foreign associates of the National Academy of Sciences Kalinga Prize recipients Knights Bachelor Members of the American Philosophical Society Panspermia People educated at Bingley Grammar School People from Bingley Plumian Professors of Astronomy and Experimental Philosophy Presidents of the Royal Astronomical Society Recipients of the Gold Medal of the Royal Astronomical Society Royal Medal winners Television show creators
Fred Hoyle
[ "Biology" ]
6,356
[ "Biological hypotheses", "Origin of life", "Panspermia" ]
11,004
https://en.wikipedia.org/wiki/Fundamental%20group
In the mathematical field of algebraic topology, the fundamental group of a topological space is the group of the equivalence classes under homotopy of the loops contained in the space. It records information about the basic shape, or holes, of the topological space. The fundamental group is the first and simplest homotopy group. The fundamental group is a homotopy invariant: topological spaces that are homotopy equivalent (or the stronger case of homeomorphic) have isomorphic fundamental groups. The fundamental group of a topological space X is denoted by π1(X). Intuition Start with a space (for example, a surface), and some point in it, and all the loops both starting and ending at this point: paths that start at this point, wander around and eventually return to the starting point. Two loops can be combined in an obvious way: travel along the first loop, then along the second. Two loops are considered equivalent if one can be deformed into the other without breaking. The set of all such loops with this method of combining and this equivalence between them is the fundamental group for that particular space. History Henri Poincaré defined the fundamental group in 1895 in his paper "Analysis situs". The concept emerged in the theory of Riemann surfaces, in the work of Bernhard Riemann, Poincaré, and Felix Klein. It describes the monodromy properties of complex-valued functions, as well as providing a complete topological classification of closed surfaces. Definition Throughout this article, X is a topological space. A typical example is a surface such as the one depicted at the right. Moreover, x0 is a point in X called the base-point. (As is explained below, its role is rather auxiliary.) The idea of the definition of the homotopy group is to measure how many (broadly speaking) curves on X can be deformed into each other. The precise definition depends on the notion of the homotopy of loops, which is explained first. Homotopy of loops Given a topological space X, a loop based at x0 is defined to be a continuous function (also known as a continuous map) γ : [0, 1] → X such that the starting point γ(0) and the end point γ(1) are both equal to x0. A homotopy is a continuous interpolation between two loops. More precisely, a homotopy between two loops γ0 and γ1 (based at the same point x0) is a continuous map h : [0, 1] × [0, 1] → X such that: h(0, t) = x0 for all t, that is, the starting point of the homotopy is x0 for all t (which is often thought of as a time parameter); h(1, t) = x0 for all t, that is, similarly the end point stays at x0 for all t; h(r, 0) = γ0(r) and h(r, 1) = γ1(r) for all r. If such a homotopy h exists, γ0 and γ1 are said to be homotopic. The relation "γ0 is homotopic to γ1" is an equivalence relation, so that the set of equivalence classes can be considered: π1(X, x0) := {loops based at x0} / homotopy. This set (with the group structure described below) is called the fundamental group of the topological space X at the base point x0. The purpose of considering the equivalence classes of loops up to homotopy, as opposed to the set of all loops (the so-called loop space of X) is that the latter, while being useful for various purposes, is a rather big and unwieldy object. By contrast the above quotient is, in many cases, more manageable and computable. Group structure By the above definition, π1(X, x0) is just a set. It becomes a group (and therefore deserves the name fundamental group) using the concatenation of loops. More precisely, given two loops γ0 and γ1, their product γ0 · γ1 is defined as the loop that traverses γ0(2t) for t between 0 and 1/2, and γ1(2t − 1) for t between 1/2 and 1. Thus the loop γ0 · γ1 first follows the loop γ0 with "twice the speed" and then follows γ1 with "twice the speed". The product of two homotopy classes of loops [γ0] and [γ1] is then defined as [γ0 · γ1].
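In display form, the homotopy conditions and the concatenation product just defined read as follows (a standard formulation, with γ0 and γ1 loops based at x0):

```latex
h \colon [0,1] \times [0,1] \to X, \qquad
h(r,0) = \gamma_0(r), \quad h(r,1) = \gamma_1(r), \quad h(0,t) = h(1,t) = x_0 ,

(\gamma_0 \cdot \gamma_1)(t) =
\begin{cases}
\gamma_0(2t)   & 0 \le t \le \tfrac{1}{2}, \\
\gamma_1(2t-1) & \tfrac{1}{2} \le t \le 1 .
\end{cases}
```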
It can be shown that this product does not depend on the choice of representatives and therefore gives a well-defined operation on the set π1(X, x0). This operation turns π1(X, x0) into a group. Its neutral element is the constant loop, which stays at x0 for all times t. The inverse of a (homotopy class of a) loop is the same loop, but traversed in the opposite direction. More formally, γ−1(t) := γ(1 − t). Given three based loops γ0, γ1, γ2, the product (γ0 · γ1) · γ2 is the concatenation of these loops, traversing γ0 and then γ1 with quadruple speed, and then γ2 with double speed. By comparison, γ0 · (γ1 · γ2) traverses the same paths (in the same order), but with γ0 at double speed, and γ1 and γ2 with quadruple speed. Thus, because of the differing speeds, the two paths are not identical. The associativity axiom therefore crucially depends on the fact that paths are considered up to homotopy. Indeed, both above composites are homotopic, for example, to the loop that traverses all three loops with triple speed. The set of based loops up to homotopy, equipped with the above operation, therefore does turn π1(X, x0) into a group. Dependence of the base point Although the fundamental group in general depends on the choice of base point, it turns out that, up to isomorphism (actually, even up to inner isomorphism), this choice makes no difference as long as the space X is path-connected. For path-connected spaces, therefore, many authors write π1(X) instead of π1(X, x0). Concrete examples This section lists some basic examples of fundamental groups. To begin with, in Euclidean space Rn, or any convex subset of Rn, there is only one homotopy class of loops, and the fundamental group is therefore the trivial group with one element. More generally, any star domain – and yet more generally, any contractible space – has a trivial fundamental group. Thus, the fundamental group does not distinguish between such spaces. The 2-sphere A path-connected space whose fundamental group is trivial is called simply connected. For example, the 2-sphere depicted on the right, and also all the higher-dimensional spheres, are simply-connected. The figure illustrates a homotopy contracting one particular loop to the constant loop. This idea can be adapted to all loops γ such that there is a point of the sphere that is not in the image of γ. However, since there are loops whose image is all of the sphere (constructed from the Peano curve, for example), a complete proof requires more careful analysis with tools from algebraic topology, such as the Seifert–van Kampen theorem or the cellular approximation theorem. The circle The circle (also known as the 1-sphere) is not simply connected. Instead, each homotopy class consists of all loops that wind around the circle a given number of times (which can be positive or negative, depending on the direction of winding). The product of a loop that winds around m times and another that winds around n times is a loop that winds around m + n times. Therefore, the fundamental group of the circle is isomorphic to the additive group of integers, Z. This fact can be used to give proofs of the Brouwer fixed point theorem and the Borsuk–Ulam theorem in dimension 2. The figure eight The fundamental group of the figure eight is the free group on two letters. The idea to prove this is as follows: choosing the base point to be the point where the two circles meet (dotted in black in the picture at the right), any loop can be decomposed as a^(m1) b^(n1) ··· a^(mk) b^(nk), where a and b are the two loops winding around each half of the figure as depicted, and the exponents m1, n1, ..., mk, nk are integers.
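In display form, this decomposition and the resulting identification with a free group read:

```latex
\gamma \;\simeq\; a^{m_1} b^{n_1} a^{m_2} b^{n_2} \cdots a^{m_k} b^{n_k},
\qquad m_i, n_i \in \mathbb{Z},
\qquad\text{so that}\qquad
\pi_1(S^1 \vee S^1) \;\cong\; F_2 = \langle a, b \rangle .
```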
Unlike π1(S1), the fundamental group of the figure eight is not abelian: the two ways of composing a and b are not homotopic to each other: [a · b] ≠ [b · a]. More generally, the fundamental group of a bouquet of r circles is the free group on r letters. The fundamental group of a wedge sum of two path connected spaces X and Y can be computed as the free product of the individual fundamental groups: π1(X ∨ Y) ≅ π1(X) ∗ π1(Y). This generalizes the above observations since the figure eight is the wedge sum of two circles. The fundamental group of the plane punctured at n points is also the free group with n generators. The i-th generator is the class of the loop that goes around the i-th puncture without going around any other punctures. Graphs The fundamental group can be defined for discrete structures too. In particular, consider a connected graph G = (V, E), with a designated vertex v0 in V. The loops in G are the cycles that start and end at v0. Let T be a spanning tree of G. Every simple loop in G contains exactly one edge in E \ T; every loop in G is a concatenation of such simple loops. Therefore, the fundamental group of a graph is a free group, in which the number of generators is exactly the number of edges in E \ T. This number equals |E| − |V| + 1. For example, suppose G has 16 vertices arranged in 4 rows of 4 vertices each, with edges connecting vertices that are adjacent horizontally or vertically. Then G has 24 edges overall, and the number of edges in each spanning tree is 15, so the fundamental group of G is the free group with 9 generators. Note that G has 9 "holes", similarly to a bouquet of 9 circles, which has the same fundamental group. Knot groups Knot groups are by definition the fundamental group of the complement of a knot K embedded in R3. For example, the knot group of the trefoil knot is known to be the braid group B3, which gives another example of a non-abelian fundamental group. The Wirtinger presentation explicitly describes knot groups in terms of generators and relations based on a diagram of the knot. Therefore, knot groups have some usage in knot theory to distinguish between knots: if the knot group of a knot K is not isomorphic to the knot group of another knot K′, then K can not be transformed into K′. Thus the trefoil knot can not be continuously transformed into the circle (also known as the unknot), since the latter has knot group Z. There are, however, knots that can not be deformed into each other, but have isomorphic knot groups. Oriented surfaces The fundamental group of a genus-n orientable surface can be computed in terms of generators and relations as the group with generators A1, B1, ..., An, Bn and the single relation [A1, B1][A2, B2] ··· [An, Bn] = 1, where [Ai, Bi] denotes the commutator. This includes the torus, being the case of genus 1, whose fundamental group is Z2. Topological groups The fundamental group of a topological group X (with respect to the base point being the neutral element) is always commutative. In particular, the fundamental group of a Lie group is commutative. In fact, the group structure on X endows π1(X) with another group structure: given two loops γ0 and γ1 in X, another loop γ0 ⋆ γ1 can be defined by using the group multiplication in X: (γ0 ⋆ γ1)(t) = γ0(t) · γ1(t). This binary operation ⋆ on the set of all loops is a priori independent from the one described above. However, the Eckmann–Hilton argument shows that it does in fact agree with the above concatenation of loops, and moreover that the resulting group structure is abelian. An inspection of the proof shows that, more generally, π1(X) is abelian for any H-space X, i.e., the multiplication need not have an inverse, nor does it have to be associative. For example, this shows that the fundamental group of a loop space of another topological space Y is abelian.
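The key step of the Eckmann–Hilton argument is the interchange law relating the two products (valid up to homotopy), which forces them to coincide and to be commutative:

```latex
(\gamma_0 \cdot \gamma_1) \star (\gamma_2 \cdot \gamma_3)
  \;=\; (\gamma_0 \star \gamma_2) \cdot (\gamma_1 \star \gamma_3),
\qquad\text{hence}\qquad
[\gamma] \cdot [\delta] \;=\; [\gamma] \star [\delta] \;=\; [\delta] \cdot [\gamma] .
```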
Related ideas lead to Heinz Hopf's computation of the cohomology of a Lie group. Functoriality If f : X → Y is a continuous map, x0 ∈ X and y0 ∈ Y with f(x0) = y0, then every loop in X with base point x0 can be composed with f to yield a loop in Y with base point y0. This operation is compatible with the homotopy equivalence relation and with composition of loops. The resulting group homomorphism, called the induced homomorphism, is written as π1(f) or, more commonly, f∗ : π1(X, x0) → π1(Y, y0). This mapping from continuous maps to group homomorphisms is compatible with composition of maps and identity morphisms. In the parlance of category theory, the formation of associating to a topological space its fundamental group is therefore a functor from the category of topological spaces together with a base point to the category of groups. It turns out that this functor does not distinguish maps that are homotopic relative to the base point: if f, g : X → Y are continuous maps with f(x0) = g(x0) = y0, and f and g are homotopic relative to {x0}, then f∗ = g∗. As a consequence, two homotopy equivalent path-connected spaces have isomorphic fundamental groups. For example, the inclusion of the circle in the punctured plane is a homotopy equivalence and therefore yields an isomorphism of their fundamental groups. The fundamental group functor takes products to products and coproducts to coproducts. That is, if X and Y are path connected, then π1(X × Y) ≅ π1(X) × π1(Y), and if they are also locally contractible, then π1(X ∨ Y) ≅ π1(X) ∗ π1(Y). (In the latter formula, ∨ denotes the wedge sum of pointed topological spaces, and ∗ the free product of groups.) The latter formula is a special case of the Seifert–van Kampen theorem, which states that the fundamental group functor takes pushouts along inclusions to pushouts. Abstract results As was mentioned above, computing the fundamental group of even relatively simple topological spaces tends to be not entirely trivial, but requires some methods of algebraic topology. Relationship to first homology group The abelianization of the fundamental group can be identified with the first homology group of the space. A special case of the Hurewicz theorem asserts that the first singular homology group is, colloquially speaking, the closest approximation to the fundamental group by means of an abelian group. In more detail, mapping the homotopy class of each loop to the homology class of the loop gives a group homomorphism from the fundamental group of a topological space X to its first singular homology group H1(X). This homomorphism is not in general an isomorphism since the fundamental group may be non-abelian, but the homology group is, by definition, always abelian. This difference is, however, the only one: if X is path-connected, this homomorphism is surjective and its kernel is the commutator subgroup of the fundamental group, so that H1(X) is isomorphic to the abelianization of the fundamental group. Gluing topological spaces Generalizing the statement above, for a family of path connected spaces Xi, the fundamental group of their wedge sum is the free product of the fundamental groups of the Xi. This fact is a special case of the Seifert–van Kampen theorem, which allows one to compute, more generally, fundamental groups of spaces that are glued together from other spaces. For example, the 2-sphere can be obtained by gluing two copies of slightly overlapping half-spheres along a neighborhood of the equator. In this case the theorem yields that π1(S2) is trivial, since the two half-spheres are contractible and therefore have trivial fundamental group. The fundamental groups of surfaces, as mentioned above, can also be computed using this theorem.
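The two isomorphisms underlying this section can be displayed side by side: the Hurewicz abelianization, and the Seifert–van Kampen amalgamated product for X = U ∪ V with U, V open, U ∩ V path-connected, and the base point chosen in U ∩ V:

```latex
H_1(X) \;\cong\; \pi_1(X)^{\mathrm{ab}} = \pi_1(X)/[\pi_1(X), \pi_1(X)],
\qquad\text{e.g. } H_1(S^1 \vee S^1) \cong F_2^{\mathrm{ab}} \cong \mathbb{Z}^2 ,

\pi_1(X) \;\cong\; \pi_1(U) \ast_{\pi_1(U \cap V)} \pi_1(V),
\qquad\text{e.g. } \pi_1(S^2) \cong 1 \ast_{\pi_1(S^1)} 1 = 1 .
```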
In the parlance of category theory, the theorem can be concisely stated by saying that the fundamental group functor takes pushouts (in the category of topological spaces) along inclusions to pushouts (in the category of groups). Coverings Given a topological space B, a continuous map f : E → B is called a covering, or E is called a covering space of B, if every point b in B admits an open neighborhood U such that there is a homeomorphism between the preimage of U and a disjoint union of copies of U (indexed by some set I), in such a way that f restricts on each copy to the standard projection onto U. Universal covering A covering is called a universal covering if E is, in addition to the preceding condition, simply connected. It is universal in the sense that all other coverings can be constructed by suitably identifying points in E. Knowing a universal covering p : X̃ → X of a topological space X is helpful in understanding its fundamental group in several ways: first, π1(X) identifies with the group of deck transformations, i.e., the group of homeomorphisms of X̃ that commute with the map to X, i.e., those φ with p ∘ φ = p. Another relation to the fundamental group is that π1(X, x) can be identified with the fiber p−1(x). For example, the map p : R → S1, t ↦ (cos(2πt), sin(2πt)) (or, equivalently, t ↦ exp(2πit)) is a universal covering. The deck transformations are the maps t ↦ t + n for n ∈ Z. This is in line with the identification of π1(S1) with the fiber p−1(1) = Z; in particular this proves the above claim π1(S1) ≅ Z. Any path connected, locally path connected and locally simply connected topological space X admits a universal covering. An abstract construction proceeds analogously to the fundamental group by taking pairs (x, γ), where x is a point in X and γ is a homotopy class of paths from x0 to x. The passage from a topological space to its universal covering can be used in understanding the geometry of X. For example, the uniformization theorem shows that any simply connected Riemann surface is (isomorphic to) either the Riemann sphere, the complex plane, or the upper half plane. General Riemann surfaces then arise as quotients of group actions on these three surfaces. The quotient of a free action of a discrete group G on a simply connected space Y has fundamental group isomorphic to G. As an example, the real n-dimensional real projective space RPn is obtained as the quotient of the n-dimensional unit sphere Sn by the antipodal action of the group Z/2 sending x to −x. As Sn is simply connected for n ≥ 2, it is a universal cover of RPn in these cases, which implies π1(RPn) ≅ Z/2 for n ≥ 2. Lie groups Let G be a connected, simply connected compact Lie group, for example, the special unitary group SU(n), and let Γ be a finite subgroup of G. Then the homogeneous space X = G/Γ has fundamental group Γ, which acts by right multiplication on the universal covering space G. Among the many variants of this construction, one of the most important is given by locally symmetric spaces X = Γ\G/K, where G is a non-compact simply connected, connected Lie group (often semisimple), K is a maximal compact subgroup of G, and Γ is a discrete countable torsion-free subgroup of G. In this case the fundamental group is Γ and the universal covering space G/K is actually contractible (by the Cartan decomposition for Lie groups). As an example take G = SL(2, R), K = SO(2) and Γ any torsion-free congruence subgroup of the modular group SL(2, Z). From the explicit realization, it also follows that the universal covering space of a path connected topological group H is again a path connected topological group G.
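The circle illustrates this last fact: its universal covering map is itself a homomorphism of topological groups, with kernel the (central) subgroup of integers:

```latex
p \colon (\mathbb{R}, +) \to (S^1, \cdot), \qquad p(t) = e^{2\pi i t},
\qquad \ker p = \mathbb{Z} \;\cong\; \pi_1(S^1) .
```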
Moreover, the covering map is a continuous open homomorphism of G onto H with kernel Γ, a closed discrete normal subgroup of G. Since G is a connected group with a continuous action by conjugation on a discrete group Γ, it must act trivially, so that Γ has to be a subgroup of the center of G. In particular π1(H) = Γ is an abelian group; this can also easily be seen directly without using covering spaces. The group G is called the universal covering group of H. As the universal covering group suggests, there is an analogy between the fundamental group of a topological group and the center of a group; this is elaborated at Lattice of covering groups. Fibrations Fibrations provide a very powerful means to compute homotopy groups. A fibration f : E → B, between the so-called total space E and the base space B, has, in particular, the property that all its fibers f−1(b) are homotopy equivalent and therefore can not be distinguished using fundamental groups (and higher homotopy groups), provided that B is path-connected. Therefore, the space E can be regarded as a "twisted product" of the base space B and the fiber F = f−1(b). The great importance of fibrations to the computation of homotopy groups stems from a long exact sequence π2(B) → π1(F) → π1(E) → π1(B) → π0(F) → π0(E), provided that B is path-connected. The term π2(B) is the second homotopy group of B, which is defined to be the set of homotopy classes of maps from S2 to B, in direct analogy with the definition of π1. If E happens to be path-connected and simply connected, this sequence reduces to an isomorphism π1(B) ≅ π0(F), which generalizes the above fact about the universal covering (which amounts to the case where the fiber F is also discrete). If instead F happens to be connected and simply connected, it reduces to an isomorphism π1(E) ≅ π1(B). What is more, the sequence can be continued at the left with the higher homotopy groups of the three spaces, which gives some access to computing such groups in the same vein. Classical Lie groups Such fiber sequences can be used to inductively compute fundamental groups of compact classical Lie groups such as the special unitary group SU(n), with n ≥ 2. This group acts transitively on the unit sphere S2n−1 inside Cn. The stabilizer of a point in the sphere is isomorphic to SU(n − 1). It then can be shown that this yields a fiber sequence SU(n − 1) → SU(n) → S2n−1. Since n ≥ 2, the sphere S2n−1 has dimension at least 3, which implies that its first and second homotopy groups vanish. The long exact sequence then shows an isomorphism π1(SU(n)) ≅ π1(SU(n − 1)). Since SU(1) is a single point, so that π1(SU(1)) is trivial, this shows that SU(n) is simply connected for all n. The fundamental group of noncompact Lie groups can be reduced to the compact case, since such a group is homotopic to its maximal compact subgroup. These methods give the following results: A second method of computing fundamental groups applies to all connected compact Lie groups and uses the machinery of the maximal torus and the associated root system. Specifically, let T be a maximal torus in a connected compact Lie group K, and let 𝔱 = Lie(T) be the Lie algebra of T. The exponential map exp : 𝔱 → T is a fibration and therefore its kernel, a lattice Γ, identifies with π1(T). The map π1(T) → π1(K) can be shown to be surjective with kernel given by the set I of integer linear combinations of coroots. This leads to the computation π1(K) ≅ Γ/I. This method shows, for example, that any connected compact Lie group for which the associated root system is of type G2 is simply connected. Thus, there is (up to isomorphism) only one connected compact Lie group having Lie algebra of type G2; this group is simply connected and has trivial center.
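For the inductive computation just described, the relevant segment of the long exact sequence of the fiber sequence SU(n − 1) → SU(n) → S2n−1 is:

```latex
\underbrace{\pi_2(S^{2n-1})}_{=\,1} \longrightarrow \pi_1(SU(n-1)) \longrightarrow \pi_1(SU(n)) \longrightarrow \underbrace{\pi_1(S^{2n-1})}_{=\,1}
\qquad (n \ge 2),

\text{hence}\quad \pi_1(SU(n)) \;\cong\; \pi_1(SU(n-1)) \;\cong\; \cdots \;\cong\; \pi_1(SU(1)) = 1 .
```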
Edge-path group of a simplicial complex When the topological space is homeomorphic to a simplicial complex, its fundamental group can be described explicitly in terms of generators and relations. If X is a connected simplicial complex, an edge-path in X is defined to be a chain of vertices connected by edges in X. Two edge-paths are said to be edge-equivalent if one can be obtained from the other by successively switching between an edge and the two opposite edges of a triangle in X. If v is a fixed vertex in X, an edge-loop at v is an edge-path starting and ending at v. The edge-path group E(X, v) is defined to be the set of edge-equivalence classes of edge-loops at v, with product and inverse defined by concatenation and reversal of edge-loops. The edge-path group is naturally isomorphic to π1(|X|, v), the fundamental group of the geometric realisation |X| of X. Since it depends only on the 2-skeleton X2 of X (that is, the vertices, edges, and triangles of X), the groups π1(|X|, v) and π1(|X2|, v) are isomorphic. The edge-path group can be described explicitly in terms of generators and relations. If T is a maximal spanning tree in the 1-skeleton of X, then E(X, v) is canonically isomorphic to the group with generators (the oriented edge-paths of X not occurring in T) and relations (the edge-equivalences corresponding to triangles in X). A similar result holds if T is replaced by any simply connected, in particular contractible, subcomplex of X. This often gives a practical way of computing fundamental groups and can be used to show that every finitely presented group arises as the fundamental group of a finite simplicial complex. It is also one of the classical methods used for topological surfaces, which are classified by their fundamental groups. The universal covering space of a finite connected simplicial complex X can also be described directly as a simplicial complex using edge-paths. Its vertices are pairs (w, γ) where w is a vertex of X and γ is an edge-equivalence class of paths from v to w. The k-simplices containing (w, γ) correspond naturally to the k-simplices containing w. Each new vertex u of the k-simplex gives an edge wu and hence, by concatenation, a new path γu from v to u. The points (w, γ) and (u, γu) are the vertices of the "transported" simplex in the universal covering space. The edge-path group acts naturally by concatenation, preserving the simplicial structure, and the quotient space is just X. It is well known that this method can also be used to compute the fundamental group of an arbitrary topological space. This was doubtless known to Eduard Čech and Jean Leray and explicitly appeared as a remark in a paper by André Weil; various other authors such as Lorenzo Calabi, Wu Wen-tsün, and Nodar Berikashvili have also published proofs. In the simplest case of a compact space X with a finite open covering in which all non-empty finite intersections of open sets in the covering are contractible, the fundamental group can be identified with the edge-path group of the simplicial complex corresponding to the nerve of the covering. Realizability Every group can be realized as the fundamental group of a connected CW-complex of dimension 2 (or higher). As noted above, though, only free groups can occur as fundamental groups of 1-dimensional CW-complexes (that is, graphs). Every finitely presented group can be realized as the fundamental group of a compact, connected, smooth manifold of dimension 4 (or higher).
But there are severe restrictions on which groups occur as fundamental groups of low-dimensional manifolds. For example, no free abelian group of rank 4 or higher can be realized as the fundamental group of a manifold of dimension 3 or less. It can be proved that every group can be realized as the fundamental group of a compact Hausdorff space if and only if there is no measurable cardinal. Related concepts Higher homotopy groups Roughly speaking, the fundamental group detects the 1-dimensional hole structure of a space, but not higher-dimensional holes such as that of the 2-sphere. Such "higher-dimensional holes" can be detected using the higher homotopy groups πn(X), which are defined to consist of homotopy classes of (basepoint-preserving) maps from Sn to X. For example, the Hurewicz theorem implies that for all n ≥ 1 the n-th homotopy group of the n-sphere is πn(Sn) ≅ Z. As was mentioned in the above computation of π1 of classical Lie groups, higher homotopy groups can be relevant even for computing fundamental groups. Loop space The set of based loops (as is, i.e. not taken up to homotopy) in a pointed space X, endowed with the compact open topology, is known as the loop space, denoted ΩX. The fundamental group of X is in bijection with the set of path components of its loop space: π1(X) ≅ π0(ΩX). Fundamental groupoid The fundamental groupoid is a variant of the fundamental group that is useful in situations where the choice of a base point is undesirable. It is defined by first considering the category of paths in X, i.e., continuous functions γ : [0, r] → X, where r is an arbitrary non-negative real number. Since the length r is variable in this approach, such paths can be concatenated as is (i.e., not up to homotopy) and therefore yield a category. Two such paths with the same endpoints and lengths r and r′, respectively, are considered equivalent if they become homotopic relative to their end points after being prolonged to a common length (by waiting at the end point); other treatments use a different definition, reparametrizing the paths to length 1. The category of paths up to this equivalence relation is denoted Π(X). Each morphism in Π(X) is an isomorphism, with inverse given by the same path traversed in the opposite direction. Such a category is called a groupoid. It reproduces the fundamental group, since π1(X, x0) is the group of automorphisms of the object x0 in Π(X). More generally, one can consider the fundamental groupoid on a set A of base points, chosen according to the geometry of the situation; for example, in the case of the circle, which can be represented as the union of two connected open sets whose intersection has two components, one can choose one base point in each component. The van Kampen theorem admits a version for fundamental groupoids which gives, for example, another way to compute the fundamental group(oid) of the circle. Local systems Generally speaking, representations may serve to exhibit features of a group by its actions on other mathematical objects, often vector spaces. Representations of the fundamental group have a very geometric significance: any local system (i.e., a sheaf F on X with the property that locally, in a sufficiently small neighborhood U of any point on X, the restriction of F is a constant sheaf) gives rise to the so-called monodromy representation, a representation of the fundamental group on an n-dimensional vector space. Conversely, any such representation on a path-connected space X arises in this manner. This equivalence of categories between representations of the fundamental group and local systems is used, for example, in the study of differential equations, such as the Knizhnik–Zamolodchikov equations.
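Schematically, the correspondence just described can be written as an equivalence (a standard formulation, for X path-connected and locally nice, and k a field):

```latex
\left\{ \text{local systems of } k\text{-vector spaces of rank } n \text{ on } X \right\}
\;\simeq\;
\left\{ \text{representations } \rho \colon \pi_1(X, x_0) \to \mathrm{GL}_n(k) \right\} .
```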
Étale fundamental group In algebraic geometry, the so-called étale fundamental group is used as a replacement for the fundamental group. Since the Zariski topology on an algebraic variety or scheme X is much coarser than, say, the topology of open subsets in Rn, it is no longer meaningful to consider continuous maps from an interval to X. Instead, the approach developed by Grothendieck consists in constructing the étale fundamental group by considering all finite étale covers of X. These serve as an algebro-geometric analogue of coverings with finite fibers. This yields a theory applicable in situations where no classical topological intuition whatsoever is available, for example for varieties defined over a finite field. Also, the étale fundamental group of a field is its (absolute) Galois group. On the other hand, for smooth varieties X over the complex numbers, the étale fundamental group retains much of the information inherent in the classical fundamental group: the former is the profinite completion of the latter. Fundamental group of algebraic groups The fundamental group of a root system is defined in analogy to the computation for Lie groups. This allows one to define and use the fundamental group of a semisimple linear algebraic group G, which is a useful basic tool in the classification of linear algebraic groups. Fundamental group of simplicial sets The homotopy relation between 1-simplices of a simplicial set X is an equivalence relation if X is a Kan complex, but not necessarily so in general. Thus, the fundamental group of a Kan complex can be defined as the set of homotopy classes of 1-simplices. The fundamental group of an arbitrary simplicial set X is defined to be the homotopy group of its topological realization, i.e., the topological space obtained by gluing topological simplices as prescribed by the simplicial set structure of X. See also Orbifold fundamental group Fundamental group scheme Notes References Peter Hilton and Shaun Wylie, Homology Theory, Cambridge University Press (1967) [warning: these authors use contrahomology for cohomology] Deane Montgomery and Leo Zippin, Topological Transformation Groups, Interscience Publishers (1955) External links Dylan G.L. Allegretti, Simplicial Sets and van Kampen's Theorem: A discussion of the fundamental groupoid of a topological space and the fundamental groupoid of a simplicial set Animations to introduce fundamental group by Nicolas Delanoue Sets of base points and fundamental groupoids: mathoverflow discussion Groupoids in Mathematics Algebraic topology Homotopy theory
Fundamental group
[ "Mathematics" ]
6,481
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
11,024
https://en.wikipedia.org/wiki/Formant
In speech science and phonetics, a formant is the broad spectral maximum that results from an acoustic resonance of the human vocal tract. In acoustics, a formant is usually defined as a broad peak, or local maximum, in the spectrum. For harmonic sounds, with this definition, the formant frequency is sometimes taken as that of the harmonic that is most augmented by a resonance. The difference between these two definitions resides in whether "formants" characterise the production mechanisms of a sound or the produced sound itself. In practice, the frequency of a spectral peak differs slightly from the associated resonance frequency, except when, by luck, harmonics are aligned with the resonance frequency, or when the sound source is mostly non-harmonic, as in whispering and vocal fry. A room can be said to have formants characteristic of that particular room, due to its resonances, i.e., to the way sound reflects from its walls and objects. Room formants of this nature reinforce themselves by emphasizing specific frequencies and absorbing others, as exploited, for example, by Alvin Lucier in his piece I Am Sitting in a Room. In acoustic digital signal processing, the way a collection of formants (such as a room) affects a signal can be represented by an impulse response. In both speech and rooms, formants are characteristic features of the resonances of the space. They are said to be excited by acoustic sources such as the voice, and they shape (filter) the sources' sounds, but they are not sources themselves. History From an acoustic point of view, phonetics had a serious problem with the idea that the effective length of vocal tract changed vowels. Indeed, when the length of the vocal tract changes, all the acoustic resonators formed by mouth cavities are scaled, and so are their resonance frequencies. Therefore, it was unclear how vowels could depend on frequencies when talkers with different vocal tract lengths, for instance bass and soprano singers, can produce sounds that are perceived as belonging to the same phonetic category. There had to be some way to normalize the spectral information underpinning the vowel identity. Hermann suggested a solution to this problem in 1894, coining the term “formant”. A vowel, according to him, is a special acoustic phenomenon, depending on the intermittent production of a special partial, or “formant”, or “characteristique” feature. The frequency of the “formant” may vary a little without altering the character of the vowel. For “long e” (ee or iy) for example, the lowest-frequency “formant” may vary from 350 to 440 Hz even in the same person. Phonetics Formants are distinctive frequency components of the acoustic signal produced by speech, musical instruments or singing. The information that humans require to distinguish between speech sounds can be represented purely quantitatively by specifying peaks in the frequency spectrum. Most of these formants are produced by tube and chamber resonance, but a few whistle tones derive from periodic collapse of Venturi effect low-pressure zones. The formant with the lowest frequency is called F1, the second F2, the third F3, and so forth. The fundamental frequency or pitch of the voice is sometimes referred to as F0, but it is not a formant. Most often the two first formants, F1 and F2, are sufficient to identify the vowel. 
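The linear-predictive route to measuring F1 and F2, described under Formant estimation below, can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, not a standard API: the function name, the model-order rule of thumb, and the 90 Hz threshold are illustrative choices, and practical tools such as Praat add formant tracking and bandwidth checks on top of this.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def estimate_formants(frame, fs, order=None):
    """Estimate formant frequencies (Hz) from one short frame of voiced
    speech, using the autocorrelation (LPC) method: fit an all-pole model
    and read formants off the angles of its complex poles."""
    if order is None:
        order = 2 + fs // 1000                  # common rule of thumb for LPC order
    x = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])   # pre-emphasis
    x = x * np.hamming(len(x))                  # taper the frame edges
    r = np.correlate(x, x, mode="full")[len(x) - 1:]         # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])            # Yule-Walker equations
    # Poles of the all-pole vocal-tract model: roots of z^p - a1 z^(p-1) - ... - ap
    poles = np.roots(np.concatenate(([1.0], -a)))
    poles = poles[np.imag(poles) > 0]           # keep one of each conjugate pair
    freqs = np.angle(poles) * fs / (2 * np.pi)  # pole angle -> frequency in Hz
    return sorted(f for f in freqs if f > 90)   # drop near-DC poles

# The two smallest returned values approximate F1 and F2 of the frame.
```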
The relationship between the perceived vowel quality and the first two formant frequencies can be appreciated by listening to "artificial vowels" that are generated by passing a click train (to simulate the glottal pulse train) through a pair of bandpass filters (to simulate vocal tract resonances). Front vowels have higher F2, while low vowels have higher F1. Lip rounding tends to lower F1 and F2 in back vowels and F2 and F3 in front vowels. Nasal consonants usually have an additional formant around 2500 Hz. The liquid [l] usually has an extra formant at 1500 Hz, whereas the English "r" sound ([ɹ]) is distinguished by a very low third formant (well below 2000 Hz). Plosives (and, to some degree, fricatives) modify the placement of formants in the surrounding vowels. Bilabial sounds (such as [b] and [p] in "ball" or "sap") cause a lowering of the formants; on spectrograms, velar sounds ([k] and [g] in English) almost always show F2 and F3 coming together in a 'velar pinch' before the velar and separating from the same 'pinch' as the velar is released; alveolar sounds (English [t] and [d]) cause fewer systematic changes in neighbouring vowel formants, depending partially on exactly which vowel is present. The time course of these changes in vowel formant frequencies is referred to as 'formant transitions'. In normal voiced speech, the underlying vibration produced by the vocal folds resembles a sawtooth wave, rich in harmonic overtones. If the fundamental frequency or (more often) one of the overtones is higher than a resonance frequency of the system, then the resonance will be only weakly excited and the formant usually imparted by that resonance will be mostly lost. This is most apparent in the case of soprano opera singers, who sing at pitches high enough that their vowels become very hard to distinguish. Control of resonances is an essential component of the vocal technique known as overtone singing, in which the performer sings a low fundamental tone, and creates sharp resonances to select upper harmonics, giving the impression of several tones being sung at once. Spectrograms may be used to visualise formants. In spectrograms, it can be hard to distinguish formants from naturally occurring harmonics when one sings. However, one can hear the natural formants in a vowel shape through atonal techniques such as vocal fry. Formant estimation Formants, whether they are seen as acoustic resonances of the vocal tract, or as local maxima in the speech spectrum, like band-pass filters, are defined by their frequency and by their spectral width (bandwidth). Different methods exist to obtain this information. Formant frequencies, in their acoustic definition, can be estimated from the frequency spectrum of the sound, using a spectrogram (in the figure) or a spectrum analyzer. However, to estimate the acoustic resonances of the vocal tract (i.e. the speech definition of formants) from a speech recording, one can use linear predictive coding. An intermediate approach consists in extracting the spectral envelope by neutralizing the fundamental frequency, and only then looking for local maxima in the spectral envelope. Formant plots The first two formants are important in determining the quality of vowels, and are frequently said to correspond to the open/close (or low/high) and front/back dimensions (which have traditionally been associated with the shape and position of the tongue).
Thus the first formant F1 has a higher frequency for an open or low vowel such as and a lower frequency for a closed or high vowel such as or ; and the second formant F2 has a higher frequency for a front vowel such as and a lower frequency for a back vowel such as . Vowels will almost always have four or more distinguishable formants, and sometimes more than six. However, the first two formants are the most important in determining vowel quality and are often plotted against each other in vowel diagrams, though this simplification fails to capture some aspects of vowel quality such as rounding. Many writers have addressed the problem of finding an optimal alignment of the positions of vowels on formant plots with those on the conventional vowel quadrilateral. The pioneering work of Ladefoged used the Mel scale because this scale was claimed to correspond more closely to the auditory scale of pitch than to the acoustic measure of fundamental frequency expressed in Hertz. Two alternatives to the Mel scale are the Bark scale and the ERB-rate scale. Another widely adopted strategy is plotting the difference between F1 and F2 rather than F2 on the horizontal axis. Singer's formant Studies of the frequency spectrum of trained speakers and classical singers, especially male singers, indicate a clear formant around 3000 Hz (between 2800 and 3400 Hz) that is absent in speech or in the spectra of untrained speakers or singers. It is thought to be associated with one or more of the higher resonances of the vocal tract. It is this increase in energy at 3000 Hz which allows singers to be heard and understood over an orchestra. This formant is actively developed through vocal training, for instance through so-called voce di strega or "witch's voice" exercises and is caused by a part of the vocal tract acting as a resonator. In classical music and vocal pedagogy, this phenomenon is also known as squillo. See also Formant synthesis Human voice Linear predictive coding Praat Timbre Vocoder References External links Formants for fun and profit Formants and wah-wah pedals What is a formant? A discussion of the three different meanings of the word 'formant' Formant tuning by soprano singers from the University of New South Wales The acoustics of harmonic or overtone singing from the University of New South Wales Materials for measuring and plotting vowel formants Human voice Sound synthesis types Acoustics
Formant
[ "Physics" ]
1,898
[ "Classical mechanics", "Acoustics" ]
11,026
https://en.wikipedia.org/wiki/List%20of%20programmers
This is a list of programmers notable for their contributions to software, either as original author or architect, or for later additions. All entries must already have associated articles. Some persons notable as computer scientists are included here because they work in programming as well as research. A Michael Abrash – program optimization and x86 assembly language Scott Adams – series of text adventures beginning in the late 1970s Tarn Adams – Dwarf Fortress Leonard Adleman – co-created RSA algorithm (being the A in that name), coined the term computer virus Alfred Aho – co-created AWK (being the A in that name), and main author of the famous Compilers: Principles, Techniques, and Tools (the Dragon Book) Andrei Alexandrescu – author, expert on languages C++, D Paul Allen – Altair BASIC, Applesoft BASIC, cofounded Microsoft Eric Allman – sendmail, syslog Marc Andreessen – co-created Mosaic, cofounded Netscape Jeremy Ashkenas – CoffeeScript programming language and Backbone.js Bill Atkinson – QuickDraw, HyperCard Lennart Augustsson – languages (Lazy ML, Cayenne), compilers (HBC Haskell, parallel Haskell front end, early Bluespec SystemVerilog), LPMud pioneer, NetBSD device drivers B Roland Carl Backhouse – computer program construction, algorithmic problem solving, ALGOL John Backus – Fortran, BNF Lars Bak – virtual machine specialist Richard Bartle – with Roy Trubshaw, created MUD, the original multi-user dungeon Friedrich L. Bauer – stack (data structure), Sequential Formula Translation, ALGOL, software engineering, Bauer–Fike theorem Kent Beck – created Extreme Programming, cocreated JUnit Donald Becker – Linux Ethernet drivers, Beowulf clustering Brian Behlendorf – Apache HTTP Server Doug Bell – Dungeon Master series of video games Fabrice Bellard – created the FFmpeg open codec library, QEMU virtualization tools Tim Berners-Lee – invented the World Wide Web Daniel J. Bernstein – djbdns, qmail Eric Bina – cocreated the Mosaic web browser Marc Blank – cocreated Zork Joshua Bloch – core Java language designer, led the Java collections framework project Jonathan Blow – video games Braid and The Witness Susan G. Bond – cocreated ALGOL 68-R Grady Booch – cocreated the Unified Modeling Language Bert Bos – authored the Argo web browser, co-authored Cascading Style Sheets Stephen R. Bourne – cocreated ALGOL 68C, created the Bourne shell David Bradley – coder on the IBM PC project team who wrote the Control-Alt-Delete keyboard handler, embedded in all PC-compatible BIOSes Andrew Braybrook – video games Paradroid and Uridium Larry Breed – implementation of Iverson Notation (APL), co-developed APL\360, Scientific Time Sharing Corporation cofounder Jack Elton Bresenham – created Bresenham's line algorithm Dan Bricklin – cocreated VisiCalc, the first personal spreadsheet program Walter Bright – Digital Mars, first native C++ compiler, created the D programming language Sergey Brin – cofounded Google Inc.
Per Brinch Hansen (surname "Brinch Hansen") – RC 4000 multiprogramming system, operating system kernels, microkernels, monitors, concurrent programming, Concurrent Pascal, distributed computing & processes, parallel computing Richard Brodie – Microsoft Word Andries Brouwer – Hack, former maintainer of the man pages, Linux kernel hacker Danielle Bunten Berry (Dani Bunten) – M.U.L.E., a multiplayer video game, and other noted video games Rod Burstall – languages COWSEL (renamed POP-1), POP-2, NPL, Hope; ACM SIGPLAN 2009 PL Achievement Award Dries Buytaert – created Drupal C Steve Capps – cocreated Macintosh and Newton John Carmack – first-person shooters Doom, Quake Vint Cerf – TCP/IP, NCP Ward Christensen – wrote CBBS, the first bulletin board system (BBS) Edgar F. Codd – principal architect of the relational model Bram Cohen – BitTorrent protocol design and implementation Alain Colmerauer – Prolog Richard W. Conway – compilers for CORC, CUPL, and PL/C; XCELL Factory Modelling System Alan Cooper – Visual Basic Mike Cowlishaw – REXX and NetRexx, LEXX editor, image processing, decimal arithmetic packages Alan Cox – co-developed the Linux kernel Brad Cox – Objective-C Mark Crispin – created IMAP, authored UW-IMAP, one of the reference implementations of IMAP4 William Crowther – Colossal Cave Adventure Ward Cunningham – created the Wiki concept Dave Cutler – architected RSX-11M, OpenVMS, VAXELN, DEC MICA, Windows NT D Ole-Johan Dahl – cocreated Simula, object-oriented programming Ryan Dahl – created Node.js James Duncan Davidson – created Tomcat, now part of the Jakarta Project Terry A. Davis – developer of TempleOS Jeff Dean – Spanner, Bigtable, MapReduce L. Peter Deutsch – Ghostscript, assembler for the PDP-1, XDS-940 timesharing system, QED original co-author Robert Dewar – IFIP WG 2.1 member, chairperson, ALGOL 68; AdaCore cofounder, president, CEO Edsger W. Dijkstra – contributions to ALGOL, Dijkstra's algorithm, Go To Statement Considered Harmful, IFIP WG 2.1 member Matt Dillon – programmed various software including DICE and DragonFly BSD Jack Dorsey – created Twitter Martin Dougiamas – creator and lead developer of Moodle Adam Dunkels – authored the Contiki operating system, the lwIP and uIP embedded TCP/IP stacks, invented protothreads E Les Earnest – authored the finger program Alan Edelman – Edelman's Law, stochastic operator, Interactive Supercomputing, Julia (programming language) cocreator, high performance computing, numerical computing Brendan Eich – created JavaScript Larry Ellison – co-created Oracle Database, cofounded Oracle Corporation Andrey Ershov – languages ALPHA, Rapira; first Soviet time-sharing system AIST-0, electronic publishing system RUBIN, multiprocessing workstation MRAMOR, IFIP WG 2.1 member, Aesthetics and the Human Factor in Programming Marc Ewing – created Red Hat Linux F Scott Fahlman – created the smiley face emoticon :-) Dan Farmer – created the COPS and Security Administrator Tool for Analyzing Networks (SATAN) security scanners Steve Fawkner – created Warlords and Puzzle Quest Stuart Feldman – created make, authored the Fortran 77 compiler, part of the original group that created Unix David Filo – cocreated Yahoo!
Brad Fitzpatrick – created memcached, LiveJournal and OpenID Andrew Fluegelman – authored the PC-Talk communications software; considered a co-creator of shareware Mahmoud Samir Fayed – created PWCT and Ring Martin Fowler – created the dependency injection pattern of software engineering, a form of inversion of control Brian Fox – created Bash, Readline, GNU Finger G Elon Gasper – cofounded Bright Star Technology, patented realistic facial movements for in-game speech; HyperAnimator, Alphabet Blocks, etc. Bill Gates – Altair BASIC, cofounded Microsoft Nick Gerakines – author, contributor to open-source Erlang projects Jim Gettys – X Window System, HTTP/1.1, One Laptop per Child, Bufferbloat Steve Gibson – created SpinRite John Gilmore – GNU Debugger (GDB) Adele Goldberg – cocreated Smalltalk Robert Griesemer – cocreated Go Ryan C. Gordon (a.k.a. Icculus) – Lokigames, ioquake3 James Gosling – Java, Gosling Emacs, NeWS Bill Gosper – Macsyma, Lisp machine, hashlife, helped Donald Knuth on Vol. 2 of The Art of Computer Programming (Semi-numerical algorithms) Paul Graham – Yahoo! Store, On Lisp, ANSI Common Lisp John Graham-Cumming – authored POPFile, a Bayesian filter-based e-mail classifier David Gries – authored The Science of Programming, interference freedom, Member Emeritus, IFIP Working Group 2.3 on Programming Methodology Ralph Griswold – cocreated SNOBOL, created Icon (programming language) Richard Greenblatt – Lisp machine, Incompatible Timesharing System, MacHack Neil J. Gunther – authored the Pretty Damn Quick (PDQ) performance modeling program Scott Guthrie (a.k.a. ScottGu) – ASP.NET creator Jürg Gutknecht – with Niklaus Wirth: Lilith computer; Modula-2, Oberon, Zonnon programming languages; Oberon operating system Andi Gutmans – cocreated the PHP programming language Michael Guy – Phoenix, work on number theory, computer algebra, higher dimension polyhedra theory, ALGOL 68C; work with John Horton Conway H Daniel Ha – cofounder and CEO of the blog comment platform Disqus Nico Habermann – work on operating systems, software engineering, inter-process communication, process synchronization, deadlock avoidance, software verification, programming languages: ALGOL 60, BLISS, Pascal, Ada Jim Hall – started the FreeDOS project Margaret Hamilton – Director of the Software Engineering Division of the MIT Instrumentation Laboratory, which developed on-board flight software for the Apollo space program Brian Harris – machine translation research, Canada's first computer-assisted translation course, natural translation theory, community interpreting (Critical Link) Eric Hehner – predicative programming, formal methods, quote notation, ALGOL David Heinemeier Hansson – created the Ruby on Rails framework for developing web applications Rebecca Heineman – authored Bard's Tale III: Thief of Fate and Dragon Wars Gernot Heiser – operating system teaching, research, commercialising, Open Kernel Labs, OKL4, Wombat Anders Hejlsberg – Turbo Pascal, Delphi, C#, TypeScript Ted Henter – founded Henter-Joyce (now part of Freedom Scientific), created the JAWS screen reader software for blind people Andy Hertzfeld – co-created Macintosh, cofounded General Magic, cofounded Eazel D. Richard Hipp – created SQLite C. A. R.
Hoare – first implementation of quicksort, ALGOL 60 compiler, Communicating sequential processes Louis Hodes – Lisp, pattern recognition, logic programming, cancer research John Henry Holland – pioneer in what became known as genetic algorithms, developed Holland's schema theorem, learning classifier systems Allen Holub – author and public speaker, Agile Manifesto signatory Grace Hopper – Harvard Mark I computer, FLOW-MATIC, COBOL Paul Hudak – Haskell language design, textbooks on it and computer music David A. Huffman – created Huffman coding, a compression algorithm Roger Hui – created J Dave Hyatt – co-authored Mozilla Firefox P. J. Hyett – cofounded GitHub I Miguel de Icaza – GNOME project leader, initiated the Mono project Roberto Ierusalimschy – leading architect of Lua Dan Ingalls – cocreated Smalltalk and BitBlt Geir Ivarsøy – cocreated the Opera web browser Ken Iverson – APL, J Toru Iwatani – created Pac-Man J Bo Jangeborg – ZX Spectrum games Paul Jardetzky – authored the server program for the first webcam Rod Johnson – created the Spring Framework, founded SpringSource Stephen C. Johnson – yacc Lynne Jolitz – 386BSD William Jolitz – 386BSD Bill Joy – BSD, csh, vi, cofounded Sun Microsystems Robert K. Jung – created ARJ K Poul-Henning Kamp – MD5 password hash algorithm, FreeBSD GEOM and GBDE, part of UFS2, FreeBSD Jails, malloc and the Beerware license Mitch Kapor – Lotus 1-2-3, founded Lotus Development Corporation Phil Katz – created the Zip file format, authored PKZIP Ted Kaehler – contributions to Smalltalk, Squeak, HyperCard Alan Kay – Smalltalk, Dynabook, object-oriented programming, Squeak Mel Kaye – LGP-30 and RPC-4000 machine code programmer at Royal McBee in the 1950s, famed as the "Real Programmer" in the Story of Mel Stan Kelly-Bootle – Manchester Mark 1, The Devil's DP Dictionary John Kemeny – cocreated BASIC Brian Kernighan – cocreated AWK (being the K in that name), authored the ditroff text-formatting tool Gary Kildall – CP/M, MP/M, BIOS, PL/M, also known for work on data-flow analysis, binary recompilers, multitasking operating systems, graphical user interfaces, disk caching, CD-ROM file system and data structures, early multi-media technologies, founded Digital Research (DRI) Tom Knight – Incompatible Timesharing System Jim Knopf – a.k.a. Jim Button, authored the PC-File flatfile database; co-created the shareware concept Donald E. Knuth – TeX, CWEB, Metafont, The Art of Computer Programming, Concrete Mathematics Andrew R. Koenig – co-authored books on C and C++ and former Project Editor of the ISO/ANSI standards committee for C++ Gennady Korotkevich – competitive programmer, first to break the 3900 barrier on Codeforces Cornelis H. A. Koster – Report on the Algorithmic Language ALGOL 68, ALGOL 68 transput L Andre LaMothe – created XGameStation, one of the world's first video game console development kits Leslie Lamport – LaTeX Butler Lampson – QED original co-author Peter Landin – ISWIM, J operator, SECD machine, off-side rule, syntactic sugar, ALGOL, IFIP WG 2.1 member Tom Lane – main author of libjpeg, major developer of PostgreSQL Sam Lantinga – created Simple DirectMedia Layer (SDL) Dick Lathwell – codeveloped APL\360 Chris Lattner – main author of the LLVM project Samuel J. Leffler – BSD, FlexFAX, LibTIFF, FreeBSD wireless device drivers Rasmus Lerdorf – original creator of PHP Michael Lesk – Lex Gordon Letwin – architected OS/2, authored the High Performance File System (HPFS) Jochen Liedtke – microkernel operating systems Eumel, L3, L4 Charles H.
Lindsey – IFIP WG 2.1 member, Revised Report on ALGOL 68 Håkon Wium Lie – co-authored Cascading Style Sheets Mike Little – co-authored WordPress Yanhong Annie Liu – programming languages, algorithms, program design, program optimization, software systems, optimization, analysis, and transformations, intelligent systems, distributed computing, computer security, IFIP WG 2.1 member Robert Love – Linux kernel developer Ada Lovelace – first programmer (of Charles Babbage's Analytical Engine) Al Lowe – created the Leisure Suit Larry series David Luckham – Lisp, automated theorem proving, Stanford Pascal Verifier, complex event processing, Rational Software cofounder (Ada compiler) Hans Peter Luhn – hash-coding, linked lists, binary tree searching and sorting M Khaled Mardam-Bey – created mIRC (an Internet Relay Chat client) Simon Marlow – Haskell developer, book author; co-developer of the Glasgow Haskell Compiler and the Haxl remote data access library Robert C. Martin – authored Clean Code and The Clean Coder, leader of the Clean Code movement, signatory of the Agile Manifesto John Mashey – authored the PWB shell, also called the Mashey shell Yukihiro Matsumoto "Matz" – Ruby language Conor McBride – researches type theory, functional programming; cocreated Epigram (programming language) with James McKinna; member IFIP Working Group 2.1 on Algorithmic Languages and Calculi John McCarthy – Lisp, ALGOL, IFIP WG 2.1 member, artificial intelligence Craig McClanahan – original author of Jakarta Struts, architect of the Tomcat Catalina servlet container Daniel D. McCracken – professor at City College and author of Guide to Algol Programming, Guide to Cobol Programming, and Guide to Fortran Programming (1957) Scott A. McGregor – architect and development team lead of Microsoft Windows 1.0, co-authored X Window System version 11, and developed the Cedar Viewers Windows System at Xerox PARC Douglas McIlroy – macros, pipes and filters, concept of software componentry, Unix tools (spell, diff, sort, join, graph, speak, tr, etc.) Marshall Kirk McKusick – Berkeley Software Distribution (BSD), work on FFS, implemented soft updates Sid Meier – author of Civilization and Railroad Tycoon, cofounded MicroProse Bertrand Meyer – Eiffel, Object-oriented Software Construction, design by contract Bob Miner – co-created Oracle Database, cofounded Oracle Corporation Jeff Minter – psychedelic, often llama-related video games James G. Mitchell – WATFOR compiler, Mesa (programming language), Spring (operating system), ARM architecture Arvind Mithal – formal verification of large digital systems, developing dynamic dataflow architectures, parallel computing programming languages (Id, pH), compiling on parallel machines Petr Mitrichev – competitive programmer Cleve Moler – co-authored LINPACK, EISPACK, and MATLAB Lou Montulli – created the Lynx browser, cookies, the blink tag, server push and client pull, HTTP proxying, HTTP over SSL, browser integration with animated GIFs, founding member of the HTML working group at W3C Bram Moolenaar – authored the text editor Vim David A. Moon – Maclisp, ZetaLisp Charles H. Moore – created the Forth language Roger Moore – co-developed APL\360, created IPSANET, cofounded I. P.
Sharp Associates Matt Mullenweg – co-authored WordPress Boyd Munro – Australian who developed GRASP; owns SDI, one of the earliest software development companies Mike Muuss – authored ping, a network tool to detect hosts N Patrick Naughton – early Java designer, HotJava Peter Naur (1928–2016) – Backus–Naur form (BNF), ALGOL 60, IFIP WG 2.1 member Fredrik Neij – cocreated The Pirate Bay Graham Nelson – created the Inform authoring system for interactive fiction Greg Nelson (1953–2015) – satisfiability modulo theories, extended static checking, program verification, Modula-3 committee, Simplify theorem prover in ESC/Java Klára Dán von Neumann (1911–1963) – principal programmer for the MANIAC I Maurice Nivat (1937–2017) – theoretical computer science, Theoretical Computer Science journal, ALGOL, IFIP WG 2.1 member Peter Norton – programmed Norton Utilities Kristen Nygaard (1926–2002) – Simula, object-oriented programming O Ed Oates – cocreated Oracle Database, cofounded Oracle Corporation Martin Odersky – Scala Peter O'Hearn – separation logic, bunched logic, Infer Static Analyzer Jarkko Oikarinen – created Internet Relay Chat (IRC) Andrew and Philip Oliver, the Oliver Twins – many ZX Spectrum games including Dizzy John Ousterhout – created Tcl/Tk P Keith Packard – X Window System Larry Page – cofounded Google, Inc. Alexey Pajitnov – created the game Tetris on the Electronika 60 Seymour Papert – Logo (programming language) David Park (1935–1990) – first Lisp implementation, expert in fairness, program schemas, bisimulation in concurrent computing Mike Paterson – algorithms, analysis of algorithms (complexity) Tim Paterson – authored 86-DOS (QDOS) Markus Persson – created Minecraft Jeffrey Peterson – key free and open-source software architect, created Quepasa Charles Petzold – authored many Microsoft Windows programming books Simon Peyton Jones – functional programming, Glasgow Haskell Compiler, C-- Rob Pike – wrote the first bitmapped window system for Unix, cocreated the UTF-8 character encoding, authored the text editor sam and the programming environment acme, main author of the Plan 9 and Inferno operating systems, co-authored Go (programming language) Kent Pitman – technical contributor to the ANSI Common Lisp standard Robin Popplestone – COWSEL (renamed POP-1), POP-2, POP-11 languages, Poplog IDE; Freddy II robot Tom Preston-Werner – cofounded GitHub R Theo de Raadt – founding member of NetBSD, founded OpenBSD and OpenSSH Brian Randell – ALGOL 60, software fault tolerance, dependability, pre-1950 history of computing hardware T. V. Raman – specializes in accessibility research (Emacspeak, ChromeVox (screen reader for Google Chrome)) Jef Raskin – started the Macintosh project at Apple Computer, designed the Canon Cat computer, developed the Archy (The Humane Environment) program Eric S. Raymond – Open Source movement, authored fetchmail Hans Reiser – created the ReiserFS file system John Resig – creator and lead developer of the jQuery JavaScript library Craig Reynolds – created the boids computer graphics simulation John C. Reynolds – continuations, definitional interpreters, defunctionalization, Forsythe, Gedanken language, intersection types, polymorphic lambda calculus, relational parametricity, separation logic, ALGOL Reinder van de Riet – European editor of Data and Knowledge Engineering, COLOR-X event modeling language Dennis Ritchie – C, Unix, Plan 9 from Bell Labs, Inferno Ron Rivest – cocreated the RSA algorithm (being the R in that name),
created RC4 and MD5 John Romero – first-person shooters Doom, Quake Blake Ross – co-authored Mozilla Firefox Douglas T. Ross – Automatically Programmed Tools (APT), Computer-aided design, structured analysis and design technique, ALGOL X Guido van Rossum – Python Philip Rubin – articulatory synthesis (ASY), sinewave synthesis (SWS), and the HADES signal processing system Jeff Rulifson – lead programmer on the NLS project Rusty Russell – created iptables for Linux Steve Russell – first Lisp interpreter; original Spacewar! graphic video game Mark Russinovich – Sysinternals.com, Filemon, Regmon, Process Explorer, TCPView and RootkitRevealer S Bob Sabiston – Rotoshop, interpolating rotoscope animation software Muni Sakya – Nepalese software engineer Chris Sawyer – developed RollerCoaster Tycoon and the Transport Tycoon series Cher Scarlett – Apple, Webflow, Blizzard Entertainment, World Wide Technology, and USA Today Bob Scheifler – X Window System, Jini Isai Scheinberg – IBM engineer, founded PokerStars Bill Schelter – GNU Maxima, GNU Common Lisp John Scholes – Direct functions Randal L. Schwartz – Just another Perl hacker Adi Shamir – cocreated the RSA algorithm (being the S in that name) Mike Shaver – founding member of the Mozilla Organization Cliff Shaw – Information Processing Language (IPL), the first AI language Zed Shaw – wrote the Mongrel Web Server, for Ruby web applications Emily Short – prolific writer of interactive fiction and co-developer of Inform version 7 Jacek Sieka – developed DC++, an open-source, peer-to-peer file-sharing client Daniel Siewiorek – electronic design automation, reliability computing, context aware mobile computing, wearable computing, computer-aided design, rapid prototyping, fault tolerance Ken Silverman – created Duke Nukem 3D's graphics engine Charles Simonyi – Hungarian notation, Bravo (the first WYSIWYG text editor), Microsoft Word Colin Simpson – developed CircuitLogix simulation software Rich Skrenta – cofounded DMOZ David Canfield Smith – invented interface icons, programming by demonstration, developed graphical user interface, Xerox Star; Xerox PARC researcher, cofounded Dest Systems, Cognition Matthew Smith – ZX Spectrum games, including Manic Miner and Jet Set Willy Henry Spencer – C News, Regex Joel Spolsky – cofounded Fog Creek Software and Stack Overflow Quentin Stafford-Fraser – authored the original VNC viewer, the first Windows VNC server, and the client program for the first webcam Richard Stallman – Emacs, GNU Compiler Collection (GCC), GDB, founder and pioneer of the GNU Project, terminal-independent I/O pioneer on the Incompatible Timesharing System (ITS), Lisp machine manual Guy L. Steele Jr.
– Common Lisp, Scheme, Java Alexander Stepanov – created the Standard Template Library Christopher Strachey – draughts playing program Ludvig Strigeus – created μTorrent, OpenTTD, ScummVM and the technology behind Spotify Bjarne Stroustrup – created C++ Zeev Suraski – cocreated the PHP language Gerald Jay Sussman – Scheme Herb Sutter – chair of the ISO C++ standards committee and C++ expert Gottfrid Svartholm – cocreated The Pirate Bay Aaron Swartz – software developer, writer, Internet activist Tim Sweeney – the Unreal engine, UnrealScript, ZZT T Amir Taaki – leading developer of the Bitcoin project Andrew Tanenbaum – Minix Audrey "Autrijus" Tang – designed the Pugs compiler–interpreter for Perl 6 (now Raku); Digital Affairs Minister, Taiwan 2022–2024 Simon Tatham – Netwide Assembler (NASM), PuTTY Larry Tesler – the Smalltalk code browser, debugger and object inspector, and (with Tim Mott) the Gypsy word processor Jon Stephenson von Tetzchner – cocreated the Opera web browser Avie Tevanian – authored the Mach kernel Ken Thompson – mainly designed and authored Unix, Plan 9 and Inferno operating systems, B and Bon languages (precursors of C), created the UTF-8 character encoding, introduced regular expressions in QED and co-authored the Go language Simon Thompson – functional programming research, textbooks; Cardano domain-specific languages: Marlowe Michael Tiemann – G++, GNU Compiler Collection (GCC) Linus Torvalds – original author and current maintainer of the Linux kernel and created Git, a source code management system Andrew Tridgell – Samba, Rsync Roy Trubshaw – together with Richard Bartle, created MUD (the first multi-user dungeon) Bob Truel – cofounded DMOZ Alan Turing – mathematician, computer scientist and cryptanalyst David Turner – SASL, Kent Recursive Calculator, Miranda, IFIP WG 2.1 member V Wietse Venema – Postfix, Security Administrator Tool for Analyzing Networks (SATAN), TCP Wrapper Bernard Vauquois – pioneered computer science in France, machine translation (MT) theory and practice including the Vauquois triangle, ALGOL 60 Pat Villani – original author of the FreeDOS/DOS-C kernel, maintainer of the defunct Linux for Windows 9x distribution Paul Vixie – BIND, Cron Patrick Volkerding – original author and current maintainer of the Slackware Linux distribution W Eiiti Wada – ALGOL N, IFIP WG 2.1 member, Japanese Industrial Standards (JIS) X 0208, 0212, Happy Hacking Keyboard John Walker – cofounded Autodesk Larry Wall – Warp (1980s space-war game), rn, patch, Perl Bob Wallace – author of the PC-Write word processor; considered a cocreator of shareware Chris Wanstrath – cofounded GitHub, created the Atom text editor and the Mustache template system John Warnock – created PostScript Robert Watson – FreeBSD network stack parallelism, TrustedBSD project and OpenBSM Joseph Henry Wegstein – ALGOL 58, ALGOL 60, IFIP WG 2.1 member, data processing technical standards, fingerprint analysis Pei-Yuan Wei – authored ViolaWWW, one of the earliest graphical browsers Peter J.
Weinberger – cocreated AWK (being the W in that name) Jim Weirich – created Rake, Builder, and RubyGems for Ruby; popular teacher and conference speaker Joseph Weizenbaum – created ELIZA David Wheeler – cocreated the subroutine; designed WAKE; co-designed the Tiny Encryption Algorithm, XTEA, and the Burrows–Wheeler transform Molly White – HubSpot; creator of Web3 Is Going Just Great Arthur Whitney – A+, K why the lucky stiff – created libraries and writings for Ruby, including the quirky, popular Why's (poignant) Guide to Ruby, to teach programming Adriaan van Wijngaarden – Dutch pioneer; ARRA, ALGOL, IFIP WG 2.1 member Bruce Wilcox – computer Go pioneer, programmed NEMESIS Go Master Evan Williams – created Blogger and cofounded Twitter Roberta and Ken Williams – Sierra Entertainment, King's Quest, graphic adventure games Sophie Wilson – designed the instruction set for the Acorn RISC Machine, authored BBC BASIC Dave Winer – developed XML-RPC, the Frontier scripting language Niklaus Wirth – ALGOL W, IFIP WG 2.1 member, Pascal, Modula-2, Oberon Stephen Wolfram – created Mathematica Don Woods – INTERCAL, Colossal Cave Adventure Philip Woodward – ambiguity function, sinc function, comb operator, rep operator, ALGOL 68-R Steve Wozniak – Breakout, Apple Integer BASIC, cofounded Apple Inc. Will Wright – created the SimCity series, cofounded Maxis William Wulf – BLISS system programming language + optimizing compiler, Hydra operating system, Tartan Laboratories Y Jerry Yang – co-created Yahoo! Victor Yngve – authored the first string processing language, COMIT Nobuo Yoneda – Yoneda lemma, Yoneda product, ALGOL, IFIP WG 2.1 member Z Matei Zaharia – created Apache Spark Jamie Zawinski – Lucid Emacs, Netscape Navigator, Mozilla, XScreenSaver Phil Zimmermann – created the encryption software PGP, the ZRTP protocol, and Zfone Mark Zuckerberg – created Facebook See also List of computer scientists List of computing people List of members of the National Academy of Sciences (computer and information sciences) List of pioneers in computer science List of programming language researchers List of Russian programmers List of video game industry people (programming)
List of programmers
[ "Technology" ]
6,395
[ "Computing-related lists", "Lists of computer scientists" ]
11,034
https://en.wikipedia.org/wiki/Fluid%20dynamics
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time. Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases. Equations The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the first law of thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored. For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state: $p = \frac{\rho R T}{M}$, where $p$ is pressure, $\rho$ is density, and $T$ is the absolute temperature, while $R$ is the gas constant and $M$ is the molar mass for a particular gas. A constitutive relation may also be useful. Conservation laws Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form.
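Before turning to the conservation laws in detail, the perfect gas equation of state above can be made concrete with a short numerical sketch. The sample values below (standard sea-level air, and the molar mass of dry air) are illustrative assumptions, not part of the source text:

```python
# Density of an ideal gas from the perfect gas equation of state:
#   p = rho * R * T / M   =>   rho = p * M / (R * T)

R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_density(p_pa: float, t_kelvin: float, molar_mass_kg_mol: float) -> float:
    """Return density in kg/m^3 given pressure in Pa, temperature in K,
    and molar mass in kg/mol."""
    return p_pa * molar_mass_kg_mol / (R * t_kelvin)

# Example: dry air (M ~ 0.02897 kg/mol) at sea-level standard conditions.
rho_air = ideal_gas_density(101_325.0, 288.15, 0.02897)
print(f"air density: {rho_air:.3f} kg/m^3")  # ~1.225 kg/m^3
```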
The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow. Classifications Compressible versus incompressible flow All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used. Mathematically, incompressibility is expressed by saying that the density of a fluid parcel does not change as it moves in the flow field, that is, $\frac{D\rho}{Dt} = 0$, where $\frac{D}{Dt}$ is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density. For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate. Newtonian versus non-Newtonian fluids All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions of inverse time, $T^{-1}$. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate. Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants. Inviscid versus viscous versus Stokes flow The dynamics of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects. The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number ($\mathrm{Re} \ll 1$) indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow.
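The two rules of thumb above (the Mach 0.3 guideline for treating a gas flow as incompressible, and the low-Reynolds-number creeping-flow regime) can be wrapped in a small classifier. This is an illustrative sketch only; the numeric thresholds are the rough guides quoted in the text, and the turbulence cutoff used here is a placeholder, since transition actually depends on geometry and disturbance levels:

```python
def mach_number(speed: float, speed_of_sound: float = 343.0) -> float:
    """Mach number M = U / a (default a: air at roughly 20 degrees C)."""
    return speed / speed_of_sound

def reynolds_number(density: float, speed: float, length: float, mu: float) -> float:
    """Re = rho * U * L / mu, with dynamic viscosity mu in Pa*s."""
    return density * speed * length / mu

def classify(mach: float, re: float) -> str:
    compressibility = "incompressible (M < 0.3)" if mach < 0.3 else "compressible"
    if re < 1.0:
        regime = "Stokes (creeping) flow"
    elif re > 1e4:  # placeholder threshold; real transition is geometry-dependent
        regime = "inertia-dominated, likely turbulent"
    else:
        regime = "intermediate Reynolds number"
    return f"{compressibility}; {regime}"

# Example: a 5 cm sphere moving at 10 m/s through sea-level air.
re = reynolds_number(density=1.225, speed=10.0, length=0.05, mu=1.81e-5)
print(classify(mach_number(10.0), re))
```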
In contrast to the creeping-flow regime, high Reynolds numbers ($\mathrm{Re} \gg 1$) indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression. This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as d'Alembert's paradox. A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions. Steady versus unsteady flow A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady. Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow. Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field. Laminar versus turbulent flow Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component. It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations.
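The Reynolds decomposition just mentioned is easy to demonstrate on a sampled velocity signal: the quantity is split into a time-averaged part and a fluctuating part whose average is zero. A minimal sketch using synthetic data (the signal below is invented, not a measurement):

```python
import random

# Synthetic 'measured' velocity signal: a steady mean plus random fluctuations.
random.seed(0)
u = [10.0 + random.gauss(0.0, 0.5) for _ in range(10_000)]

# Reynolds decomposition: u(t) = U + u'(t), where U is the time average.
U = sum(u) / len(u)
u_prime = [sample - U for sample in u]

rms = (sum(x * x for x in u_prime) / len(u_prime)) ** 0.5
print(f"mean component U        : {U:.3f} m/s")
print(f"mean of fluctuations u' : {sum(u_prime) / len(u_prime):.2e}")  # ~0 by construction
print(f"turbulence intensity    : {rms / U:.1%}")
```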
Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows. Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human ($L$ > 3 m), moving faster than 20 m/s (72 km/h), is well beyond the limit of DNS simulation ($\mathrm{Re}$ = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such a modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES) — a combination of LES and RANS turbulence modelling. Other approximations There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below. The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small. Lubrication theory and Hele–Shaw flow exploit the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected. Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid. The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small. Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths. In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. They are useful in the study of atmospheric dynamics. Multidisciplinary types Flows according to Mach regimes While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of the speed of sound (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately. Reactive versus non-reactive flows Reactive flows are flows that are chemically reactive, which find applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics.
In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) needs to be derived, where the production/depletion rate of any species is obtained by simultaneously solving the equations of chemical kinetics. Magnetohydrodynamics Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism. Relativistic fluid dynamics Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime. Fluctuating hydrodynamics This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux. Terminology The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods. Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics. Characteristic numbers Terminology in incompressible fluid dynamics The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field. A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field. Terminology in compressible fluid dynamics In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion.
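For a perfect gas, a standard compressible-flow relation (consistent with, though not quoted in, the text above) connects static and total temperature through the Mach number: $T_0 = T\left(1 + \frac{\gamma - 1}{2} M^2\right)$. A sketch, assuming air with ratio of specific heats $\gamma = 1.4$:

```python
def total_temperature(t_static: float, mach: float, gamma: float = 1.4) -> float:
    """Stagnation (total) temperature of a perfect gas brought to rest
    isentropically: T0 = T * (1 + (gamma - 1)/2 * M^2)."""
    return t_static * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

# Example: static air temperature 288 K at Mach 0.85 (a typical airliner cruise Mach).
print(f"T0 = {total_temperature(288.0, 0.85):.1f} K")  # ~329.6 K
```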
To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference. Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy". See also List of publications in fluid dynamics List of fluid dynamicists References External links Encyclopedia: Fluid dynamics, Scholarpedia National Committee for Fluid Mechanics Films (NCFMF), containing films on several subjects in fluid dynamics (in RealMedia format) Gallery of fluid motion, "a visual record of the aesthetic and science of contemporary fluid mechanics," from the American Physical Society List of Fluid Dynamics books
Fluid dynamics
[ "Physics", "Chemistry", "Engineering" ]
3,710
[ "Continuum mechanics", "Building engineering", "Chemical engineering", "Classical mechanics", "Aerodynamics", "Mechanical engineering", "Aerospace engineering", "Piping", "Fluid dynamics" ]
11,036
https://en.wikipedia.org/wiki/Fin
A fin is a thin component or appendage attached to a larger body or structure. Fins typically function as foils that produce lift or thrust, or provide the ability to steer or stabilize motion while traveling in water, air, or other fluids. Fins are also used to increase surface areas for heat transfer purposes, or simply as ornamentation. Fins first evolved on fish as a means of locomotion. Fish fins are used to generate thrust and control the subsequent motion. Fish and other aquatic animals, such as cetaceans, actively propel and steer themselves with pectoral and tail fins. As they swim, they use other fins, such as dorsal and anal fins, to achieve stability and refine their maneuvering. The fins on the tails of cetaceans, ichthyosaurs, metriorhynchids, mosasaurs and plesiosaurs are called flukes. Thrust generation Foil-shaped fins generate thrust when moved; the lift of the fin sets water or air in motion and pushes the fin in the opposite direction. Aquatic animals get significant thrust by moving fins back and forth in water. Often the tail fin is used, but some aquatic animals generate thrust from pectoral fins. Fins can also generate thrust if they are rotated in air or water. Turbines and propellers (and sometimes fans and pumps) use a number of rotating fins, also called foils, wings, arms or blades. Propellers use the fins to translate torquing force to lateral thrust, thus propelling an aircraft or ship. Turbines work in reverse, using the lift of the blades to generate torque and power from moving gases or water. Cavitation can be a problem with high power applications, resulting in damage to propellers or turbines, as well as noise and loss of power. Cavitation occurs when negative pressure causes bubbles (cavities) to form in a liquid, which then promptly and violently collapse. It can cause significant damage and wear. Cavitation damage can also occur to the tail fins of powerful swimming marine animals, such as dolphins and tuna. Cavitation is more likely to occur near the surface of the ocean, where the ambient water pressure is relatively low. Even if they have the power to swim faster, dolphins may have to restrict their speed because collapsing cavitation bubbles on their tail are too painful. Cavitation also slows tuna, but for a different reason. Unlike dolphins, these fish do not feel the bubbles, because they have bony fins without nerve endings. Nevertheless, they cannot swim faster because the cavitation bubbles create a vapor film around their fins that limits their speed. Lesions have been found on tuna that are consistent with cavitation damage. Scombrid fishes (tuna, mackerel and bonito) are particularly high-performance swimmers. Along the margin at the rear of their bodies is a line of small rayless, non-retractable fins, known as finlets. There has been much speculation about the function of these finlets. Research done in 2000 and 2001 by Nauen and Lauder indicated that "the finlets have a hydrodynamic effect on local flow during steady swimming" and that "the most posterior finlet is oriented to redirect flow into the developing tail vortex, which may increase thrust produced by the tail of swimming mackerel". Fish use multiple fins, so it is possible that a given fin can have a hydrodynamic interaction with another fin. In particular, the fins immediately upstream of the caudal (tail) fin may be proximate fins that can directly affect the flow dynamics at the caudal fin.
In 2011, researchers using volumetric imaging techniques were able to generate "the first instantaneous three-dimensional views of wake structures as they are produced by freely swimming fishes". They found that "continuous tail beats resulted in the formation of a linked chain of vortex rings" and that "the dorsal and anal fin wakes are rapidly entrained by the caudal fin wake, approximately within the timeframe of a subsequent tail beat". Motion control Once motion has been established, the motion itself can be controlled with the use of other fins. Boats control direction (yaw) with fin-like rudders, and roll with stabilizer and keel fins. Airplanes achieve similar results with small specialised fins that change the shape of their wings and tail fins. Stabilising fins are used as fletching on arrows and some darts, and at the rear of some bombs, missiles, rockets and self-propelled torpedoes. These are typically planar and shaped like small wings, although grid fins are sometimes used. Static fins have also been used for one satellite, GOCE. Temperature regulation Engineering fins are also used as heat transfer fins to regulate temperature in heat sinks or fin radiators. Ornamentation and other uses In biology, fins can have an adaptive significance as sexual ornaments. During courtship, the female cichlid, Pelvicachromis taeniatus, displays a large and visually arresting purple pelvic fin. "The researchers found that males clearly preferred females with a larger pelvic fin and that pelvic fins grew in a more disproportionate way than other fins on female fish." Reshaping human feet with swim fins, rather like the tail fin of a fish, adds thrust and efficiency to the kicks of a swimmer or underwater diver. Surfboard fins provide surfers with means to maneuver and control their boards. Contemporary surfboards often have a centre fin and two cambered side fins. The bodies of reef fishes are often shaped differently from open water fishes. Open water fishes are usually built for speed, streamlined like torpedoes to minimise friction as they move through the water. Reef fish operate in the relatively confined spaces and complex underwater landscapes of coral reefs. For this, manoeuvrability is more important than straight-line speed, so coral reef fish have developed bodies which optimize their ability to dart and change direction. They outwit predators by dodging into fissures in the reef or playing hide and seek around coral heads. The pectoral and pelvic fins of many reef fish, such as butterflyfish, damselfish and angelfish, have evolved so they can act as brakes and allow complex maneuvers. Many reef fish, such as butterflyfish, damselfish and angelfish, have evolved bodies which are deep and laterally compressed like a pancake, and will fit into fissures in rocks. Their pelvic and pectoral fins are designed differently, so they act together with the flattened body to optimise maneuverability. Some fishes, such as puffer fish, filefish and trunkfish, rely on pectoral fins for swimming and hardly use tail fins at all. Evolution There is an old theory, proposed by anatomist Carl Gegenbaur and often disregarded in science textbooks, "that fins and (later) limbs evolved from the gills of an extinct vertebrate". Gaps in the fossil record had not allowed a definitive conclusion.
In 2009, researchers from the University of Chicago found evidence that the "genetic architecture of gills, fins and limbs is the same", and that "the skeleton of any appendage off the body of an animal is probably patterned by the developmental genetic program that we have traced back to formation of gills in sharks". Recent studies support the idea that gill arches and paired fins are serially homologous and thus that fins may have evolved from gill tissues. Fish are the ancestors of all mammals, reptiles, birds and amphibians. In particular, terrestrial tetrapods (four-legged animals) evolved from fish and made their first forays onto land 400 million years ago. They used paired pectoral and pelvic fins for locomotion. The pectoral fins developed into forelegs (arms in the case of humans) and the pelvic fins developed into hind legs. Much of the genetic machinery that builds a walking limb in a tetrapod is already present in the swimming fin of a fish. In 2011, researchers at Monash University in Australia used primitive but still living lungfish "to trace the evolution of pelvic fin muscles to find out how the load-bearing hind limbs of the tetrapods evolved." Further research at the University of Chicago found bottom-walking lungfishes had already evolved characteristics of the walking gaits of terrestrial tetrapods. In a classic example of convergent evolution, the pectoral limbs of pterosaurs, birds and bats further evolved along independent paths into flying wings. Even with flying wings there are many similarities with walking legs, and core aspects of the genetic blueprint of the pectoral fin have been retained. About 200 million years ago the first mammals appeared. A group of these mammals started returning to the sea about 52 million years ago, thus completing a circle. These are the cetaceans (whales, dolphins and porpoises). Recent DNA analysis suggests that cetaceans evolved from within the even-toed ungulates, and that they share a common ancestor with the hippopotamus. About 23 million years ago another group of bearlike land mammals started returning to the sea. These were the pinnipeds (seals). What had become walking limbs in cetaceans and seals evolved further, independently in a reverse form of convergent evolution, back to new forms of swimming fins. The forelimbs became flippers and, in pinnipeds, the hind limbs became a tail terminating in two fins (the cetacean fluke, conversely, is an entirely new organ). Fish tails are usually vertical and move from side to side. Cetacean flukes are horizontal and move up and down, because cetacean spines bend the same way as in other mammals. Ichthyosaurs are ancient reptiles that resembled dolphins. They first appeared about 245 million years ago and disappeared about 90 million years ago. "This sea-going reptile with terrestrial ancestors converged so strongly on fishes that it actually evolved a dorsal fin and tail in just the right place and with just the right hydrological design. These structures are all the more remarkable because they evolved from nothing — the ancestral terrestrial reptile had no hump on its back or blade on its tail to serve as a precursor." The biologist Stephen Jay Gould said the ichthyosaur was his favorite example of convergent evolution. Robotics The use of fins for the propulsion of aquatic animals can be remarkably effective. It has been calculated that some fish can achieve a propulsive efficiency greater than 90%. 
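The propulsive efficiency figure just quoted can be made concrete with the usual definition of Froude (propulsive) efficiency: the useful thrust power delivered, divided by the total mechanical power expended. The numbers in the sketch below are invented for illustration, not measurements from the studies the article refers to:

```python
def froude_efficiency(thrust_n: float, speed_m_s: float, power_in_w: float) -> float:
    """Froude (propulsive) efficiency: useful power T*U over input power P."""
    return thrust_n * speed_m_s / power_in_w

# Hypothetical steady swimmer: 2 N of mean thrust at 1.5 m/s for 3.2 W of input power.
eta = froude_efficiency(thrust_n=2.0, speed_m_s=1.5, power_in_w=3.2)
print(f"propulsive efficiency: {eta:.0%}")  # ~94%, in the >90% range cited above
```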
Fish can accelerate and maneuver much more effectively than boats or submarines, and produce less water disturbance and noise. This has led to biomimetic studies of underwater robots which attempt to emulate the locomotion of aquatic animals. An example is the Robot Tuna built by the Institute of Field Robotics, to analyze and mathematically model thunniform motion. In 2005, the Sea Life London Aquarium displayed three robotic fish created by the computer science department at the University of Essex. The fish were designed to be autonomous, swimming around and avoiding obstacles like real fish. Their creator claimed that he was trying to combine "the speed of tuna, acceleration of a pike, and the navigating skills of an eel". The AquaPenguin, developed by Festo of Germany, copies the streamlined shape and propulsion by front flippers of penguins. Festo also developed AquaRay, AquaJelly and AiraCuda, respectively emulating the locomotion of manta rays, jellyfish and barracuda. In 2004, Hugh Herr at MIT prototyped a biomechatronic robotic fish with a living actuator by surgically transplanting muscles from frog legs to the robot and then making the robot swim by pulsing the muscle fibers with electricity. Robotic fish offer some research advantages, such as the ability to examine part of a fish design in isolation from the rest, and variance of a single parameter, such as flexibility or direction. Researchers can directly measure forces more easily than in live fish. "Robotic devices also facilitate three-dimensional kinematic studies and correlated hydrodynamic analyses, as the location of the locomotor surface can be known accurately. And, individual components of a natural motion (such as outstroke vs. instroke of a flapping appendage) can be programmed separately, which is certainly difficult to achieve when working with a live animal." See also Aquatic locomotion Fin and flipper locomotion Fish locomotion Robot locomotion RoboTuna Sail (submarine) Surfboard fin References Further reading Blake, Robert William (1983) Fish Locomotion. CUP Archive. Tangorra JL, Esposito CJ and Lauder GV (2009) "Biorobotic fins for investigations of fish locomotion" In: Intelligent Robots and Systems, pages 2120–2125. Tu X and Terzopoulos D (1994) "Artificial fishes: Physics, locomotion, perception, behavior" In: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 43–50. External links Locomotion in Fish, Earthlife. Computational fluid dynamics tutorial, many examples and images, with references to robotic fish. Fish Skin Research, University of British Columbia. A fin-tuned design, The Economist, 19 November 2008.
Fin
[ "Engineering" ]
2,755
[ "Rocketry", "Aerospace engineering" ]
11,042
https://en.wikipedia.org/wiki/Fat
In nutrition, biology, and chemistry, fat usually means any ester of fatty acids, or a mixture of such compounds, most commonly those that occur in living beings or in food. The term often refers specifically to triglycerides (triple esters of glycerol), that are the main components of vegetable oils and of fatty tissue in animals; or, even more narrowly, to triglycerides that are solid or semisolid at room temperature, thus excluding oils. The term may also be used more broadly as a synonym of lipid—any substance of biological relevance, composed of carbon, hydrogen, and oxygen, that is insoluble in water but soluble in non-polar solvents. In this sense, besides the triglycerides, the term would include several other types of compounds like mono- and diglycerides, phospholipids (such as lecithin), sterols (such as cholesterol), waxes (such as beeswax), and free fatty acids, which are usually present in human diet in smaller amounts. Fats are one of the three main macronutrient groups in human diet, along with carbohydrates and proteins, and the main components of common food products like milk, butter, tallow, lard, salt pork, and cooking oils. They are a major and dense source of food energy for many animals and play important structural and metabolic functions in most living beings, including energy storage, waterproofing, and thermal insulation. The human body can produce the fat it requires from other food ingredients, except for a few essential fatty acids that must be included in the diet. Dietary fats are also the carriers of some flavor and aroma ingredients and vitamins that are not water-soluble. Biological importance In humans and many animals, fats serve both as energy sources and as stores for energy in excess of what the body needs immediately. Each gram of fat when burned or metabolized releases about nine food calories (37 kJ = 8.8 kcal). Fats are also sources of essential fatty acids, an important dietary requirement. Vitamins A, D, E, and K are fat-soluble, meaning they can only be digested, absorbed, and transported in conjunction with fats. Fats play a vital role in maintaining healthy skin and hair, insulating body organs against shock, maintaining body temperature, and promoting healthy cell function. Fat also serves as a useful buffer against a host of diseases. When a particular substance, whether chemical or biotic, reaches unsafe levels in the bloodstream, the body can effectively dilute—or at least maintain equilibrium of—the offending substances by storing them in new fat tissue. This helps to protect vital organs, until such time as the offending substances can be metabolized or removed from the body by such means as excretion, urination, accidental or intentional bloodletting, sebum excretion, and hair growth. Adipose tissue In animals, adipose tissue, or fatty tissue, is the body's means of storing metabolic energy over extended periods of time. Adipocytes (fat cells) store fat derived from the diet and from liver metabolism. Under energy stress these cells may degrade their stored fat to supply fatty acids and also glycerol to the circulation. These metabolic activities are regulated by several hormones (e.g., insulin, glucagon and epinephrine). Adipose tissue also secretes the hormone leptin. Production and processing A variety of chemical and physical techniques are used for the production and processing of fats, both industrially and in cottage or home settings. They include: Pressing to extract liquid fats from fruits, seeds, or algae, e.g.
olive oil from olives Solvent extraction using solvents like hexane or supercritical carbon dioxide Rendering, the melting of fat in adipose tissue, e.g. to produce tallow, lard, fish oil, and whale oil Churning of milk to produce butter Hydrogenation to increase the degree of saturation of the fatty acids Interesterification, the rearrangement of fatty acids across different triglycerides Winterization to remove oil components with higher melting points Clarification of butter Metabolism The pancreatic lipase acts at the ester bond, hydrolyzing the bond and "releasing" the fatty acid. In triglyceride form, lipids cannot be absorbed by the duodenum. Fatty acids, monoglycerides (one glycerol, one fatty acid), and some diglycerides are absorbed by the duodenum, once the triglycerides have been broken down. In the intestine, following the secretion of lipases and bile, triglycerides are split into monoacylglycerol and free fatty acids in a process called lipolysis. They are subsequently moved to absorptive enterocyte cells lining the intestines. The triglycerides are rebuilt in the enterocytes from their fragments and packaged together with cholesterol and proteins to form chylomicrons. These are excreted from the cells and collected by the lymph system and transported to the large vessels near the heart before being mixed into the blood. Various tissues can capture the chylomicrons, releasing the triglycerides to be used as a source of energy. Liver cells can synthesize and store triglycerides. When the body requires fatty acids as an energy source, the hormone glucagon signals the breakdown of the triglycerides by hormone-sensitive lipase to release free fatty acids. As the brain cannot utilize fatty acids as an energy source (unless converted to a ketone), the glycerol component of triglycerides can be converted into glucose, via gluconeogenesis by conversion into dihydroxyacetone phosphate and then into glyceraldehyde 3-phosphate, for brain fuel when it is broken down. Fat cells may also be broken down for that reason if the brain's needs ever outweigh the body's. Triglycerides cannot pass through cell membranes freely. Special enzymes on the walls of blood vessels called lipoprotein lipases must break down triglycerides into free fatty acids and glycerol. Fatty acids can then be taken up by cells via fatty acid transport proteins (FATPs). Triglycerides, as major components of very-low-density lipoprotein (VLDL) and chylomicrons, play an important role in metabolism as energy sources and transporters of dietary fat. They contain more than twice as much energy (approximately 9 kcal/g or 38 kJ/g) as carbohydrates (approximately 4 kcal/g or 17 kJ/g). Nutritional and health aspects The most common type of fat, in human diet and most living beings, is a triglyceride, an ester of the triple alcohol glycerol and three fatty acids. The molecule of a triglyceride can be described as resulting from a condensation reaction (specifically, esterification) between each of glycerol's –OH groups and the HO– part of the carboxyl group of each fatty acid, forming an ester bridge with elimination of a water molecule (H2O). Other less common types of fats include diglycerides and monoglycerides, where the esterification is limited to two or just one of glycerol's –OH groups. Other alcohols, such as cetyl alcohol (predominant in spermaceti), may replace glycerol. In the phospholipids, one of the fatty acids is replaced by phosphoric acid or a monoester thereof.
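The energy densities quoted above (about 9 kcal/g for fat versus roughly 4 kcal/g for carbohydrate, and about 4 kcal/g for protein as well; the protein figure is a standard Atwater value, not stated in the text above) make food-energy arithmetic easy to sketch. The sample meal below is invented for illustration:

```python
# Approximate Atwater energy densities, kcal per gram.
KCAL_PER_GRAM = {"fat": 9.0, "carbohydrate": 4.0, "protein": 4.0}

def food_energy_kcal(grams: dict) -> float:
    """Total food energy of a meal given grams of each macronutrient."""
    return sum(KCAL_PER_GRAM[nutrient] * g for nutrient, g in grams.items())

# Hypothetical meal: 20 g fat, 50 g carbohydrate, 25 g protein.
meal = {"fat": 20.0, "carbohydrate": 50.0, "protein": 25.0}
print(f"{food_energy_kcal(meal):.0f} kcal")  # 480 kcal, of which 180 come from fat
```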
The benefits and risks of various amounts and types of dietary fats have been the object of much study, and are still highly controversial topics. Essential fatty acids There are two essential fatty acids (EFAs) in human nutrition: alpha-linolenic acid (an omega-3 fatty acid) and linoleic acid (an omega-6 fatty acid). The adult body can synthesize other lipids that it needs from these two. Dietary sources Saturated vs. unsaturated fats Different foods contain different amounts of fat with different proportions of saturated and unsaturated fatty acids. Some animal products, like beef and dairy products made with whole or reduced fat milk (like yogurt, ice cream, cheese and butter), have mostly saturated fatty acids (and some have significant contents of dietary cholesterol). Other animal products, like pork, poultry, eggs, and seafood, have mostly unsaturated fats. Industrialized baked goods may use fats with high unsaturated fat contents as well, especially those containing partially hydrogenated oils, and processed foods that are deep-fried in hydrogenated oil are high in saturated fat content. Plants and fish oil generally contain a higher proportion of unsaturated acids, although there are exceptions such as coconut oil and palm kernel oil. Foods containing unsaturated fats include avocado, nuts, olive oils, and vegetable oils such as canola. Many scientific studies have found that replacing saturated fats with cis unsaturated fats in the diet reduces risk of cardiovascular diseases (CVDs), diabetes, or death. These studies prompted many medical organizations and public health departments, including the World Health Organization (WHO), to officially issue that advice. Countries with such recommendations include the United Kingdom, the United States, India, Canada, Australia, Singapore, New Zealand, and Hong Kong. A 2004 review concluded that "no lower safe limit of specific saturated fatty acid intakes has been identified" and recommended that the influence of varying saturated fatty acid intakes against a background of different individual lifestyles and genetic backgrounds should be the focus in future studies. This advice is often oversimplified by labeling the two kinds of fats as bad fats and good fats, respectively. However, since the fats and oils in most natural and traditionally processed foods contain both unsaturated and saturated fatty acids, the complete exclusion of saturated fat is unrealistic and possibly unwise. For instance, some foods rich in saturated fat, such as coconut and palm oil, are an important source of cheap dietary calories for a large fraction of the population in developing countries. Concerns were also expressed at a 2010 conference of the American Dietetic Association that a blanket recommendation to avoid saturated fats could drive people to also reduce the amount of polyunsaturated fats, which may have health benefits, and/or replace fats by refined carbohydrates — which carry a high risk of obesity and heart disease. For these reasons, the U.S. Food and Drug Administration, for example, recommends consuming less than 10% (7% for high-risk groups) of calories from saturated fat, with 15–30% of total calories from all fat. A general 7% limit was also recommended by the American Heart Association (AHA) in 2006. The WHO/FAO report also recommended replacing fats so as to reduce the content of myristic and palmitic acids, specifically.
The so-called Mediterranean diet, prevalent in many countries in the Mediterranean Sea area, includes more total fat than the diet of Northern European countries, but most of it is in the form of unsaturated fatty acids (specifically, monounsaturated and omega-3) from olive oil and fish, vegetables, and certain meats like lamb, while consumption of saturated fat is minimal in comparison. A 2017 review found evidence that a Mediterranean-style diet could reduce the risk of cardiovascular diseases, overall cancer incidence, neurodegenerative diseases, diabetes, and mortality rate. A 2018 review showed that a Mediterranean-like diet may improve overall health status, such as reduced risk of non-communicable diseases. It also may reduce the social and economic costs of diet-related illnesses. A small number of contemporary reviews have challenged this negative view of saturated fats. For example, an evaluation of evidence from 1966 to 1973 of the observed health impact of replacing dietary saturated fat with linoleic acid found that it increased rates of death from all causes, coronary heart disease, and cardiovascular disease. These studies have been disputed by many scientists, and the consensus in the medical community is that saturated fat and cardiovascular disease are closely related. Still, these discordant studies fueled debate over the merits of substituting polyunsaturated fats for saturated fats. Cardiovascular disease The effect of saturated fat on cardiovascular disease has been extensively studied. The general consensus is that there is moderate-quality evidence of a strong, consistent, and graded relationship between saturated fat intake, blood cholesterol levels, and the incidence of cardiovascular disease. The relationships are accepted as causal, including by many government and medical organizations. A 2017 review by the AHA estimated that replacement of saturated fat with polyunsaturated fat in the American diet could reduce the risk of cardiovascular diseases by 30%. The consumption of saturated fat is generally considered a risk factor for dyslipidemia—abnormal blood lipid levels, including high total cholesterol, high levels of triglycerides, high levels of low-density lipoprotein (LDL, "bad" cholesterol) or low levels of high-density lipoprotein (HDL, "good" cholesterol). These parameters in turn are believed to be risk indicators for some types of cardiovascular disease. These effects were observed in children too. Several meta-analyses (reviews and consolidations of multiple previously published experimental studies) have confirmed a significant relationship between saturated fat and high serum cholesterol levels, which in turn have been claimed to have a causal relation with increased risk of cardiovascular disease (the so-called lipid hypothesis). However, high cholesterol may be caused by many factors. Other indicators, such as a high LDL/HDL ratio, have proved to be more predictive. In a study of myocardial infarction in 52 countries, the ApoB/ApoA1 (related to LDL and HDL, respectively) ratio was the strongest predictor of CVD among all risk factors. There are other pathways involving obesity, triglyceride levels, insulin sensitivity, endothelial function, and thrombogenicity, among others, that play a role in CVD, although it seems, in the absence of an adverse blood lipid profile, the other known risk factors have only a weak atherogenic effect. Different saturated fatty acids have differing effects on various lipid levels.
Cancer
The evidence for a relation between saturated fat intake and cancer is significantly weaker, and there does not seem to be a clear medical consensus about it. Several reviews of case–control studies have found that saturated fat intake is associated with increased breast cancer risk. Another review found limited evidence for a positive relationship between consuming animal fat and the incidence of colorectal cancer. Other meta-analyses found evidence for an increased risk of ovarian cancer from high consumption of saturated fat. Some studies have indicated that serum myristic and palmitic acid, dietary myristic and palmitic saturated fatty acids, and serum palmitic acid combined with alpha-tocopherol supplementation are associated with increased risk of prostate cancer in a dose-dependent manner. These associations may, however, reflect differences in intake or metabolism of these fatty acids between the precancer cases and controls, rather than being an actual cause.

Bones
Various animal studies have indicated that the intake of saturated fat has a negative effect on the mineral density of bones. One study suggested that men may be particularly vulnerable.

Disposition and overall health
Studies have shown that substituting monounsaturated fatty acids for saturated ones is associated with increased daily physical activity and resting energy expenditure. More physical activity, less anger, and less irritability were associated with a higher-oleic-acid diet than with a palmitic-acid diet.

Monounsaturated vs. polyunsaturated fat
The most common fatty acids in the human diet are unsaturated or monounsaturated. Monounsaturated fats are found in animal flesh such as red meat, whole milk products, nuts, and high-fat fruits such as olives and avocados. Olive oil is about 75% monounsaturated fat. The high-oleic variety of sunflower oil contains at least 70% monounsaturated fat. Canola oil and cashews are both about 58% monounsaturated fat. Tallow (beef fat) is about 50% monounsaturated fat, and lard is about 40% monounsaturated fat. Other sources include hazelnut, avocado oil, macadamia nut oil, grapeseed oil, groundnut oil (peanut oil), sesame oil, corn oil, popcorn, whole grain wheat, cereal, oatmeal, almond oil, hemp oil, and tea-oil camellia. Polyunsaturated fatty acids can be found mostly in nuts, seeds, fish, seed oils, and oysters.

Insulin resistance and sensitivity
MUFAs (especially oleic acid) have been found to lower the incidence of insulin resistance, whereas PUFAs (especially large amounts of arachidonic acid) and SFAs (such as arachidic acid) increase it. These ratios can be indexed in the phospholipids of human skeletal muscle and in other tissues as well. This relationship between dietary fats and insulin resistance is presumed secondary to the relationship between insulin resistance and inflammation, which is partially modulated by dietary fat ratios (omega−3/6/9), with both omega−3 and −9 thought to be anti-inflammatory and omega−6 pro-inflammatory (as well as by numerous other dietary components, particularly polyphenols and exercise, both of which are anti-inflammatory). Although both pro- and anti-inflammatory types of fat are biologically necessary, fat dietary ratios in most US diets are skewed towards omega−6, with subsequent disinhibition of inflammation and potentiation of insulin resistance. This runs contrary to the suggestion that polyunsaturated fats protect against insulin resistance.
The large-scale KANWU study found that increasing MUFA and decreasing SFA intake could improve insulin sensitivity, but only when the overall fat intake of the diet was low. However, some MUFAs may promote insulin resistance (like the SFAs), whereas PUFAs may protect against it.

Cancer
Levels of oleic acid along with other MUFAs in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. MUFAs and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d). Results from observational clinical trials on PUFA intake and cancer have been inconsistent and vary by numerous factors of cancer incidence, including gender and genetic risk. Some studies have shown associations between higher intakes and/or blood levels of omega-3 PUFAs and a decreased risk of certain cancers, including breast and colorectal cancer, while other studies found no associations with cancer risk.

Pregnancy disorders
Polyunsaturated fat supplementation was found to have no effect on the incidence of pregnancy-related disorders, such as hypertension or preeclampsia, but may slightly increase the length of gestation and decrease the incidence of early premature births. Expert panels in the United States and Europe recommend that pregnant and lactating women consume higher amounts of polyunsaturated fats than the general population to enhance the DHA status of the fetus and newborn.

"Cis fat" vs. "trans fat"
In nature, unsaturated fatty acids generally have double bonds in the cis configuration (with the adjacent C–C bonds on the same side) as opposed to trans. Nevertheless, trans fatty acids (TFAs) occur in small amounts in the meat and milk of ruminants (such as cattle and sheep), typically 2–5% of total fat. Natural TFAs, which include conjugated linoleic acid (CLA) and vaccenic acid, originate in the rumen of these animals. CLA has two double bonds, one in the cis configuration and one in trans, which makes it simultaneously a cis- and a trans-fatty acid. The processing of fats by hydrogenation can convert some unsaturated fats into trans fats. The presence of trans fats in various processed foods has received much attention.

Omega-3 and omega-6 fatty acids
The ω−3 fatty acids have received substantial attention. Among omega-3 fatty acids, neither long-chain nor short-chain forms were consistently associated with breast cancer risk. High levels of docosahexaenoic acid (DHA), however, the most abundant omega-3 polyunsaturated fatty acid in erythrocyte (red blood cell) membranes, were associated with a reduced risk of breast cancer. The DHA obtained through the consumption of polyunsaturated fatty acids is positively associated with cognitive and behavioral performance. In addition, DHA is vital for the grey matter structure of the human brain, as well as retinal stimulation and neurotransmission.

Interesterification
Some studies have investigated the health effects of interesterified (IE) fats, by comparing diets with IE and non-IE fats with the same overall fatty acid composition. Several experimental studies in humans found no statistical difference in fasting blood lipids between a diet with large amounts of IE fat, having 25–40% C16:0 or C18:0 in the 2-position, and a similar diet with non-IE fat, having only 3–9% C16:0 or C18:0 in the 2-position.
A negative result was also obtained in a study that compared the effects on blood cholesterol levels of an IE fat product mimicking cocoa butter and the real non-IE product. A 2007 study funded by the Malaysian Palm Oil Board claimed that replacing natural palm oil with other interesterified or partially hydrogenated fats caused adverse health effects, such as a higher LDL/HDL ratio and higher plasma glucose levels. However, these effects could be attributed to the higher percentage of saturated acids in the IE and partially hydrogenated fats, rather than to the IE process itself.

Rancidification
Unsaturated fats undergo auto-oxidation, which involves the replacement of a C–H bond with a C–OH unit. The process requires oxygen (air) and is accelerated by the presence of traces of metals, which serve as catalysts. Doubly unsaturated fatty acids are particularly prone to this reaction. Vegetable oils resist this process to a small degree because they contain antioxidants, such as tocopherol. Fats and oils are often treated with chelating agents such as citric acid to remove the metal catalysts.

Role in disease
In the human body, high levels of triglycerides in the bloodstream have been linked to atherosclerosis, heart disease, and stroke. However, the relative negative impact of raised levels of triglycerides compared to that of LDL:HDL ratios is as yet unknown. The risk can be partly accounted for by a strong inverse relationship between triglyceride level and HDL-cholesterol level. But the risk is also due to high triglyceride levels increasing the quantity of small, dense LDL particles.

Guidelines
The National Cholesterol Education Program has set guidelines for triglyceride levels; these levels are tested after fasting 8 to 12 hours. Triglyceride levels remain temporarily higher for a period after eating. The AHA recommends an optimal triglyceride level of 100 mg/dL (1.1 mmol/L) or lower to improve heart health.

Fat digestion and metabolism
Fats are broken down in the healthy body to release their constituents, glycerol and fatty acids. Glycerol itself can be converted to glucose by the liver and so become a source of energy. Fats and other lipids are broken down in the body by enzymes called lipases produced in the pancreas. Many cell types can use either glucose or fatty acids as a source of energy for metabolism. In particular, heart and skeletal muscle prefer fatty acids. Despite long-standing assertions to the contrary, fatty acids can also be used as a source of fuel for brain cells through mitochondrial oxidation.

See also
Animal fat
Monounsaturated fat
Diet and heart disease
Fatty acid synthesis
Food composition data
Western pattern diet
Oil
Lipid
Fat
[ "Physics", "Chemistry" ]
5,167
[ "Molecules", "Macromolecules", "Matter" ]
11,062
https://en.wikipedia.org/wiki/Friction
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. Types of friction include dry, fluid, lubricated, skin, and internal friction; the list is not exhaustive. The study of the processes involved is called tribology, and has a history of more than 2000 years.

Friction can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Another important consequence of many types of friction can be wear, which may lead to performance degradation or damage to components. It is known that frictional energy losses account for about 20% of the total energy expenditure of the world.

As briefly discussed later, there are many different contributors to the retarding force in friction, ranging from asperity deformation to the generation of charges and changes in local structure. Friction is not itself a fundamental force; it is a non-conservative force, since work done against friction is path dependent. In the presence of friction, some mechanical energy is transformed to heat as well as to the free energy of the structural changes and other types of dissipation, so mechanical energy is not conserved. The complexity of the interactions involved makes the calculation of friction from first principles difficult, and it is often easier to use empirical methods for analysis and the development of theory.

Types
There are several types of friction:
Dry friction is a force that opposes the relative lateral motion of two solid surfaces in contact. Dry friction is subdivided into static friction ("stiction") between non-moving surfaces, and kinetic friction between moving surfaces. With the exception of atomic or molecular friction, dry friction generally arises from the interaction of surface features, known as asperities.
Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other.
Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces.
Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body.
Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.

History
Many ancient authors, including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction. They were aware of differences between static and kinetic friction, with Themistius stating in 350 that "it is easier to further the motion of a moving body than to move a body at rest".

The classic laws of sliding friction were discovered in 1493 by Leonardo da Vinci, a pioneer in tribology, but the laws documented in his notebooks were not published and remained unknown. These laws were rediscovered by Guillaume Amontons in 1699 and became known as Amontons' three laws of dry friction. Amontons presented the nature of friction in terms of surface irregularities and the force required to raise the weight pressing the surfaces together. This view was further elaborated by Bernard Forest de Bélidor and Leonhard Euler (1750), who derived the angle of repose of a weight on an inclined plane and first distinguished between static and kinetic friction. John Theophilus Desaguliers (1734) first recognized the role of adhesion in friction: microscopic forces cause surfaces to stick together, and he proposed that friction was the force necessary to tear the adhering surfaces apart.
The understanding of friction was further developed by Charles-Augustin de Coulomb (1785). Coulomb investigated the influence of four main factors on friction: the nature of the materials in contact and their surface coatings; the extent of the surface area; the normal pressure (or load); and the length of time that the surfaces remained in contact (time of repose). Coulomb further considered the influence of sliding velocity, temperature, and humidity, in order to decide between the different explanations of the nature of friction that had been proposed. The distinction between static and dynamic friction is made in Coulomb's friction law (see below), although this distinction was already drawn by Johann Andreas von Segner in 1758. The effect of the time of repose was explained by Pieter van Musschenbroek (1762) by considering the surfaces of fibrous materials, with fibers meshing together, which takes a finite time in which the friction increases.

John Leslie (1766–1832) noted a weakness in the views of Amontons and Coulomb: if friction arises from a weight being drawn up the inclined plane of successive asperities, then why is it not balanced through descending the opposite slope? Leslie was equally skeptical about the role of adhesion proposed by Desaguliers, which should on the whole have the same tendency to accelerate as to retard the motion. In Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.

In the long course of the development of the law of conservation of energy and of the first law of thermodynamics, friction was recognised as a mode of conversion of mechanical work into heat. In 1798, Benjamin Thompson reported on the heat generated during cannon boring experiments. Arthur Jules Morin (1833) developed the concept of sliding versus rolling friction. In 1842, Julius Robert Mayer frictionally generated heat in paper pulp and measured the temperature rise. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on the friction of an electric current passing through a resistor, and on the friction of a paddle wheel rotating in a vat of water. Osborne Reynolds (1866) derived the equation of viscous flow. This completed the classic empirical model of friction (static, kinetic, and fluid) commonly used today in engineering. In 1877, Fleeming Jenkin and J. A. Ewing investigated the continuity between static and kinetic friction.

In 1907, G.H. Bryan published an investigation of the foundations of thermodynamics, Thermodynamics: an Introductory Treatise dealing mainly with First Principles and their Direct Applications. He noted that for a rough body driven over a rough surface, the mechanical work done by the driver exceeds the mechanical work received by the surface. The lost work is accounted for by heat generated by friction. Over the years, for example in his 1879 thesis, but particularly in 1926, Planck advocated regarding the generation of heat by rubbing as the most specific way to define heat, and the prime example of an irreversible thermodynamic process.

The focus of research during the 20th century has been to understand the physical mechanisms behind friction. Frank Philip Bowden and David Tabor (1950) showed that, at a microscopic level, the actual area of contact between surfaces is a very small fraction of the apparent area.
This actual area of contact, caused by asperities, increases with pressure. The development of the atomic force microscope (ca. 1986) enabled scientists to study friction at the atomic scale, showing that, on that scale, dry friction is the product of the inter-surface shear stress and the contact area. These two discoveries explain Amontons' first law (below): the macroscopic proportionality between normal force and static frictional force between dry surfaces.

Laws of dry friction
The elementary properties of sliding (kinetic) friction were discovered by experiment in the 15th to 18th centuries and were expressed as three empirical laws:
Amontons' First Law: The force of friction is directly proportional to the applied load.
Amontons' Second Law: The force of friction is independent of the apparent area of contact.
Coulomb's Law of Friction: Kinetic friction is independent of the sliding velocity.

Dry friction
Dry friction resists relative lateral motion of two solid surfaces in contact. The two regimes of dry friction are 'static friction' ("stiction") between non-moving surfaces, and kinetic friction (sometimes called sliding friction or dynamic friction) between moving surfaces.

Coulomb friction, named after Charles-Augustin de Coulomb, is an approximate model used to calculate the force of dry friction. It is governed by the model
$F_f \leq \mu F_n$,
where:
$F_f$ is the force of friction exerted by each surface on the other. It is parallel to the surface, in a direction opposite to the net applied force.
$\mu$ is the coefficient of friction, which is an empirical property of the contacting materials.
$F_n$ is the normal force exerted by each surface on the other, directed perpendicular (normal) to the surface.

The Coulomb friction $F_f$ may take any value from zero up to $\mu F_n$, and the direction of the frictional force against a surface is opposite to the motion that surface would experience in the absence of friction. Thus, in the static case, the frictional force is exactly what it must be in order to prevent motion between the surfaces; it balances the net force tending to cause such motion. In this case, rather than providing an estimate of the actual frictional force, the Coulomb approximation provides a threshold value for this force, above which motion would commence. This maximum force is known as traction.

The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement. Note that it is not the direction of movement of the vehicle they oppose, it is the direction of (potential) sliding between tire and road.

Normal force
The normal force is defined as the net force compressing two parallel surfaces together, and its direction is perpendicular to the surfaces. In the simple case of a mass resting on a horizontal surface, the only component of the normal force is the force due to gravity, $F_n = mg$. In this case, conditions of equilibrium tell us that the magnitude of the friction force is zero, $F_f = 0$. In fact, the friction force always satisfies $F_f \leq \mu F_n$, with equality reached only at a critical ramp angle (given by $\theta = \arctan \mu$) that is steep enough to initiate sliding.
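As a concrete illustration of the threshold behaviour just described, here is a minimal Python sketch, under the Coulomb model, that decides whether a block on a ramp stays put or slides. The function name and the numeric inputs are illustrative; the physics is only the inequality $F_f \leq \mu_s F_n$ with $F_n = mg\cos\theta$ on an incline.

```python
import math

def block_on_ramp(mass_kg: float, angle_deg: float, mu_s: float, g: float = 9.81):
    """Apply the Coulomb model to a block resting on an incline.

    Returns the normal force, the gravity component along the slope,
    the maximum static friction, and whether sliding would commence.
    """
    theta = math.radians(angle_deg)
    normal = mass_kg * g * math.cos(theta)    # F_n = m g cos(theta)
    driving = mass_kg * g * math.sin(theta)   # force pulling the block downslope
    max_static = mu_s * normal                # friction threshold, mu_s * F_n
    slides = driving > max_static             # equivalent to tan(theta) > mu_s
    return normal, driving, max_static, slides

# Illustrative numbers: a 2 kg block, mu_s = 0.5 (critical angle ~26.6 degrees).
for angle in (15, 30):
    n, d, f_max, slides = block_on_ramp(2.0, angle, 0.5)
    state = "slides" if slides else "stays put"
    print(f"{angle:>2} deg: driving {d:.1f} N vs max static {f_max:.1f} N -> {state}")
```

Below the critical angle the required friction is less than the threshold and the block is in equilibrium; above it, the driving force exceeds $\mu_s F_n$ and sliding commences, exactly the threshold behaviour of the Coulomb approximation.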
The friction coefficient is an empirical (experimentally measured) structural property that depends only on various aspects of the contacting materials, such as surface roughness. The coefficient of friction is not a function of mass or volume. For instance, a large aluminum block has the same coefficient of friction as a small aluminum block. However, the magnitude of the friction force itself depends on the normal force, and hence on the mass of the block.

Depending on the situation, the calculation of the normal force might include forces other than gravity. If an object is on a level surface and subjected to an external force tending to cause it to slide, then the normal force between the object and the surface is just $F_n = W + P_y$, where $W$ is the block's weight and $P_y$ is the downward component of the external force. Prior to sliding, this friction force is $F_f = P_x$, where $P_x$ is the horizontal component of the external force. Thus, $F_f \leq \mu_s F_n$ in general. Sliding commences only after this frictional force reaches the value $F_f = \mu_s F_n$. Until then, friction is whatever it needs to be to provide equilibrium, so it can be treated as simply a reaction.

If the object is on a tilted surface such as an inclined plane, the normal force from gravity is smaller than $W$, because less of the force of gravity is perpendicular to the face of the plane. The normal force and the frictional force are ultimately determined using vector analysis, usually via a free body diagram.

In general, the process for solving any statics problem with friction is to treat contacting surfaces tentatively as immovable so that the corresponding tangential reaction force between them can be calculated. If this frictional reaction force satisfies $F_f \leq \mu_s F_n$, then the tentative assumption was correct, and it is the actual frictional force. Otherwise, the friction force must be set equal to $F_f = \mu_k F_n$, and the resulting force imbalance would then determine the acceleration associated with slipping.

Coefficient of friction
The coefficient of friction (COF), often symbolized by the Greek letter μ, is a dimensionless scalar value which equals the ratio of the force of friction between two bodies and the force pressing them together, either during or at the onset of slipping. The coefficient of friction depends on the materials used; for example, ice on steel has a low coefficient of friction, while rubber on pavement has a high coefficient of friction. Coefficients of friction range from near zero to greater than one. The coefficient of friction between two surfaces of similar metals is greater than that between two surfaces of different metals; for example, brass has a higher coefficient of friction when moved against brass, but less if moved against steel or aluminum.

For surfaces at rest relative to each other, $\mu = \mu_s$, where $\mu_s$ is the coefficient of static friction. This is usually larger than its kinetic counterpart. The coefficient of static friction exhibited by a pair of contacting surfaces depends upon the combined effects of material deformation characteristics and surface roughness, both of which have their origins in the chemical bonding between atoms in each of the bulk materials and between the material surfaces and any adsorbed material. The fractality of surfaces, a parameter describing the scaling behavior of surface asperities, is known to play an important role in determining the magnitude of the static friction.

For surfaces in relative motion, $\mu = \mu_k$, where $\mu_k$ is the coefficient of kinetic friction.
The Coulomb friction is then equal to $F_f = \mu_k F_n$, and the frictional force on each surface is exerted in the direction opposite to its motion relative to the other surface. Arthur Morin introduced the term and demonstrated the utility of the coefficient of friction. The coefficient of friction is an empirical measurement – it has to be measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher effective values. Both static and kinetic coefficients of friction depend on the pair of surfaces in contact; for a given pair of surfaces, the coefficient of static friction is usually larger than that of kinetic friction; in some sets the two coefficients are equal, such as teflon-on-teflon.

Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, an elusive property. Rubber in contact with other surfaces can yield friction coefficients from 1 to 2. Occasionally it is maintained that μ is always < 1, but this is not true. While in most relevant applications μ < 1, a value above 1 merely implies that the force required to slide an object along the surface is greater than the normal force of the surface on the object. For example, silicone rubber or acrylic rubber-coated surfaces have a coefficient of friction that can be substantially larger than 1.

While it is often stated that the COF is a "material property", it is better categorized as a "system property". Unlike true material properties (such as conductivity, dielectric constant, yield strength), the COF for any two materials depends on system variables like temperature, velocity, atmosphere, and also what are now popularly described as aging and deaging times, as well as on geometric properties of the interface between the materials, namely surface structure. For example, a copper pin sliding against a thick copper plate can have a COF that varies from 0.6 at low speeds (metal sliding against metal) to below 0.2 at high speeds when the copper surface begins to melt due to frictional heating. The latter speed, of course, does not determine the COF uniquely; if the pin diameter is increased so that the frictional heating is removed rapidly, the temperature drops, the pin remains solid and the COF rises to that of a 'low speed' test. In systems with significant non-uniform stress fields, because local slip occurs before the system slides, the macroscopic coefficient of static friction depends on the applied load, system size, or shape; Amontons' law is not satisfied macroscopically.

Approximate coefficients of friction
Under certain conditions some materials have very low friction coefficients. An example is (highly ordered pyrolytic) graphite, which can have a friction coefficient below 0.01. This ultralow-friction regime is called superlubricity.

Static friction
Static friction is friction between two or more solid objects that are not moving relative to each other. For example, static friction can prevent an object from sliding down a sloped surface. The coefficient of static friction, typically denoted as μs, is usually higher than the coefficient of kinetic friction. Static friction is considered to arise as the result of surface roughness features across multiple length scales at solid surfaces.
These features, known as asperities, are present down to nano-scale dimensions and result in true solid-to-solid contact existing only at a limited number of points, accounting for only a fraction of the apparent or nominal contact area. The linearity between applied load and true contact area, arising from asperity deformation, gives rise to the linearity between static frictional force and normal force, found for typical Amonton–Coulomb type friction.

The static friction force must be overcome by an applied force before an object can move. The maximum possible friction force between two surfaces before sliding begins is the product of the coefficient of static friction and the normal force: $F_{\max} = \mu_s F_n$. When there is no sliding occurring, the friction force can have any value from zero up to $F_{\max}$. Any force smaller than $F_{\max}$ attempting to slide one surface over the other is opposed by a frictional force of equal magnitude and opposite direction. Any force larger than $F_{\max}$ overcomes the force of static friction and causes sliding to occur. The instant sliding occurs, static friction is no longer applicable; the friction between the two surfaces is then called kinetic friction. However, an apparent static friction can be observed even in the case when the true static friction is zero.

An example of static friction is the force that prevents a car wheel from slipping as it rolls on the ground. Even though the wheel is in motion, the patch of the tire in contact with the ground is stationary relative to the ground, so it is static rather than kinetic friction. Upon slipping, the wheel friction changes to kinetic friction. An anti-lock braking system operates on the principle of allowing a locked wheel to resume rotating so that the car maintains static friction. The maximum value of static friction, when motion is impending, is sometimes referred to as limiting friction, although this term is not used universally.

Kinetic friction
Kinetic friction, also known as dynamic friction or sliding friction, occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as μk, and is usually less than the coefficient of static friction for the same materials. However, Richard Feynman comments that "with dry metals it is very hard to show any difference." The friction force between two surfaces after sliding begins is the product of the coefficient of kinetic friction and the normal force: $F_k = \mu_k F_n$. This is responsible for the Coulomb damping of an oscillating or vibrating system.

New models are beginning to show how kinetic friction can be greater than static friction. In many other cases roughness effects are dominant, for example in rubber-to-road friction. Surface roughness and contact area affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces.

The origin of kinetic friction at the nanoscale can be rationalized by an energy model. During sliding, a new surface forms at the back of a sliding true contact, and the existing surface disappears at the front of it. Since all surfaces involve the thermodynamic surface energy, work must be spent in creating the new surface, and energy is released as heat in removing the surface. Thus, a force is required to move the back of the contact, and frictional heat is released at the front.

Angle of friction
For certain applications, it is more useful to define static friction in terms of the maximum angle before which one of the items will begin sliding.
This is called the angle of friction or friction angle. It is defined as
$\tan\theta = \mu_s$,
and thus
$\theta = \arctan \mu_s$,
where $\theta$ is the angle from horizontal and $\mu_s$ is the static coefficient of friction between the objects. This formula can also be used to calculate $\mu_s$ from empirical measurements of the friction angle.

Friction at the atomic level
Determining the forces required to move atoms past each other is a challenge in designing nanomachines. In 2008 scientists for the first time were able to move a single atom across a surface, and measure the forces required. Using ultrahigh vacuum and nearly zero temperature (5 K), a modified atomic force microscope was used to drag a cobalt atom, and a carbon monoxide molecule, across surfaces of copper and platinum.

Limitations of the Coulomb model
The Coulomb approximation follows from the assumptions that: surfaces are in atomically close contact only over a small fraction of their overall area; that this contact area is proportional to the normal force (until saturation, which takes place when all area is in atomic contact); and that the frictional force is proportional to the applied normal force, independently of the contact area. The Coulomb approximation is fundamentally an empirical construct. It is a rule-of-thumb describing the approximate outcome of an extremely complicated physical interaction. The strength of the approximation is its simplicity and versatility. Though the relationship between normal force and frictional force is not exactly linear (and so the frictional force is not entirely independent of the contact area of the surfaces), the Coulomb approximation is an adequate representation of friction for the analysis of many physical systems.

When the surfaces are conjoined, Coulomb friction becomes a very poor approximation (for example, adhesive tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend strongly on the area of contact. Some drag racing tires are adhesive for this reason. However, despite the complexity of the fundamental physics behind friction, the relationships are accurate enough to be useful in many applications.

"Negative" coefficient of friction
As of 2012, a single study has demonstrated the potential for an effectively negative coefficient of friction in the low-load regime, meaning that a decrease in normal force leads to an increase in friction. This contradicts everyday experience in which an increase in normal force leads to an increase in friction. This was reported in the journal Nature in October 2012 and involved the friction encountered by an atomic force microscope stylus when dragged across a graphene sheet in the presence of graphene-adsorbed oxygen.

Numerical simulation of the Coulomb model
Despite being a simplified model of friction, the Coulomb model is useful in many numerical simulation applications such as multibody systems and granular material. Even its most simple expression encapsulates the fundamental effects of sticking and sliding which are required in many applied cases, although specific algorithms have to be designed in order to efficiently numerically integrate mechanical systems with Coulomb friction and bilateral or unilateral contact. Some quite nonlinear effects, such as the so-called Painlevé paradoxes, may be encountered with Coulomb friction.

Dry friction and instabilities
Dry friction can induce several types of instabilities in mechanical systems which display a stable behaviour in the absence of friction.
These instabilities may be caused by the decrease of the friction force with an increasing velocity of sliding, by material expansion due to heat generation during friction (the thermo-elastic instabilities), or by pure dynamic effects of sliding of two elastic materials (the Adams–Martins instabilities). The latter were originally discovered in 1995 by George G. Adams and João Arménio Correia Martins for smooth surfaces and were later found in periodic rough surfaces. In particular, friction-related dynamical instabilities are thought to be responsible for brake squeal and the 'song' of a glass harp, phenomena which involve stick and slip, modelled as a drop of friction coefficient with velocity. A practically important case is the self-oscillation of the strings of bowed instruments such as the violin, cello, hurdy-gurdy, erhu, etc. A connection between dry friction and flutter instability in a simple mechanical system has also been discovered. Frictional instabilities can lead to the formation of new self-organized patterns (or "secondary structures") at the sliding interface, such as in-situ formed tribofilms, which are utilized for the reduction of friction and wear in so-called self-lubricating materials.

Fluid friction
Fluid friction occurs between fluid layers that are moving relative to each other. This internal resistance to flow is named viscosity. In everyday terms, the viscosity of a fluid is described as its "thickness". Thus, water is "thin", having a lower viscosity, while honey is "thick", having a higher viscosity. The less viscous the fluid, the greater its ease of deformation or movement.

All real fluids (except superfluids) offer some resistance to shearing and therefore are viscous. For teaching and explanatory purposes it is helpful to use the concept of an inviscid fluid or an ideal fluid, which offers no resistance to shearing and so is not viscous.

Lubricated friction
Lubricated friction is a case of fluid friction where a fluid separates two solid surfaces. Lubrication is a technique employed to reduce wear of one or both surfaces in close proximity moving relative to one another by interposing a substance called a lubricant between the surfaces. In most cases the applied load is carried by pressure generated within the fluid due to the frictional viscous resistance to motion of the lubricating fluid between the surfaces. Adequate lubrication allows smooth continuous operation of equipment, with only mild wear, and without excessive stresses or seizures at bearings. When lubrication breaks down, metal or other components can rub destructively over each other, causing heat and possibly damage or failure.

Skin friction
Skin friction arises from the interaction between the fluid and the skin of the body, and is directly related to the area of the surface of the body that is in contact with the fluid. Skin friction follows the drag equation and rises with the square of the velocity. Skin friction is caused by viscous drag in the boundary layer around the object. There are two ways to decrease skin friction: the first is to shape the moving body so that smooth flow is possible, like an airfoil. The second method is to decrease the length and cross-section of the moving object as much as is practicable.

Internal friction
Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.
Plastic deformation in solids is an irreversible change in the internal molecular structure of an object. This change may be due to either (or both) an applied force or a change in temperature. The change of an object's shape is called strain. The force causing it is called stress.

Elastic deformation in solids is a reversible change in the internal molecular structure of an object. Stress does not necessarily cause permanent change. As deformation occurs, internal forces oppose the applied force. If the applied stress is not too large, these opposing forces may completely resist the applied force, allowing the object to assume a new equilibrium state and to return to its original shape when the force is removed. This is known as elastic deformation or elasticity.

Radiation friction
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction", which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."

Other types of friction
Rolling resistance
Rolling resistance is the force that resists the rolling of a wheel or other circular object along a surface, caused by deformations in the object or surface. Generally the force of rolling resistance is less than that associated with kinetic friction. Typical values for the coefficient of rolling resistance are 0.001. One of the most common examples of rolling resistance is the movement of motor vehicle tires on a road, a process which generates heat and sound as by-products.

Braking friction
Any wheel equipped with a brake is capable of generating a large retarding force, usually for the purpose of slowing and stopping a vehicle or piece of rotating machinery. Braking friction differs from rolling friction because the coefficient of friction for rolling friction is small, whereas the coefficient of friction for braking friction is designed to be large by choice of materials for brake pads.

Triboelectric effect
Rubbing two materials against each other can lead to charge transfer, either of electrons or of ions. The energy required for this contributes to the friction. In addition, sliding can cause a build-up of electrostatic charge, which can be hazardous if flammable gases or vapours are present. When the static build-up discharges, explosions can be caused by ignition of the flammable mixture.

Belt friction
Belt friction is a physical property observed from the forces acting on a belt wrapped around a pulley, when one end is being pulled. The resulting tension, which acts on both ends of the belt, can be modeled by the belt friction equation. In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a rig to know how many times the belt or rope must be wrapped around the pulley to prevent it from slipping.
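The belt friction equation referred to above is the capstan relation $T_{\text{load}} = T_{\text{hold}}\, e^{\mu_s \phi}$, where $\phi$ is the total wrap angle in radians. Here is a minimal sketch of the design calculation just described; the function names and the numeric figures are illustrative assumptions.

```python
import math

def holding_force(load_n: float, mu_s: float, wraps: float) -> float:
    """Force needed at the held end to restrain `load_n`, by the belt
    friction (capstan) equation: T_hold = T_load / exp(mu_s * phi)."""
    phi = 2.0 * math.pi * wraps  # total wrap angle in radians
    return load_n / math.exp(mu_s * phi)

def wraps_needed(load_n: float, hold_n: float, mu_s: float) -> float:
    """Number of full turns needed so that `hold_n` can restrain `load_n`."""
    return math.log(load_n / hold_n) / (mu_s * 2.0 * math.pi)

# Illustrative: a 2000 N load, rope on a pulley with mu_s = 0.3.
for wraps in (1, 2, 3):
    print(f"{wraps} wrap(s): hold with {holding_force(2000, 0.3, wraps):.1f} N")
print(f"Turns to hold 2000 N with 50 N: {wraps_needed(2000, 50, 0.3):.2f}")
```

Because the required holding force falls exponentially with the wrap angle, each extra turn of the rope multiplies the mechanical advantage, which is exactly why a few wraps around a capstan let a small hand force restrain a large load.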
Mountain climbers and sailing crews demonstrate a standard knowledge of belt friction when accomplishing basic tasks.

Reduction
Devices
Devices such as wheels, ball bearings, roller bearings, and air cushions or other types of fluid bearings can change sliding friction into a much smaller type of rolling friction. Many thermoplastic materials such as nylon, HDPE and PTFE are commonly used in low friction bearings. They are especially useful because the coefficient of friction falls with increasing imposed load. For improved wear resistance, very high molecular weight grades are usually specified for heavy duty or critical bearings.

Lubricants
A common way to reduce friction is by using a lubricant, such as oil, water, or grease, which is placed between the two surfaces, often dramatically lessening the coefficient of friction. The science of friction and lubrication is called tribology. Lubricant technology is the application of science to the development and use of lubricants, especially for industrial or commercial objectives. Superlubricity, a recently discovered effect, has been observed in graphite: it is the substantial decrease of friction between two sliding objects, approaching zero levels. A very small amount of frictional energy would still be dissipated. Lubricants to overcome friction need not always be thin, turbulent fluids or powdery solids such as graphite and talc; acoustic lubrication actually uses sound as a lubricant. Another way to reduce friction between two parts is to superimpose micro-scale vibration to one of the parts. This can be sinusoidal vibration as used in ultrasound-assisted cutting, or vibration noise, known as dither.

Energy of friction
According to the law of conservation of energy, no energy is destroyed due to friction, though it may be lost to the system of concern. Mechanical energy is transformed into heat. A sliding hockey puck comes to rest because friction converts its kinetic energy into heat, which raises the internal energy of the puck and the ice surface. Since heat quickly dissipates, many early philosophers, including Aristotle, wrongly concluded that moving objects come to rest spontaneously.

When an object is pushed along a surface along a path C, the energy converted to heat is given by a line integral, in accordance with the definition of work:
$E_{th} = -\int_C \mathbf{F}_{\mathrm{fric}}(\mathbf{x}) \cdot d\mathbf{x} = -\int_C \mu_k\, \mathbf{F}_n(\mathbf{x}) \cdot d\mathbf{x}$,
where
$\mathbf{F}_{\mathrm{fric}}$ is the friction force,
$\mathbf{F}_n$ is the vector obtained by multiplying the magnitude of the normal force by a unit vector pointing against the object's motion,
$\mu_k$ is the coefficient of kinetic friction, which is inside the integral because it may vary from location to location (e.g. if the material changes along the path),
$\mathbf{x}$ is the position of the object.
(The leading minus sign makes $E_{th}$ positive, since $\mathbf{F}_{\mathrm{fric}}$ points against the motion.)

Dissipation of energy by friction in a process is a classic example of thermodynamic irreversibility.

Work of friction
The work done by friction can translate into deformation, wear, and heat that can affect the contact surface properties (even the coefficient of friction between the surfaces). This can be beneficial, as in polishing. The work of friction is used to mix and join materials, such as in the process of friction welding. Excessive erosion or wear of mating sliding surfaces occurs when work due to frictional forces rises to unacceptable levels. Harder corrosion particles caught between mating surfaces in relative motion (fretting) exacerbate wear caused by frictional forces. As surfaces are worn by work due to friction, fit and surface finish of an object may degrade until it no longer functions properly.
For example, bearing seizure or failure may result from excessive wear due to work of friction.

In the reference frame of the interface between two surfaces, static friction does no work, because there is never displacement between the surfaces. In the same reference frame, kinetic friction is always in the direction opposite the motion, and does negative work. However, friction can do positive work in certain frames of reference. One can see this by placing a heavy box on a rug, then pulling on the rug quickly. In this case, the box slides backwards relative to the rug, but moves forward relative to the frame of reference in which the floor is stationary. Thus, the kinetic friction between the box and rug accelerates the box in the same direction that the box moves, doing positive work.

When sliding takes place between two rough bodies in contact, the algebraic sum of the works done is different from zero, and the algebraic sum of the quantities of heat gained by the two bodies is equal to the quantity of work lost by friction, and the total quantity of heat gained is positive. In a natural thermodynamic process, the work done by an agency in the surroundings of a thermodynamic system or working body is greater than the work received by the body, because of friction. Thermodynamic work is measured by changes in a body's state variables, sometimes called work-like variables, other than temperature and entropy. Examples of work-like variables, which are ordinary macroscopic physical variables and which occur in conjugate pairs, are pressure – volume, and electric field – electric polarization. Temperature and entropy are a specifically thermodynamic conjugate pair of state variables. They can be affected microscopically at an atomic level by mechanisms such as friction, thermal conduction, and radiation. The part of the work done by an agency in the surroundings that does not change the volume of the working body but is dissipated in friction is called isochoric work. It is received as heat by the working body, and sometimes partly by a body in the surroundings. It is not counted as thermodynamic work received by the working body.

Applications
Friction is an important factor in many engineering disciplines.

Transportation
Automobile brakes inherently rely on friction, slowing a vehicle by converting its kinetic energy into heat. Incidentally, dispersing this large amount of heat safely is one technical challenge in designing brake systems. Disk brakes rely on friction between a disc and brake pads that are squeezed transversely against the rotating disc. In drum brakes, brake shoes or pads are pressed outwards against a rotating cylinder (brake drum) to create friction. Since braking discs can be more efficiently cooled than drums, disc brakes have better stopping performance.
Rail adhesion refers to the grip the wheels of a train have on the rails; see Frictional contact mechanics.
Road slipperiness is an important design and safety factor for automobiles.
Split friction is a particularly dangerous condition arising due to varying friction on either side of a car.
Road texture affects the interaction of tires and the driving surface.

Measurement
A tribometer is an instrument that measures friction on a surface. A profilograph is a device used to measure pavement surface roughness.

Household usage
Friction is used to heat and ignite matchsticks (friction between the head of a matchstick and the rubbing surface of the match box).
Sticky pads are used to prevent objects from slipping off smooth surfaces by effectively increasing the friction coefficient between the surface and the object.

See also
Contact dynamics
Contact mechanics
Factor of adhesion
Friction Acoustics
Frictionless plane
Galling
Lateral adhesion
Non-smooth mechanics
Normal contact stiffness
Stick-slip phenomenon
Transient friction loading
Triboelectric effect
Unilateral contact
Friction torque

External links
Coefficients of Friction – tables of coefficients, plus many links
Measurement of friction power
Physclips: Mechanics with animations and video clips from the University of New South Wales
Values for Coefficient of Friction – CRC Handbook of Chemistry and Physics
Coefficients of friction of various material pairs in atmosphere and vacuum.
Friction
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
7,677
[ "Mechanical phenomena", "Tribology", "Physical phenomena", "Force", "Friction", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Surface science", "Materials science", "Mechanics", "Mechanical engineering", "Wikipedia categories named after physical quantities", "Matter...
11,090
https://en.wikipedia.org/wiki/Forest
A forest is an ecosystem characterized by a dense community of trees. Hundreds of definitions of forest are used throughout the world, incorporating factors such as tree density, tree height, land use, legal standing, and ecological function. The United Nations' Food and Agriculture Organization (FAO) defines a forest as, "Land spanning more than 0.5 hectares with trees higher than 5 meters and a canopy cover of more than 10 percent, or trees able to reach these thresholds in situ. It does not include land that is predominantly under agricultural or urban use." Using this definition, the Global Forest Resources Assessment 2020 found that forests covered 4.06 billion hectares, or approximately 31 percent of the world's land area, in 2020.

Forests are the largest terrestrial ecosystems of Earth by area, and are found around the globe. 45 percent of forest land is in the tropical latitudes. The next largest share of forests is found in subarctic climates, followed by temperate and subtropical zones. Forests account for 75% of the gross primary production of the Earth's biosphere, and contain 80% of the Earth's plant biomass. Net primary production is estimated at 21.9 gigatonnes of biomass per year for tropical forests, 8.1 for temperate forests, and 2.6 for boreal forests.

Forests form distinctly different biomes at different latitudes and elevations, and with different precipitation and evapotranspiration rates. These biomes include boreal forests in subarctic climates, tropical moist forests and tropical dry forests around the Equator, and temperate forests at the middle latitudes. Forests form in areas of the Earth with high rainfall, while drier conditions produce a transition to savanna. However, in areas with intermediate rainfall levels, forest transitions to savanna rapidly when the percentage of land that is covered by trees drops below 40 to 45 percent. Research conducted in the Amazon rainforest shows that trees can alter rainfall rates across a region, releasing water from their leaves in anticipation of seasonal rains to trigger the wet season early. Because of this, seasonal rainfall in the Amazon begins two to three months earlier than the climate would otherwise allow. Deforestation in the Amazon and anthropogenic climate change hold the potential to interfere with this process, causing the forest to pass a threshold where it transitions into savanna.

Deforestation threatens many forest ecosystems. Deforestation occurs when humans remove trees from a forested area by cutting or burning, either to harvest timber or to make way for farming. Most deforestation today occurs in tropical forests. The vast majority of this deforestation is due to the production of four commodities: wood, beef, soy, and palm oil. Over the past 2,000 years, the area of land covered by forest in Europe has been reduced from 80% to 34%. Large areas of forest have also been cleared in China and in the eastern United States, where only 0.1% of land was left undisturbed.

Almost half of Earth's forest area (49 percent) is relatively intact, while 9 percent is found in fragments with little or no connectivity. Tropical rainforests and boreal coniferous forests are the least fragmented, whereas subtropical dry forests and temperate oceanic forests are among the most fragmented. Roughly 80 percent of the world's forest area is found in patches larger than 1 million hectares. The remaining 20 percent is located in more than 34 million patches around the world – the vast majority less than 1,000 hectares in size.
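The FAO definition quoted above is effectively a three-threshold classification rule, and a minimal sketch of it in Python follows. The function name and the treatment of the "trees able to reach these thresholds in situ" clause (reduced here to a simple boolean flag) are illustrative assumptions, not part of the FAO text.

```python
def is_forest_fao(area_ha: float, tree_height_m: float, canopy_cover_pct: float,
                  can_reach_thresholds_in_situ: bool = False,
                  agricultural_or_urban: bool = False) -> bool:
    """Apply the FAO forest thresholds: >0.5 ha, trees >5 m, >10% canopy cover.

    Land predominantly under agricultural or urban use is excluded outright.
    The in-situ flag stands in for "trees able to reach these thresholds in situ".
    """
    if agricultural_or_urban:
        return False
    if area_ha <= 0.5:
        return False
    meets_now = tree_height_m > 5.0 and canopy_cover_pct > 10.0
    return meets_now or can_reach_thresholds_in_situ

# Illustrative checks:
print(is_forest_fao(2.0, 12.0, 40.0))                                    # True: closed woodland
print(is_forest_fao(0.3, 12.0, 40.0))                                    # False: patch too small
print(is_forest_fao(2.0, 1.0, 5.0, can_reach_thresholds_in_situ=True))   # True: young regrowth
print(is_forest_fao(2.0, 12.0, 40.0, agricultural_or_urban=True))        # False: orchard or city
```

Note how low the 10 percent canopy threshold is: it is one reason the FAO figure counts open, savanna-like tree cover that stricter land-cover definitions discussed later would exclude.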
Human society and forests can affect one another positively or negatively. Forests provide ecosystem services to humans and serve as tourist attractions. Forests can also affect people's health. Human activities, including unsustainable use of forest resources, can negatively affect forest ecosystems.

Definitions
Although the word forest is commonly used, there is no universally recognised precise definition, with more than 800 definitions of forest used around the world. Although a forest is usually defined by the presence of trees, under many definitions an area completely lacking trees may still be considered a forest if it grew trees in the past, will grow trees in the future, or was legally designated as a forest regardless of vegetation type.

There are three broad categories of definitions of forest in use: administrative, land use, and land cover. Administrative definitions are legal designations, and may not reflect the type of vegetation that grows upon the land; an area can be legally designated "forest" even if no trees grow on it. Land-use definitions are based on the primary purpose the land is used for. Under a land-use definition, any area used primarily for harvesting timber, including areas that have been cleared by harvesting, disease, fire, or for the construction of roads and infrastructure, is still defined as a forest, even if it contains no trees. Land-cover definitions define forests based upon the density of trees, the area of tree canopy cover, or the area of the land occupied by the cross-section of tree trunks (basal area) meeting a particular threshold. This type of definition depends upon the presence of trees sufficient to meet the threshold, or at least of immature trees that are expected to meet the threshold once they mature.

Under land-cover definitions, there is considerable variation on where the cutoff points are between a forest, woodland, and savanna. Under some definitions, an area must have very high levels of tree canopy cover, from 60% to 100%, to be considered a forest; this excludes woodlands and savannas, which have a lower canopy cover. Other definitions consider savannas to be a type of forest, and include all areas with tree canopies over 10%. Some areas covered with trees are legally defined as agricultural areas; for example, Norway spruce plantations under Austrian forest law, when the trees are being grown as Christmas trees and are below a certain height.

Etymology
The word forest derives from the Old French forest (also forès), denoting "forest, vast expanse covered by trees"; forest was first introduced into English as the word denoting wild land set aside for hunting, without necessarily having trees on the land. It was possibly a borrowing, probably via Frankish or Old High German, of the Medieval Latin foresta, denoting "open wood"; Carolingian scribes first used foresta in the capitularies of Charlemagne, specifically to denote the royal hunting grounds of the king. The word was not endemic to the Romance languages, e.g., native words for forest in the Romance languages derived from the Latin silva, which denoted "forest" and "wood(land)" (cf. the English sylva and sylvan; the Italian, Spanish, and Portuguese selva; the Romanian silvă; the Old French selve). Cognates of forest in Romance languages (e.g., the Italian foresta, Spanish and Portuguese floresta) are all ultimately derivations of the French word. The precise origin of Medieval Latin foresta is obscure.
Some authorities claim the word derives from the Late Latin phrase forestam silvam, denoting "the outer wood"; others claim the word is a Latinisation of the Frankish *forhist, denoting "forest, wooded country", which was assimilated to forestam silvam pursuant to the common practice of Frankish scribes. The Old High German forst, denoting "forest"; Middle Low German vorst, denoting "forest"; Old English fyrhþ, denoting "forest, woodland, game preserve, hunting ground" (English frith); and Old Norse fýri, denoting "coniferous forest", all of which derive from the Proto-Germanic *furhísa-, *furhíþija-, denoting "a fir-wood, coniferous forest" (from the Proto-Indo-European *perkwu-, denoting "a coniferous or mountain forest, wooded height"), all attest to the Frankish *forhist.

Uses of forest in English to denote any uninhabited and unenclosed area are presently considered archaic. The Norman rulers of England introduced the word as a legal term, as seen in Latin texts such as Magna Carta, to denote uncultivated land that was legally designated for hunting by feudal nobility (see royal forest). These hunting forests did not necessarily contain any trees. Because they often included significant areas of woodland, "forest" eventually came to connote woodland in general, regardless of tree density. By the beginning of the fourteenth century, English texts used the word in all three of its senses: common, legal, and archaic.

Other English words used to denote "an area with a high density of trees" are firth, frith, holt, weald, wold, wood, and woodland. Unlike forest, these are all derived from Old English and were not borrowed from another language. Some present classifications reserve woodland for denoting a locale with more open space between trees, and distinguish kinds of woodlands as open forests and closed forests, premised on their crown covers. Finally, sylva (plural sylvae or, less classically, sylvas) is a peculiar English spelling of the Latin silva, denoting a "woodland", and has precedent in English, including its plural forms. While its use as a synonym of forest, and as a Latinate word denoting a woodland, may be admitted, in a specific technical sense it is restricted to denoting the species of trees that comprise the woodlands of a region, as in its sense in the subject of silviculture. The resorting to sylva in English indicates more precisely the denotation that the use of forest intends.

Evolutionary history
The first known forests on Earth arose in the Middle Devonian (approximately 390 million years ago), with the evolution of cladoxylopsid plants like Calamophyton. Appearing in the Late Devonian, Archaeopteris was a plant that was both tree-like and fern-like, growing to considerable height. It quickly spread throughout the world, from the equator to subpolar latitudes. It is the first species known to cast shade due to its fronds and to form soil from its roots. Archaeopteris was deciduous, dropping its fronds onto the forest floor; the shade, soil, and forest duff from the dropped fronds created the early forest. The shed organic matter altered the freshwater environment, slowing its flow and providing food. This promoted freshwater fish.

Ecology
Forests account for 75% of the gross primary productivity of the Earth's biosphere, and contain 80% of the Earth's plant biomass. Biomass per unit area is high compared to other vegetation communities. Much of this biomass occurs below ground in the root systems and as partially decomposed plant detritus.
The woody component of a forest contains lignin, which is relatively slow to decompose compared with other organic materials such as cellulose and other carbohydrates. The world's forests contain about 606 gigatonnes of living biomass (above- and below-ground) and 59 gigatonnes of dead wood. The total biomass has decreased slightly since 1990, but biomass per unit area has increased. Forest ecosystems broadly differ based on climate; latitudes 10° north and south of the equator are mostly covered in tropical rainforest, and the latitudes between 53°N and 67°N have boreal forest. As a general rule, forests dominated by angiosperms (broadleaf forests) are more species-rich than those dominated by gymnosperms (conifer, montane, or needleleaf forests), although exceptions exist. The trees that form the principal structural and defining component of a forest may be of a great variety of species (as in tropical rainforests and temperate deciduous forests), or relatively few species over large areas (e.g., taiga and arid montane coniferous forests). The biodiversity of forests also encompasses shrubs, herbaceous plants, mosses, ferns, lichens, fungi, and a variety of animals. Trees rising up to in height add a vertical dimension to the area of land that can support plant and animal species, opening up numerous ecological niches for arboreal animal species, epiphytes, and various species that thrive under the regulated microclimate created under the canopy. Forests have intricate three-dimensional structures that increase in complexity with lower levels of disturbance and greater variety of tree species. The biodiversity of forests varies considerably according to factors such as forest type, geography, climate, and soils – in addition to human use. Most forest habitats in temperate regions support relatively few animal and plant species, and species that tend to have large geographical distributions, while the montane forests of Africa, South America, and Southeast Asia, and the lowland forests of Australia, coastal Brazil, the Caribbean islands, Central America, and insular Southeast Asia, have many species with small geographical distributions. Areas with dense human populations and intense agricultural land use, such as Europe, parts of Bangladesh, China, India, and North America, are less intact in terms of their biodiversity. Northern Africa, southern Australia, coastal Brazil, Madagascar, and South Africa are also identified as areas with striking losses in biodiversity intactness. Components A forest consists of many components that can be broadly divided into two categories: biotic (living) and abiotic (non-living). The living parts include trees, shrubs, vines, grasses and other herbaceous (non-woody) plants, mosses, algae, fungi, insects, mammals, birds, reptiles, amphibians, and microorganisms living on the plants and animals and in the soil, connected by mycorrhizal networks. Layers The main layers of all forest types are the forest floor, the understory, and the canopy. The emergent layer, above the canopy, exists in tropical rainforests. Each layer has a different set of plants and animals, depending upon the availability of sunlight, moisture, and food. The forest floor is covered in dead plant material such as fallen leaves and decomposing logs, which detritivores break down into new soil. The layer of decaying leaves that covers the soil is necessary for many insects to overwinter and for amphibians, birds, and other animals to shelter and forage for food.
Leaf litter also keeps the soil moist, stops erosion, and protects roots against extreme heat and cold. The fungal mycelium that helps form the mycorrhizal network transmits nutrients from decaying material to trees and other plants. The forest floor supports a variety of plants, ferns, grasses, and tree seedlings, as well as animals such as ants, amphibians, spiders, and millipedes. The understory is made up of bushes, shrubs, and young trees that are adapted to living in the shade of the canopy. The canopy is formed by the mass of intertwined branches, twigs, and leaves of mature trees. The crowns of the dominant trees receive most of the sunlight. This is the most productive part of the trees, where maximum food is produced. The canopy forms a shady, protective "umbrella" over the rest of the forest. The emergent layer, found in tropical rainforests, is composed of a few scattered trees that tower over the canopy. In botany and in countries such as Germany and Poland, a different classification of forest vegetation is often used: tree, shrub, herb, and moss layers (see stratification (vegetation)). Types Forests are classified in different ways and to different degrees of specificity. One such classification is in terms of the biomes in which they exist, combined with leaf longevity of the dominant species (whether they are evergreen or deciduous). Another distinction is whether the forests are composed predominantly of broadleaf trees, coniferous (needle-leaved) trees, or mixed. Boreal forests occupy the subarctic zone and are generally evergreen and coniferous. Temperate zones support both broadleaf deciduous forests (e.g., temperate deciduous forest) and evergreen coniferous forests (e.g., temperate coniferous forests and temperate rainforests). Warm temperate zones support broadleaf evergreen forests, including laurel forests. Tropical and subtropical forests include tropical and subtropical moist forests, tropical and subtropical dry forests, and tropical and subtropical coniferous forests. Forests are also classified according to physiognomy, based on their overall physical structure or developmental stage (e.g. old growth vs. second growth). Forests can also be classified more specifically based on the climate and the dominant tree species present, resulting in numerous different forest types (e.g., Ponderosa pine/Douglas fir forest). The number of trees in the world, according to a 2015 estimate, is 3 trillion, of which 1.4 trillion are in the tropics or sub-tropics, 0.6 trillion in the temperate zones, and 0.7 trillion in the coniferous boreal forests. The 2015 estimate is about eight times higher than previous estimates, and is based on tree densities measured on over 400,000 plots. It remains subject to a wide margin of error, not least because the samples are mainly from Europe and North America. Forests can also be classified according to the amount of human alteration. Old-growth forest contains mainly natural patterns of biodiversity in established seral patterns, and mainly species native to the region and habitat. In contrast, secondary forest is forest regrowing following timber harvest and may contain species originally from other regions or habitats. Different global forest classification systems have been proposed, but none has gained universal acceptance. UNEP-WCMC's forest category classification system is a simplification of other, more complex systems (e.g. UNESCO's forest and woodland 'subformations').
This system divides the world's forests into 26 major types, which reflect climatic zones as well as the principal types of trees. These 26 major types can be reclassified into 6 broader categories: temperate needleleaf; temperate broadleaf and mixed; tropical moist; tropical dry; sparse trees and parkland; and forest plantations. Each category is described in a separate section below. Temperate needleleaf Temperate needleleaf forests mostly occupy the higher latitudes of the Northern Hemisphere, as well as some warm temperate areas, especially on nutrient-poor or otherwise unfavourable soils. These forests are composed entirely, or nearly so, of coniferous species (Coniferophyta). In the Northern Hemisphere, pines (Pinus), spruces (Picea), larches (Larix), firs (Abies), Douglas firs (Pseudotsuga), and hemlocks (Tsuga) make up the canopy, but other taxa are also important. In the Southern Hemisphere, most coniferous trees (members of the Araucariaceae and Podocarpaceae) occur mixed with broadleaf species, and are classed as broadleaf-and-mixed forests. Temperate broadleaf and mixed Temperate broadleaf and mixed forests include a substantial component of trees of the Anthophyta group. They are generally characteristic of the warmer temperate latitudes, but extend to cool temperate ones, particularly in the southern hemisphere. They include such forest types as the mixed deciduous forests of the United States and their counterparts in China and Japan; the broadleaf evergreen rainforests of Japan, Chile, and Tasmania; the sclerophyllous forests of Australia, central Chile, the Mediterranean, and California; and the southern beech (Nothofagus) forests of Chile and New Zealand. Tropical moist There are many different types of tropical moist forest, including the lowland evergreen broad-leaf tropical rainforests, for example the várzea and igapó forests and the terra firme forests of the Amazon Basin; the peat swamp forests; the dipterocarp forests of Southeast Asia; and the high forests of the Congo Basin. Seasonal tropical forests, perhaps the best description for the colloquial term "jungle", typically range from the rainforest zone 10 degrees north or south of the equator, to the Tropic of Cancer and Tropic of Capricorn. Forests located on mountains are also included in this category, divided largely into upper and lower montane formations on the basis of the variation of physiognomy corresponding to changes in altitude. Tropical dry Tropical dry forests are characteristic of areas in the tropics affected by seasonal drought. The seasonality of rainfall is usually reflected in the deciduousness of the forest canopy, with most trees being leafless for several months of the year. Under some conditions, such as less fertile soils or less predictable drought regimes, the proportion of evergreen species increases and the forests are characterised as "sclerophyllous". Thorn forest, a dense forest of low stature with a high frequency of thorny or spiny species, is found where drought is prolonged, and especially where grazing animals are plentiful. On very poor soils, and especially where fire or herbivory are recurrent phenomena, savannas develop. Sparse trees and savanna Sparse trees and savanna are forests with sparse tree-canopy cover. They occur principally in areas of transition from forested to non-forested landscapes. The two major zones in which these ecosystems occur are in the boreal region and in the seasonally dry tropics.
At high latitudes, north of the main zone of boreal forestland, growing conditions are not adequate to maintain a continuously closed forest cover, so tree cover is both sparse and discontinuous. This vegetation is variously called open taiga, open lichen woodland, and forest tundra. A savanna is a mixed woodland–grassland ecosystem characterized by the trees being sufficiently widely spaced so that the canopy does not close. The open canopy allows sufficient light to reach the ground to support an unbroken herbaceous layer that consists primarily of grasses. Savannas maintain an open canopy despite a high tree density. Plantations Forest plantations are generally intended for the production of timber and pulpwood. Commonly mono-specific, planted with even spacing between the trees, and intensively managed, these forests are generally of less importance as habitat for native biodiversity than natural forests. Some are managed in ways that enhance their biodiversity protection functions and can provide ecosystem services such as nutrient capital maintenance, watershed and soil structure protection, and carbon storage. Area The annual net loss of forest area has decreased since 1990, but the world is not on track to meet the target of the United Nations Strategic Plan for Forests to increase forest area by 3 percent by 2030. While deforestation is taking place in some areas, new forests are being established through natural expansion or deliberate efforts in other areas. As a result, the net loss of forest area is less than the rate of deforestation; and it, too, is decreasing: from per year in the 1990s to per year during 2010–2020. In absolute terms, the global forest area decreased by between 1990 and 2020, which is an area about the size of Libya. Societal significance Ecosystem services Forests provide a diversity of ecosystem services, including:
- converting carbon dioxide into oxygen and biomass (a full-grown tree produces about of net oxygen per year);
- acting as a carbon sink, making them necessary for mitigating climate change;
- aiding in regulating climate (for example, research from 2017 shows that forests induce rainfall; if the forest is cut, it can lead to drought, and in the tropics to occupational heat stress of outdoor workers);
- purifying water;
- mitigating natural hazards such as floods;
- serving as a genetic reserve;
- serving as a source of lumber and as recreational areas;
- serving as a source of woodlands and trees for millions of people who depend almost entirely on forests for their essential fuelwood, food, and fodder needs.
Some researchers state that forests do not only provide benefits, but can in certain cases also impose costs on humans. Forests may impose an economic burden, diminish the enjoyment of natural areas, reduce the food-producing capacity of grazing land and cultivated land, reduce biodiversity, reduce available water for humans and wildlife, harbour dangerous or destructive wildlife, and act as reservoirs of human and livestock disease. An important consideration regarding carbon sequestration is that forests can turn from a carbon sink into a carbon source if plant diversity, density, or forest area decreases, as has been observed in different tropical forests. The typical tropical forest may become a carbon source by the 2060s. An assessment of European forests found early signs of carbon sink saturation, after decades of increasing strength.
The Intergovernmental Panel on Climate Change (IPCC) concluded that a combination of measures aimed at increasing forest carbon stocks and sustainable timber offtake will generate the largest carbon sequestration benefit. Forest-dependent people The term forest-dependent people describes any of a wide variety of livelihoods that depend on access to forests, products harvested from forests, or ecosystem services provided by forests, including those of Indigenous peoples dependent on forests. In India, approximately 22 percent of the population belongs to forest-dependent communities, which live in close proximity to forests and practice agroforestry as a principal part of their livelihood. People of Ghana who rely on timber and bushmeat harvested from forests and Indigenous peoples of the Amazon rainforest are also examples of forest-dependent people. Though forest-dependence by the more common definitions is statistically associated with poverty and rural livelihoods, elements of forest-dependence exist in communities with a wide range of characteristics. Generally, richer households derive more cash value from forest resources, whereas among poorer households, forest resources are more important for home consumption and increase community resilience. Indigenous peoples Forests are fundamental to the culture and livelihood of indigenous peoples who live in and depend on forests, many of whom have been removed from and denied access to the lands on which they lived as part of global colonialism. Indigenous lands contain 36% or more of intact forest worldwide, host more biodiversity, and experience less deforestation. Indigenous activists have argued that the degradation of forests and indigenous peoples' marginalization and land dispossession are interconnected. Other concerns among indigenous peoples include the lack of Indigenous involvement in forest management and the loss of knowledge related to the forest ecosystem. Since 2002, the amount of land that is legally owned by or designated for indigenous peoples has broadly increased, but land acquisition in lower-income countries by multinational corporations, often with little or no consultation of indigenous peoples, has also increased. Research in the Amazon rainforest suggests that indigenous methods of agroforestry form reservoirs of biodiversity. In the U.S. state of Wisconsin, forests managed by indigenous people have more plant diversity, fewer invasive species, higher tree regeneration rates, and a higher volume of trees. Management Forest management has changed considerably over the last few centuries, with rapid changes from the 1980s onward, culminating in a practice now referred to as sustainable forest management. Forest ecologists concentrate on forest patterns and processes, usually with the aim of elucidating cause-and-effect relationships. Foresters who practice sustainable forest management focus on the integration of ecological, social, and economic values, often in consultation with local communities and other stakeholders. Humans have generally decreased the amount of forest worldwide. Anthropogenic factors that can affect forests include logging, urban sprawl, human-caused forest fires, acid rain, invasive species, and the slash and burn practices of swidden agriculture or shifting cultivation. The loss and re-growth of forests lead to a distinction between two broad types of forest: primary or old-growth forest and secondary forest.
There are also many natural factors that can cause changes in forests over time, including forest fires, insects, diseases, weather, competition between species, etc. In 1997, the World Resources Institute recorded that only 20% of the world's original forests remained in large intact tracts of undisturbed forest. More than 75% of these intact forests lie in three countries: the boreal forests of Russia and Canada, and the rainforest of Brazil. According to the Food and Agriculture Organization's (FAO) Global Forest Resources Assessment 2020, an estimated of forest have been lost worldwide through deforestation since 1990, but the rate of forest loss has declined substantially. In the most recent five-year period (2015–2020), the annual rate of deforestation was estimated at , down from annually in 2010–2015. The forest transition The transition of a region from forest loss to net gain in forested land is referred to as the forest transition. This change occurs through a few main pathways, including an increase in commercial tree plantations, the adoption of agroforestry techniques by small farmers, or spontaneous regeneration when former agricultural land is abandoned. It can be motivated by the economic benefits of forests, the ecosystem services forests provide, or cultural changes where people increasingly appreciate forests for their spiritual, aesthetic, or otherwise intrinsic value. According to the Special Report on Global Warming of 1.5 °C of the Intergovernmental Panel on Climate Change, to avoid a temperature rise of more than 1.5 degrees above pre-industrial levels, there will need to be an increase in global forest cover equal to the land area of Canada by 2050. China instituted a ban on logging, beginning in 1998, due to the erosion and flooding that logging had caused. In addition, ambitious tree-planting programmes in countries such as China, India, the United States, and Vietnam – combined with natural expansion of forests in some regions – have added more than of new forests annually. As a result, the net loss of forest area was reduced to per year between 2000 and 2010, down from annually in the 1990s. In 2015, a study in Nature Climate Change showed that the trend had recently been reversed, leading to an "overall gain" in global biomass and forests. This gain is due especially to reforestation in China and Russia. However, new forests are not equivalent to old-growth forests in terms of species diversity, resilience, and carbon capture. On 7 September 2015, the FAO released a new study stating that over the last 25 years the global deforestation rate had decreased by 50% due to improved management of forests and greater government protection. There is an estimated of forest in protected areas worldwide. Of the six major world regions, South America has the highest share of forests in protected areas, at 31 percent. The global extent of such areas has increased by since 1990, but the rate of annual increase slowed in 2010–2020. Smaller areas of woodland in cities may be managed as urban forestry, sometimes within public parks. These are often created for human benefit; Attention Restoration Theory argues that spending time in nature reduces stress and improves health, while forest schools and kindergartens help young people to develop social as well as scientific skills in forests. These typically need to be close to where the children live. Canada Canada has about of forest land. More than 90% of forest land is publicly owned and about 50% of the total forest area is allocated for harvesting.
These allocated areas are managed using the principles of sustainable forest management, which include extensive consultation with local stakeholders. About eight percent of Canada's forest is legally protected from resource development. Much more forest land—about 40 percent of the total forest land base—is subject to varying degrees of protection through processes such as integrated land use planning or defined management areas, such as certified forests. By December 2006, over of forest land in Canada (about half the global total) had been certified as being sustainably managed. Clearcutting, first used in the latter half of the 20th century, is less expensive but devastating to the environment, and companies are required by law to ensure that harvested areas are adequately regenerated. Most Canadian provinces have regulations limiting the size of new clear-cuts, although some older ones grew to over several years. The Canadian Forest Service is the government department that looks after forests in Canada. Latvia Latvia has about of forest land, which equates to about 50.5% of Latvia's total area. Some 46% of the forest land is publicly owned and 54% is in private hands. Latvia's forests have been steadily increasing over the years, in contrast to those of many other nations, mostly due to the forestation of land not used for agriculture. In 1935, there were only of forest; today this area has increased by more than 150%. Birch is the most common tree at 28.2%, followed by pine (26.9%), spruce (18.3%), grey alder (9.7%), aspen (8.0%), black alder (5.7%), and oak/ash (1.2%), with other hardwood trees making up the rest (2.0%). United States In the United States, most forests have historically been affected by humans to some degree, though in recent years improved forestry practices have helped regulate or moderate large-scale impacts. The United States Forest Service estimated a net loss of about between 1997 and 2020; this estimate includes conversion of forest land to other uses, including urban and suburban development, as well as afforestation and natural reversion of abandoned crop and pasture land to forest. In many areas of the United States, the area of forest is stable or increasing, particularly in many northern states. The opposite problem from flooding, wildfire, has plagued national forests, with loggers complaining that a lack of thinning and proper forest management has resulted in large forest fires.
Forest
[ "Biology" ]
6,866
[ "Forests", "Symbiosis", "Ecosystems" ]
11,100
https://en.wikipedia.org/wiki/Kite
A kite is a tethered heavier-than-air or lighter-than-air craft with wing surfaces that react against the air to create lift and drag forces. A kite consists of wings, tethers and anchors. Kites often have a bridle and tail to guide the face of the kite so the wind can lift it. Some kite designs do not need a bridle; box kites can have a single attachment point. A kite may have fixed or moving anchors that can balance the kite. The name is derived from the kite, the hovering bird of prey. There are several shapes of kites. The lift that sustains the kite in flight is generated when air moves around the kite's surface, producing low pressure above and high pressure below the wings. The interaction with the wind also generates horizontal drag along the direction of the wind. The resultant force vector from the lift and drag force components is opposed by the tension of one or more of the lines or tethers to which the kite is attached (a worked sketch of this force balance appears below, after the section on materials). The anchor point of the kite line may be static or moving (e.g., the towing of a kite by a running person, boat, or vehicle, or free-falling anchors as in paragliders and fugitive parakites). The same principles of fluid flow apply in liquids, so kites can be used in underwater currents. Paravanes and otter boards operate underwater on an analogous principle. Man-lifting kites were made for reconnaissance and entertainment, and during development of the first practical aircraft, the biplane. Kites have a long and varied history and many different types are flown individually and at festivals worldwide. Kites may be flown for recreation, art or other practical uses. Sport kites can be flown in aerial ballet, sometimes as part of a competition. Power kites are multi-line steerable kites designed to generate large forces which can be used to power activities such as kite surfing, kite landboarding, kite buggying and snow kiting. History Kites were invented in Asia, though their exact origin can only be speculated upon. The oldest depiction of a kite is from a mesolithic-period cave painting on Muna island, southeast Sulawesi, Indonesia, which has been dated to 9500–9000 BC. It depicts a type of kite called kaghati, which is still used by the modern Muna people. The kite is made from kolope (forest tuber) leaf for the mainsail, bamboo skin as the frame, and twisted forest pineapple fiber as rope, though modern kites use string. In China, the kite has been claimed as the invention of the 5th-century BC Chinese philosophers Mozi (also Mo Di, or Mo Ti) and Lu Ban (also Gongshu Ban, or Kungshu Phan). Materials ideal for kite building were readily available, including silk fabric for sail material; fine, high-tensile-strength silk for flying line; and resilient bamboo for a strong, lightweight framework. By 549 AD, paper kites were certainly being flown, as it was recorded that in that year a paper kite was used to carry a message for a rescue mission. Ancient and medieval Chinese sources describe kites being used for measuring distances, testing the wind, lifting men, signaling, and communication for military operations. The earliest known Chinese kites were flat (not bowed) and often rectangular. Later, tailless kites incorporated a stabilizing bowline. Kites were decorated with mythological motifs and legendary figures; some were fitted with strings and whistles to make musical sounds while flying.
After its introduction into India, the kite further evolved into the fighter kite, known as the patang in India, where thousands are flown every year on festivals such as Makar Sankranti. Kites were known throughout Polynesia, as far as New Zealand, with the assumption being that the knowledge diffused from China along with the people. Anthropomorphic kites made from cloth and wood were used in religious ceremonies to send prayers to the gods. Polynesian kite traditions are used by anthropologists to get an idea of early "primitive" traditions that are believed to have at one time existed in Asia. Kites were late to arrive in Europe, although windsock-like banners were known and used by the Romans. Stories of kites were first brought to Europe by Marco Polo towards the end of the 13th century, and kites were brought back by sailors from Japan and Malaysia in the 16th and 17th centuries. Konrad Kyeser described dragon kites in Bellifortis about 1400 AD. Although kites were initially regarded as mere curiosities, by the 18th and 19th centuries they were being used as vehicles for scientific research. In 1752, Benjamin Franklin published an account of a kite experiment to prove that lightning was caused by electricity. Kites were also instrumental in the research of the Wright brothers, and others, as they developed the first airplane in the late 1800s. Several different designs of man-lifting kites were developed. The period from 1860 to about 1910 became the European "golden age of kiting". In the 20th century, many new kite designs were developed. These included Eddy's tailless diamond, the tetrahedral kite, the Rogallo wing, the sled kite, the parafoil, and power kites. Kites were used for scientific purposes, especially in meteorology, aeronautics, wireless communications and photography. The Rogallo wing was adapted for stunt kites and hang gliding, and the parafoil was adapted for parachuting and paragliding. The rapid development of mechanically powered aircraft diminished interest in kites. World War II saw a limited use of kites for military purposes (survival radio, Focke Achgelis Fa 330, military radio antenna kites). Kites are now mostly used for recreation. Lightweight synthetic materials (ripstop nylon, plastic film, carbon fiber tube and rod) are used for kite making. Synthetic rope and cord (nylon, polyethylene, kevlar and dyneema) are used as bridle and kite line. Materials Designs often emulate flying insects, birds, and other beasts, both real and mythical. The finest Chinese kites are made from split bamboo (usually golden bamboo), covered with silk, and hand painted. On larger kites, clever hinges and latches allow the kite to be disassembled and compactly folded for storage or transport. Cheaper mass-produced kites are often made from printed polyester rather than silk. Tails are used for some single-line kite designs to keep the kite's nose pointing into the wind. Spinners and spinsocks can be attached to the flying line for visual effect. There are rotating wind socks which spin like a turbine. On large display kites these tails, spinners and spinsocks can be long or more. Modern aerobatic kites use two or four lines to allow fine control of the kite's angle to the wind. Traction kites may have an additional line to de-power the kite and quick-release mechanisms to disengage flyer and kite in an emergency.
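Before turning to practical uses, the force balance described at the start of this article can be made concrete. The following is a minimal sketch using the standard aerodynamic force expressions; the symbols (air density ρ, wind speed v, sail area A, lift and drag coefficients C_L and C_D, kite weight W) are generic conventions introduced here for illustration, not notation taken from this article:

$$L = \tfrac{1}{2}\rho v^2 C_L A, \qquad D = \tfrac{1}{2}\rho v^2 C_D A.$$

In steady flight from a fixed anchor, the line tension must balance whatever remains of the aerodynamic force after the kite's weight is subtracted, so its magnitude is approximately

$$T = \sqrt{(L - W)^2 + D^2},$$

neglecting the weight and drag of the line itself. The kite stays aloft as long as $L > W$, and the ratio $C_L/C_D$ sets the angle of the line, hence how high the kite rides for a given length of line.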
Practical uses Kites have been used for human flight, military applications, science and meteorology, photography, lifting radio antennas, generating power, aerodynamics experiments, and much more. Military applications Kites have been used for military purposes in the past, such as signaling, delivery of ammunition, and observation, both by lifting an observer above the field of battle and by using kite aerial photography. Kites were first used in warfare by the Chinese. During the Song dynasty the Fire Crow, a kite carrying incendiary powder, a fuse, and a burning stick of incense, was developed as a weapon. According to the Samguk Sagi, in 647 Kim Yu-sin, a Korean general of Silla, rallied his troops to defeat rebels by using flaming kites, which also frightened the enemy. Russian chronicles mention Prince Oleg of Novgorod's use of kites during the siege of Constantinople in 906: "and he crafted horses and men of paper, armed and gilded, and lifted them into the air over the city; the Greeks saw them and feared them". Walter de Milemete's 1326 De nobilitatibus, sapientiis, et prudentiis regum treatise depicts a group of knights flying a kite laden with a black-powder-filled firebomb over the wall of a city. Kites were also used by Admiral Yi of the Joseon Dynasty (1392–1910) of Korea. During the Japanese invasions of Korea (1592–1598), Admiral Yi commanded his navy using kites. His kites had specific markings directing his fleet to perform various orders. In the modern era the British Army used kites to haul human lookouts into the air for observation purposes, using the kites developed by Samuel Franklin Cody. Barrage kites were used to protect shipping during the Second World War. Kites were also used for anti-aircraft target practice. Kites and kytoons were used for lofting communications antennas. Submarines lofted observers in rotary kites. Palestinians from the Gaza Strip have flown firebomb kites over the Israel–Gaza barrier, setting fires on the Israeli side of the border; hundreds of dunams of Israeli crop fields were burned by firebomb kites launched from Gaza, with an estimated economic loss of several million shekels. Science and meteorology Kites have been used for scientific purposes, such as Benjamin Franklin's famous experiment proving that lightning is electricity. Kites were the precursors to the traditional aircraft, and were instrumental in the development of early flying craft. Alexander Graham Bell experimented with very large man-lifting kites, as did the Wright brothers and Lawrence Hargrave. Kites had a historical role in lifting scientific instruments to measure atmospheric conditions for weather forecasting. Francis Ronalds and William Radcliffe Birt described a very stable kite at Kew Observatory as early as 1847 that was trialled for the purpose of supporting self-registering meteorological instruments at height. Radio aerials and light beacons Kites can be used for radio purposes, carrying antennas for MF, LF or VLF transmitters. This method was used for the reception station of the first transatlantic transmission by Marconi. Captive balloons may be more convenient for such experiments, because kite-carried antennas require a lot of wind, which may not always be possible with heavy equipment and a ground conductor.
It must be taken into account during experiments that a conductor carried by a kite can lead to a high voltage toward ground, which can endanger people and equipment if suitable precautions (grounding through resistors or a parallel resonant circuit tuned to the transmission frequency) are not taken. Kites can be used to carry light effects such as lightsticks or battery-powered lights. Kite traction Kites can be used to pull people and vehicles downwind. Efficient foil-type kites such as power kites can also be used to sail upwind under the same principles as used by other sailing craft, provided that lateral forces on the ground or in the water are redirected as with the keels, center boards, wheels and ice blades of traditional sailing craft. In the last two decades several kite sailing sports have become popular, such as kite buggying, kite land boarding, kite boating and kite surfing. Snow kiting has also become popular in recent years. Kite sailing opens several possibilities not available in traditional sailing:
- Wind speeds are greater at higher altitudes.
- Kites may be maneuvered dynamically, which increases the force available dramatically.
- There is no need for mechanical structures to withstand bending forces; vehicles or hulls can be very light or dispensed with altogether.
Electricity generation Computer-controlled kites can serve as a method of electricity generation when windmills are impractical (a rough upper-bound power estimate is sketched at the end of this article). Several companies have introduced self-contained crates and shipping containers that provide an alternative to gas-powered generators for remote locations. Such systems use a combination of autonomous, self-launching kites for generation and batteries to store excess power for when winds are low or when draw otherwise exceeds supply. Some designs are tethered to long lines to reach high-altitude winds, which are more consistently present even when ground-level winds are unavailable or insufficient. Underwater kites Underwater kites are now being developed to harvest renewable power from the flow of water. A kite was used in minesweeping operations from the First World War: this was a foil "attached to a sweep-wire submerging it to the requisite depth when it is towed over a minefield" (OED, 2021). See also paravane. Cultural uses Kite festivals are a popular form of entertainment throughout the world. They include large local events, traditional festivals which have been held for hundreds of years, and major international festivals which bring in kite flyers from other countries to display their unique art kites and demonstrate the latest technical kites. Many countries have kite museums. These museums may have a focus on historical kites, preserving the country's kite traditions. Asia Kite flying is popular in many Asian countries, where it often takes the form of "kite fighting", in which participants try to snag each other's kites or cut other kites down. Fighter kites are usually small, flattened diamond-shaped kites made of paper and bamboo. Tails are not used on fighter kites so that agility and maneuverability are not compromised. In Afghanistan, kite flying is a popular game, and is known in Dari as Gudiparan Bazi. Some kite fighters pass their strings through a mixture of ground glass powder and glue, which is legal. The resulting strings are very abrasive and can sever the competitor's strings more easily. The abrasive strings can also injure people. During the Taliban rule in Afghanistan, kite flying was banned, among various other recreations.
In Pakistan, kite flying is often known as Gudi-Bazi or Patang-bazi. Although kite flying is a popular ritual for the celebration of the spring festival known as Jashn-e-Baharaan (lit. Spring Festival) or Basant, kites are flown throughout the year. Kite fighting is a very popular pastime all around Pakistan, but mostly in urban centers across the country (especially Lahore). The kite fights are at their highest during the spring celebrations, and the fighters enjoy competing with rivals to cut loose the string of the other's kite, a practice popularly known as "Paecha". During the spring festival, kite flying competitions are held across the country and the skies are colored with kites. When a competitor succeeds in cutting another's kite loose, shouts of 'wo kata' ring through the air. Cut kites are reclaimed by chasing after them. This is a popular ritual, especially among the country's youth, and is depicted in the 2007 film The Kite Runner (although that story is based in neighboring Afghanistan). Kites and strings are a big business in the country and several different types of string are used, including glass-coated, metal, and tandi. Kite flying was banned in Punjab, India, due to more than one motorcyclist death caused by glass-coated or metal kite strings. Kup, Patang, Guda, and Nakhlaoo are some of the popular kite brands; they vary in balance, weight and speed. In Indonesia, kites are flown as both sport and recreation. One of the most popular kite variants is from Bali. Balinese kites are unique; they have different designs and forms: birds, butterflies, dragons, ships, etc. In Vietnam, kites are flown without tails. Instead, small flutes are attached, allowing the wind to "hum" a musical tune. There are other forms of sound-making kites. In Bali, large bows are attached to the front of the kites to make a deep throbbing vibration, and in Malaysia, a row of gourds with sound-slots are used to create a whistle as the kite flies. Malaysia is also home to the Kite Museum in Malacca. Kites are also popular in Nepal, especially in hilly areas and among the Pahadi and Newar communities, although people also fly kites in the Terai areas. Unlike in India, people in Nepal fly kites in the August–September period, and the activity is most popular around the time of Dashain. Kites are very popular in India, with the states of Gujarat, Bihar, Uttar Pradesh, Rajasthan, Haryana and Punjab notable for their kite fighting festivals. Highly maneuverable single-string paper and bamboo kites are flown from the rooftops while using line friction in an attempt to cut each other's kite lines, either by letting the cutting line loose at high speed or by pulling the line in a fast and repeated manner. During the Indian spring festival of Makar Sankranti, near the middle of January, millions of people fly kites all over northern India. Kite flying in Hyderabad starts a month before this, but kite flying/fighting is an important part of other celebrations, including Republic Day, Independence Day, Raksha Bandhan, Viswakarma Puja day in late September and Janmashtami. An international kite festival is held every year before Uttarayan for three days in Vadodara, Surat and Ahmedabad. Kites have been flown in China since ancient times. Weifang is home to the largest kite museum in the world. It also hosts an annual international kite festival on the large salt flats south of the city. There are several kite museums in Japan, the UK, Malaysia, Indonesia, Taiwan, Thailand and the USA. In the pre-modern period, Malays in Singapore used kites for fishing.
In Japan, kite flying is traditionally a children's play in the New Year holidays and in the Boys' Festival in May. In some areas, there is a tradition of celebrating a new baby boy with a new kite (祝い凧). There are many kite festivals throughout Japan. The most famous one is the "Yōkaichi Giant Kite Festival" in Higashiōmi, Shiga, which started in 1841. The largest kite ever built in the festival is wide by high and weighs . In the Hamamatsu Kite Festival in Hamamatsu, Shizuoka, more than 100 kites are flown in the sky over the Nakatajima Sand Dunes, one of the three largest sand dunes in Japan, which overlooks the Enshunada Sea. Parents who have a new baby prepare a new kite with their baby's name and fly it in the festival. These kites are traditional ones made from bamboo and paper. Europe In Greece and Cyprus, flying kites is a tradition for Clean Monday, the first day of Lent. In the British Overseas Territory of Bermuda, traditional Bermuda kites are made and flown at Easter, to symbolise Christ's ascent. In Fuerteventura, a kite festival lasting three days is usually held on the weekend nearest to 8 November. Polynesia Traditional Polynesian kites are sometimes used at ceremonies, and variants of traditional kites are flown for amusement. Older pieces are kept in museums. These are treasured by the people of Polynesia. South America In Brazil, flying a kite is a very popular leisure activity for children, teenagers and even young adults. Mostly these are boys, and the activity is overwhelmingly kite fighting, a game whose goal is to maneuver one's own kite to cut the other person's kite string during flight, followed by kite running, in which participants race through the streets to claim the free-drifting kites. As in other countries with similar traditions, injuries are common and motorcyclists in particular need to take precautions. In Chile, kites are very popular, especially during Independence Day festivities (September 18). In Peru, kites are also very popular. There are kite festivals in parks and on beaches, mostly in August. In Colombia, kites can be seen flown in parks and recreation areas during August, which is known as a windy month. It is during this month that most people, especially the young, fly kites. In Guyana, kites are flown at Easter, an activity in which all ethnic and religious groups participate. Kites are generally not flown at any other time of year. Kites start appearing in the sky in the weeks leading up to Easter and school children are taken to parks for the activity. It all culminates in a massive airborne celebration on Easter Monday, especially in Georgetown, the capital, and other coastal areas. The history of the practice is not entirely clear but, given that Easter is a Christian festival, it is said that kite flying is symbolic of the Risen Lord. The exact origins of the practice of kite flying (exclusively) at Easter are unclear; Bridget Brereton and Kevin Yelvington speculate that kite flying was introduced by Chinese indentured immigrants to the then colony of British Guiana in the mid 19th century. World records There are many world records involving kites. The world's largest kites are inflatable single-line kites. The world record for the largest kite flown for at least 20 minutes is held by "The Flag of Kuwait".
The world record for most kites flown simultaneously was achieved in 2011, when 12,350 kites were flown by children on Al-Waha beach in the Gaza Strip. The single-kite altitude record is held by a triangular-box delta kite. On 23 September 2014, a team led by Robert Moore flew a kite to above ground level. The record altitude was reached after eight series of attempts over a ten-year period from a remote location in western New South Wales, Australia. The tall and wide Dunton-Taylor delta kite's flight was controlled by a winch system using of ultra-high-strength Dyneema line. The flight took about eight hours from launch to return. The height was measured with on-board GPS telemetry transmitting positional data in real time to a ground-based computer, with back-up GPS data loggers for later analysis. In popular culture The Kite Runner, a 2005 novel by Khaled Hosseini, dramatizes the role of kite fighting in pre-war Kabul. The Peanuts cartoon character Charlie Brown was often depicted having flown his kite into a tree as a metaphor for life's adversities. "Let's Go Fly a Kite" is a song from the Mary Poppins film and musical. In the Disney animated film Mulan, kites are flown in the parade. In the film Shooter, a kite is used to show the wind direction and wind velocity. "Kite" is a 1978 song celebrating kite flying and appears on Kate Bush's first album, The Kick Inside. General safety issues There are safety issues involved in kite-flying. Kite lines can strike and tangle on electrical power lines, causing power blackouts and running the risk of electrocuting the kite flier. Wet kite lines or wire can act as a conductor for static electricity and lightning when the weather is stormy. Kites with a large surface area or powerful lift can lift kite fliers off the ground or drag them into other objects. In urban areas there is usually a ceiling on how high a kite can be flown, to prevent the kite and line infringing on the airspace of helicopters and light aircraft. It is also possible for fighter kites to kill people, as happened in India when three spectators were killed in separate incidents during Independence Day in August 2016, precipitating a ban on certain types of enhanced line. The government of Egypt banned kite-flying in July 2020, seizing 369 kites in Cairo and 99 in Alexandria, citing both safety and national security concerns. Designs
- Bermuda kite
- Bowed kite, e.g. Rokkaku
- Cellular or box kite
- Chapi-chapi
- Delta kite
- Foil, parafoil or bow kite
- Leading edge inflatable kite
- Malay kite (see also wau bulan, the Moon kite)
- Tetrahedral kite
- Sled kite
Types
- Fighter kite
- Indoor kite
- Inflatable single-line kite
- Kytoon – a hybrid tethered craft comprising both a lighter-than-air balloon and a kite lifting surface
- Man-lifting kite
- Rogallo parawing kite
- Stunt (sport) kite
- Water kite
See also
- Airborne wind turbine, a concept for a wind generator flown as a kite
- Captive helicopter
- Captive plane
- High altitude wind power
- Kite aerial photography
- Kite buggying
- Kite fishing
- Kite ice skating
- Kite landboarding
- Kite shape
- Kiteboating
- Kitelife, an American magazine devoted to kites
- Kitesurfing
- Kite rig
- List of kite festivals
- Sea Tails, video installation
- Solar balloon, a solar-heated hot air balloon that can be flown like a kite, but on windless days
- Uttarayan, the kite flying festival of western India
- Weifang International Kite Festival
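Returning to the electricity-generation application described above: the scale of crosswind kite power can be illustrated with Loyd's classic upper bound, P = (2/27) ρ A v³ C_L (C_L/C_D)². The sketch below is illustrative only; the formula is Loyd's (1980) theoretical limit, and every parameter value is a hypothetical assumption rather than data from any real system.

    # Rough upper-bound estimate of crosswind kite power (Loyd, 1980).
    # All parameter values below are illustrative assumptions.
    RHO_AIR = 1.225  # air density at sea level, kg/m^3

    def loyd_crosswind_power(wind_speed: float, kite_area: float,
                             c_lift: float, c_drag: float) -> float:
        """Ideal crosswind power in watts:
        P = (2/27) * rho * A * v^3 * CL * (CL/CD)^2.
        A theoretical ceiling; real systems lose power to tether drag,
        control, and the retraction phase of pumping cycles.
        """
        return (2 / 27) * RHO_AIR * kite_area * wind_speed ** 3 \
            * c_lift * (c_lift / c_drag) ** 2

    # A hypothetical 20 m^2 foil kite in a 10 m/s wind with CL = 1.0, CD = 0.1:
    power = loyd_crosswind_power(10.0, 20.0, 1.0, 0.1)
    print(f"Ideal crosswind power: {power / 1000:.0f} kW")  # ~181 kW

Even this toy calculation shows why the approach attracts interest: the cubic dependence on wind speed and the squared glide ratio reward flying fast, efficient kites in the stronger winds found at altitude.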
Kite
[ "Engineering" ]
5,143
[ "Aircraft configurations", "Aerospace engineering" ]
11,109
https://en.wikipedia.org/wiki/F%C3%A9lix%20Guattari
Pierre-Félix Guattari (30 March 1930 – 29 August 1992) was a French psychoanalyst, political philosopher, semiotician, social activist, and screenwriter. He co-founded schizoanalysis with Gilles Deleuze and ecosophy with Arne Næss, and is best known for his literary and philosophical collaborations with Deleuze, most notably Anti-Oedipus (1972) and A Thousand Plateaus (1980), the two volumes of their theoretical work Capitalism and Schizophrenia. Biography Clinic of La Borde Guattari was born in Villeneuve-les-Sablons, a working-class suburb of northwest Paris, France. His father was a factory manager, and Guattari was engaged in Trotskyist political activism as a teenager before studying and training under (and being analyzed by) the French psychoanalyst Jacques Lacan in the early 1950s. Subsequently, he worked all his life at the experimental psychiatric clinic of La Borde under the direction of Lacan's pupil, the psychiatrist Jean Oury. He first met Oury at a private psychiatric clinic in Saumery in the Loire region at the suggestion of Oury's brother Fernand, who had been Guattari's high school teacher. Guattari followed Oury to La Borde in 1955, two years after it had been established. La Borde was a venue for conversation among many students of philosophy, psychology, ethnology, and social work. One particularly novel orientation developed at La Borde consisted of the suspension of the classical analyst/analysand pair in favour of an open confrontation in group therapy. In contrast to the Freudian school's individualistic style of analysis, this practice studied the dynamics of several subjects in complex interaction. It led Guattari into a broader philosophical exploration of, and political engagement with, a vast array of intellectual and cultural domains (philosophy, ethnology, linguistics, architecture, etc.). 1960s–1970s From 1955 to 1965 Guattari edited and contributed to La Voie Communiste (Communist Way), a Trotskyist newspaper. He supported anti-colonialist struggles as well as the Italian Autonomists. Guattari also took part in the G.T.P.S.I., which gathered many psychiatrists at the beginning of the 1960s and created the Association of Institutional Psychotherapy in November 1965. It was at the same time that he founded, along with other militants, the F.G.E.R.I. (Federation of Groups for Institutional Study & Research) and its review Recherches (Research), working on philosophy, psychoanalysis, ethnology, education, mathematics, architecture, etc. The F.G.E.R.I. came to represent aspects of Guattari's multiple political and cultural engagements: the Group for Young Hispanics, the Franco-Chinese Friendships (in the times of the people's communes), opposition activities against the wars in Algeria and Vietnam, the participation in the M.N.E.F. and the U.N.E.F., the policy of the offices of psychological academic aid (B.A.P.U.), the organization of the University Working Groups (G.T.U.), but also the reorganization of the training courses with the Centers of Training in the Methods of Education Activities (C.E.M.E.A.) for psychiatric male nurses, the formation of a Fellowship of Nurses (Amicales d'infirmiers) in 1958, studies on architecture, and projects for the construction of a day hospital for "students and young workers". In 1967, he appeared as one of the founders of OSARLA (Organization of Solidarity and Aid to the Latin-American Revolution). In 1968, Guattari met Daniel Cohn-Bendit, Jean-Jacques Lebel, and Julian Beck.
He was involved in the large-scale French protests of May 1968, starting from the Movement of 22 March. It was in the aftermath of 1968 that Guattari met Gilles Deleuze at the University of Vincennes. He then began to lay the groundwork for Anti-Oedipus (1972), which Michel Foucault described as "an introduction to the non-fascist life" in his preface to the book. In 1970, he created CERFI (the Centre d'études, de recherches et de formation institutionnelles), which developed the approach explored in the Recherches journal. In 1973, Guattari was tried and fined for committing an "outrage to public decency" for publishing an issue of Recherches on homosexuality. In 1977, he created the CINEL for "new spaces of freedom" before joining the environmental movement with his "ecosophy" in the 1980s. 1980s to 1990s Guattari viewed the primary commodity produced under capitalism as subjectivity itself. According to Guattari, producing consuming subjects with novel desires satisfiable through the continuing purchase of commodities and experiences is the precondition to creating a consumer society. In his last book, Chaosmosis (1992), Guattari returned to the question of subjectivity: "How to produce it, collect it, enrich it, reinvent it permanently in order to make it compatible with mutant Universes of value?" This concern runs through all of his works, from Psychoanalysis and Transversality (a collection of articles from 1957 to 1972), through Years of Winter (1980–1986) and Schizoanalytic Cartographies (1989), to his collaboration with Deleuze, What is Philosophy? (1991). In Chaosmosis, Guattari proposes an analysis of subjectivity in terms of four functors: (1) material, energetic, and semiotic fluxes; (2) concrete and abstract machinic phyla; (3) virtual universes of value; and (4) finite existential territories. This scheme attempts to grasp the heterogeneity of components involved in the production of subjectivity, as Guattari understands it, which include both signifying semiotic components as well as "a-signifying semiological dimensions" (which work "in parallel or independently of" any signifying function that they may have). Death and posthumous publications On 29 August 1992, two weeks after an interview for Greek television conducted by Yiorgos Veltsos, Guattari died at La Borde of a heart attack. In 1995, the posthumously published collection Chaosophy gathered essays and interviews concerning Guattari's work as director of the experimental La Borde clinic and his collaborations with Deleuze. The collection includes essays such as "Balance-Sheet Program for Desiring Machines," cosigned by Deleuze (with whom he had coauthored Anti-Oedipus and A Thousand Plateaus), and "Everybody Wants To Be a Fascist." It provides an introduction to Guattari's theories on "schizoanalysis", a process that develops Sigmund Freud's psychoanalysis but which pursues a more experimental and collective approach towards analysis. In 1996, another collection of Guattari's essays, lectures, and interviews, Soft Subversions, was published, which traces the development of his thought and activity throughout the 1980s ("the winter years"). His analyses of art, cinema, youth culture, economics, and power formations develop concepts such as "micropolitics," "schizoanalysis," and "becoming-woman," which aim to liberate subjectivity and open up new horizons for political and creative resistance to the standardizing and homogenizing processes of global capitalism (which he calls "Integrated World Capitalism") in the "post-media era."
For example, he used the term "micropolitics" to delimit a certain level of observation of social practices (the unconscious economy, where there is a certain flexibility in the expression of desire and institution) and, practically, to define, in a segregated world, the field of intervention of "people who work to interest themselves in the discourse of the other." Works Translated into English Deleuze, Gilles and Félix Guattari. 1972. Anti-Oedipus. Trans. Robert Hurley, Mark Seem and Helen R. Lane. London and New York: Continuum, 2004. Vol. 1 of Capitalism and Schizophrenia. 2 vols. 1972–1980. Trans. of L'Anti-Oedipe. Paris: Les Editions de Minuit. . 1975. Kafka: Toward a Minor Literature. Trans. Dana Polan. Theory and History of Literature 30. Minneapolis and London: U of Minnesota P, 1986. Trans. of Kafka: pour une littérature mineure. Paris: Les Editions de Minuit. . 1980. A Thousand Plateaus. Trans. Brian Massumi. London and New York: Continuum, 2004. Vol. 2 of Capitalism and Schizophrenia. 2 vols. 1972–1980. Trans. of Mille plateaux. Paris: Les Editions de Minuit. . 1991. What Is Philosophy?. Trans. Graham Burchell and Hugh Tomlinson. London and New York: Verso, 1994. Trans. of Qu'est-ce que la philosophie?. Paris: Les Editions de Minuit. . 1979. The Machinic Unconscious: Essays in Schizoanalysis. Trans. Taylor Adkins. Los Angeles, CA: Semiotext(e), 2011. Trans. of L'inconscient machinique: Essais de schizo-analyse. Paris: Recherches. 1977. Molecular Revolution: Psychiatry and Politics. Trans. Rosemary Sheed. Harmondsworth: Penguin, 1984. . 1989a. Schizoanalytic Cartographies. Trans Andrew Goffey. London and New York: Bloomsbury, 2013. Trans. of Cartographies schizoanalytiques. Paris: Editions Galilée . 1989b. The Three Ecologies. Trans. Ian Pindar and Paul Sutton. London and New York: Continuum, 2000. Trans. of Les trois écologies. Paris: Editions Galilée. . 1992. Chaosmosis: An Ethico-Aesthetic Paradigm. Trans. Paul Bains and Julian Pefanis. Bloomington and Indianapolis: Indiana UP, 1995. Trans. of Chaosmose. Paris: Editions Galilee. . 1995. Chaosophy (Texts and Interviews 1972 to 1977 ). Ed. Sylvère Lotringer. Semiotext(e) Foreign Agents Ser. New York: Semiotext(e). . 1996. Soft Subversions (Texts and Interviews 1977 to 1985). Ed. Sylvère Lotringer. Trans. David L. Sweet and Chet Wiener. Semiotext(e) Foreign Agents Ser. New York: Semiotext(e). . 1996. The Guattari Reader. Ed. Gary Genosko. Blackwell Readers ser. Oxford and Cambridge, MA: Blackwell. . 2006. The Anti-Oedipus Papers. Ed. Stéphane Nadaud. Trans. Kélina Gotman. New York: Semiotext(e). . 2015. Lines of Flight: For Another World of Possibilities. Bloomsbury Academic. . 2015. Machinic Eros: Writings on Japan. Eds. Gary Genosko and Jay Hetrick. Univocal Publishing. . 2015. Psychoanalysis and Transversality: Texts and Interviews 1955–1971. Trans. Ames Hodges. MIT Press. 2016. A Love of UIQ. Trans. Graeme Thomson and Silvia Maglioni. Minneapolis: University of Minnesota Press. Guattari, Félix, and Luiz Inácio Lula da Silva. 2003. The Party Without Bosses: Lessons on Anti-Capitalism From Guattari and Lula. Ed. Gary Genosko. Arbeiter Ring Publishing. . Guattari, Félix and Toni Negri. 1985. Communists Like Us: New Spaces of Liberty, New Lines of Alliance. Trans. Michael Ryan. Semiotext(e) Foreign Agents Ser. New York: Semiotext(e), 1990. Trans. of Nouvelles espaces de liberté. Paris: Bedon. . Guattari, Félix, and Suely Rolnik. 1986. Molecular Revolution in Brazil. New York: Semiotext(e), 2008. Trans. 
Untranslated

Note: Many of the essays found in these works have been individually translated and can be found in the English collections.

La révolution moléculaire (1977, 1980). The 1980 version (éditions 10/18) contains substantially different essays from the 1977 version.
Les années d'hiver, 1980–1985 (1986).

Other collaborations:

L'intervention institutionnelle (Paris: Petite Bibliothèque Payot, n. 382, 1980). On institutional pedagogy. With Jacques Ardoino, G. Lapassade, Gerard Mendel, and Rene Lourau.
Pratique de l'institutionnel et politique (1985). With Jean Oury and Francois Tosquelles.
Desiderio e rivoluzione. Intervista a cura di Paolo Bertetto (Milan: Squilibri, 1977). Conversation with Franco Berardi (Bifo) and Paolo Bertetto.

See also

Anti-psychiatry
Critical perspectives on psychoanalysis
Criticism of capitalism
Deinstitutionalisation
Deleuze and Guattari
History of capitalism

References

External links

https://wikimania.wikimedia.org/wiki/User:Fr%C3%A9quence_Paris_Plurielle_FPP_radio_issus_de_la_mouvance_Autonome_de_F%C3%A9lix_Guattari#link_esterno
Fractal Ontology (with unpublished English translations of Guattari and others)
Chimeres site on Guattari (in French)
"Desire Was Everywhere" by Adam Shatz, London Review of Books, Vol. 32 No. 24, 16 December 2010
Félix Guattari
[ "Engineering", "Biology" ]
3,007
[ "Biopolitics", "Genetic engineering" ]
11,123
https://en.wikipedia.org/wiki/Fornax
Fornax is a constellation in the southern celestial hemisphere, partly ringed by the celestial river Eridanus. Its name is Latin for furnace. It was named by the French astronomer Nicolas Louis de Lacaille in 1756. Fornax is one of the 88 modern constellations. The three brightest stars—Alpha, Beta and Nu Fornacis—form a flattened triangle facing south. With an apparent magnitude of 3.91, Alpha Fornacis is the brightest star in Fornax. Six star systems have been found to have exoplanets. The Fornax Dwarf galaxy is a small, faint satellite galaxy of the Milky Way. NGC 1316 is a relatively close radio galaxy. The Hubble Ultra-Deep Field is located within Fornax. It is the 41st largest constellation in the night sky, occupying an area of 398 square degrees. It lies in the first quadrant of the southern hemisphere (SQ1) and can be seen at latitudes between +50° and −90° during the month of December.

History

The French astronomer Nicolas Louis de Lacaille first described the constellation in French as le Fourneau Chymique (the Chemical Furnace) with an alembic and receiver in his early catalogue, before abbreviating it to le Fourneau on his planisphere in 1752, after he had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. Lacaille Latinised the name to Fornax Chimiae on his 1763 chart.

Characteristics

The constellation Eridanus borders Fornax to the east, north and south, while Cetus, Sculptor and Phoenix gird it to the north, west and south respectively. Covering 397.5 square degrees and 0.964% of the night sky, it ranks 41st of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "For". The official constellation boundaries, as set by the Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 8 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −23.76° and −39.58°. The whole constellation is visible to observers south of latitude 50°N.

Features

Stars

Lacaille gave Bayer designations to 27 stars now named Alpha to Omega Fornacis, labelling two stars 3.5 degrees apart as Gamma, three stars Eta, two stars Iota, two Lambda and three Chi. Phi Fornacis was added by Gould, and Theta and Omicron were dropped by Gould and Baily respectively. Upsilon, too, was later found to be two stars and designated as such. Overall, there are 59 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5, though none brighter than the fourth magnitude. The three brightest stars form a flattish triangle, with Alpha (also called Dalim) and Nu Fornacis marking its eastern and western points and Beta Fornacis marking the shallow southern apex. Originally designated 12 Eridani by John Flamsteed, Alpha Fornacis was named by Lacaille as the brightest star in the new constellation. It is a binary star that can be resolved by small amateur telescopes. With an apparent magnitude of 3.91, the primary is a yellow-white subgiant 1.21 times as massive as the Sun that has begun to cool and expand after exhausting its core hydrogen, having swollen to 1.9 times the Sun's radius. Of magnitude 6.5, the secondary star is 0.78 times as massive as the Sun. It has been identified as a blue straggler, and has either accumulated material from, or merged with, a third star in the past. It is a strong source of X-rays. The pair is 46.4 ± 0.3 light-years distant from Earth.
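The magnitude and distance quoted above make a convenient worked example: the distance-modulus relation M = m − 5·log10(d / 10 pc) converts an apparent magnitude m into an absolute magnitude M. A minimal sketch in Python, using the Alpha Fornacis figures from the text (it ignores interstellar extinction, which is negligible at this distance):

    import math

    # Distance-modulus relation: M = m - 5*log10(d_pc / 10)
    m = 3.91                 # apparent magnitude of Alpha Fornacis (from the text)
    d_ly = 46.4              # distance in light-years (from the text)
    d_pc = d_ly / 3.2616     # 1 parsec = 3.2616 light-years

    M = m - 5 * math.log10(d_pc / 10)
    print(f"absolute magnitude = {M:.2f}")   # about 3.14

At roughly 3.1, the system is about 1.7 magnitudes (a factor of nearly five in luminosity) brighter than the Sun, whose absolute magnitude is 4.83.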
Beta Fornacis is a yellow-hued giant star of spectral type G8IIIb of magnitude 4.5 that has cooled and swelled to 11 times the Sun's diameter, 178 ± 2 light-years from Earth. It is a red clump giant, which means it has undergone the helium flash and is currently generating energy through the fusion of helium at its core. Nu Fornacis is 370 ± 10 light-years distant from Earth. It is a blue giant star of spectral type B9.5IIIspSi that is 3.65 ± 0.18 times as massive and around 245 times as luminous as the Sun, with 3.2 ± 0.4 times its diameter. It varies in luminosity over a period of 1.89 days—the same as its rotational period. This is because of differences in the abundances of metals in its atmosphere; it belongs to a class of stars known as Alpha2 Canum Venaticorum variables. Shining with an apparent magnitude of 5.89, Epsilon Fornacis is a binary star system located 104.4 ± 0.3 light-years distant from Earth. Its component stars orbit each other every 37 years. The primary star is around 12 billion years old and has cooled and expanded to 2.53 times the diameter of the Sun, while having only 91% of its mass. Omega Fornacis is a binary star system composed of a blue main-sequence star of spectral type B9.5V and magnitude 4.96, and a white main-sequence star of spectral type A7V and magnitude 7.88. The system is 470 ± 10 light-years distant from Earth. Kappa Fornacis is a triple star system composed of a yellow giant and a pair of red dwarfs. R Fornacis is a long-period variable and carbon star. LP 944-20 is a brown dwarf of spectral type M9 that has around 7% of the mass of the Sun. Approximately 21 light-years distant from Earth, it is a faint object with an apparent magnitude of 18.69. Observations published in 2007 showed that the atmosphere of LP 944-20 contains much lithium and that it has dusty clouds. Smaller and less luminous still is 2MASS 0243-2453, a T-type brown dwarf of spectral type T6. With a surface temperature of 1040–1100 K, it has 2.4–4.1% of the mass of the Sun, a diameter 9.2 to 10.6% of that of the Sun, and an age of 0.4–1.7 billion years.

Six star systems in Fornax have been found to have planets:

Lambda2 Fornacis is a star about 1.2 times as massive as the Sun with a planet about as massive as Neptune, discovered by Doppler spectroscopy in 2009. The planet orbits in around 17.24 days.
HD 20868 is an orange dwarf with a mass around 78% that of the Sun, 151 ± 10 light-years away from Earth. It was found to have an orbiting planet approximately double the mass of Jupiter with a period of 380 days.
WASP-72 is a star around 1.4 times as massive as the Sun that has begun to cool and expand off the main sequence, reaching double the Sun's diameter. It has a planet around as massive as Jupiter orbiting it every 2.2 days.
HD 20781 and HD 20782 are a pair of sunlike yellow main-sequence stars that orbit each other. Each has been found to have planets.
HR 858 is a star near naked-eye visibility in Fornax, 31.3 parsecs away. In May 2019, it was announced to have at least three exoplanets, observed by the transit method with the Transiting Exoplanet Survey Satellite.
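For close-in planets such as these, Kepler's third law in solar units, a³ = M·P² (a in astronomical units, M in solar masses, P in years), gives the rough size of an orbit from its period. A short illustrative sketch using the Lambda2 Fornacis figures above (the planet's own mass is neglected, which is safe for a Neptune-mass companion):

    # Kepler's third law in solar units: a^3 = M * P^2
    M_star = 1.2                  # stellar mass in solar masses (from the text)
    P_days = 17.24                # orbital period in days (from the text)
    P_years = P_days / 365.25

    a_au = (M_star * P_years ** 2) ** (1 / 3)
    print(f"semi-major axis = {a_au:.2f} AU")   # about 0.14 AU

An orbit of about 0.14 AU lies well inside that of Mercury, typical of the short-period planets that Doppler and transit surveys detect most easily.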
Deep-sky objects

Local Group

NGC 1049 is a globular cluster 500,000 light-years from Earth; it lies within the Fornax Dwarf galaxy. NGC 1360 is a planetary nebula in Fornax with a magnitude of approximately 9.0, 1,280 light-years from Earth. Its central star, of magnitude 11.4, is an unusually bright specimen. At 6.5 arcminutes across, the nebula is five times the size of the famed Ring Nebula in Lyra; unlike the Ring Nebula, NGC 1360 is clearly elliptical. The Fornax Dwarf galaxy is a dwarf galaxy that is part of the Local Group of galaxies. It is not visible in amateur telescopes, despite its relatively small distance of 500,000 light-years. The Helmi stream is a small stellar stream that passes through Fornax, the remnant of a small galaxy disrupted by the Milky Way about 6 billion years ago. One of its stars, HIP 13044, was once thought to host a candidate extragalactic planet, HIP 13044 b.

Outside the Local Group

NGC 1097 is a barred spiral galaxy in Fornax, about 45 million light-years from Earth. At magnitude 9, it is visible in medium amateur telescopes. It is notable as a Seyfert galaxy, with strong spectral emissions indicating ionized gases and a central supermassive black hole.

Fornax Cluster

The Fornax Cluster is a cluster of galaxies lying at a distance of 19 megaparsecs (62 million light-years). It is the second richest galaxy cluster within 100 million light-years, after the considerably larger Virgo Cluster, and may be associated with the nearby Eridanus Group. It lies primarily in the constellation Fornax, with its southern boundaries partially crossing into the constellation of Eridanus, and covers an area of sky about 6° across, or about 28 square degrees. The Fornax Cluster is part of the larger Fornax Wall. Some notable objects in this cluster are described below.

NGC 1365 is another barred spiral galaxy, located at a distance of 56 million light-years from Earth. Like NGC 1097, it is also a Seyfert galaxy. Its bar is a center of star formation and shows extensions of the spiral arms' dust lanes. The bright nucleus indicates the presence of an active galactic nucleus – a galaxy with a supermassive black hole at the center, accreting matter from the bar. It is a 10th-magnitude galaxy associated with the Fornax Cluster. Fornax A is a radio galaxy with extensive radio lobes that corresponds to the optical galaxy NGC 1316, a 9th-magnitude galaxy. One of the closer active galaxies to Earth at a distance of 62 million light-years, Fornax A appears in the optical spectrum as a large elliptical galaxy with dust lanes near its core. These dust lanes have led astronomers to conclude that it recently merged with a small spiral galaxy. Because it has a high rate of type Ia supernovae, NGC 1316 has been used to determine the size of the universe. The jets producing the radio lobes are not particularly powerful, giving the lobes a more diffuse, knotted structure due to interactions with the intergalactic medium. Associated with this peculiar galaxy is an entire cluster of galaxies. NGC 1399 is a large elliptical galaxy in the southern constellation Fornax, the central galaxy of the Fornax Cluster, 66 million light-years away from Earth. With a diameter of 130,000 light-years, it is one of the largest galaxies in the cluster and slightly larger than the Milky Way. John Herschel discovered this galaxy on October 22, 1835. NGC 1386 is a spiral galaxy located in the constellation Eridanus, at a distance of about 53 million light-years from Earth, with apparent dimensions of 3.89′ × 1.35′. It is a Seyfert galaxy, the only one in the Fornax Cluster. NGC 1427A is an irregular galaxy in the constellation Eridanus. Its distance modulus has been estimated, using the globular cluster luminosity function, to be 31.01 ± 0.21, which corresponds to about 52 million light-years. It is the brightest dwarf irregular member of the Fornax Cluster and lies in the foreground of the cluster's central galaxy NGC 1399.
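The distance modulus μ quoted for NGC 1427A converts to a physical distance via d = 10^((μ + 5)/5) parsecs. A quick Python check of the figure above (a sketch only; the quoted ±0.21 uncertainty is ignored):

    mu = 31.01                    # distance modulus of NGC 1427A (from the text)

    d_pc = 10 ** ((mu + 5) / 5)   # distance in parsecs
    d_mpc = d_pc / 1e6
    d_mly = d_mpc * 3.2616        # 1 Mpc = 3.2616 million light-years
    print(f"{d_mpc:.1f} Mpc = {d_mly:.0f} million light-years")   # about 15.9 Mpc, or 52 Mly

This reproduces the roughly 52-million-light-year figure given in the text.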
NGC 1460 is a barred lenticular galaxy in the constellation Eridanus. It was discovered by John Herschel on November 28, 1837, and is moving away from the Milky Way at 1,341 km/s. Its Hubble classification of SB0 indicates a barred lenticular galaxy, but this one contains an unusually large bar that spreads from the core to the edge of the galaxy, one of the largest bars seen in barred lenticular galaxies. The first ultracompact dwarf galaxies were also discovered in the Fornax Cluster.

Distant universe

Fornax has been the target of investigations into the furthest reaches of the universe. The Hubble Ultra Deep Field is located within Fornax, and the Fornax Cluster, a small cluster of galaxies, lies primarily within Fornax. At a meeting of the Royal Astronomical Society in Britain, a team from the University of Queensland described 40 previously unknown "dwarf" galaxies in this constellation; follow-up observations with the Hubble Space Telescope and the European Southern Observatory's Very Large Telescope revealed that ultra-compact dwarfs are much smaller than previously known dwarf galaxies, about across. UDFj-39546284 is a candidate protogalaxy located in Fornax, although recent analyses have suggested it is likely to be a lower-redshift source. GRB 190114C was a notable gamma-ray burst from a galaxy 4.5 billion light-years away near the Fornax constellation, initially detected in January 2019. According to astronomers, it produced "the brightest light ever seen from Earth [to date] ... [the] biggest explosion in the Universe since the Big Bang".

Equivalents

In Chinese astronomy, the stars that correspond to Fornax are within the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ).

See also

Fornax (Chinese astronomy)

Notes

References

Cited texts

Ian Ridpath and Wil Tirion (2007). Stars and Planets Guide. Collins, London; Princeton University Press, Princeton.

External links

The Deep Photographic Guide to the Constellations: Fornax
Starry Night Photography – Fornax Constellation
The clickable Fornax
Fornax
[ "Astronomy" ]
2,982
[ "Fornax", "Southern constellations", "Constellations", "Constellations listed by Lacaille" ]
11,129
https://en.wikipedia.org/wiki/Flamsteed%20designation
A Flamsteed designation is a combination of a number and constellation name that uniquely identifies most naked-eye stars in the modern constellations visible from southern England. They are named for John Flamsteed, who first used them while compiling his Historia Coelestis Britannica. (Flamsteed used a telescope, and the catalogue also includes some stars which are relatively bright but not necessarily visible with the naked eye.)

Description

Flamsteed designations for stars are similar to Bayer designations, except that they use numbers instead of Greek and Roman letters. Each star is assigned a number and the Latin genitive of the constellation it lies in (see 88 modern constellations for a list of constellations and the genitive forms of their names). Flamsteed designations were assigned to 2,554 stars. The numbers were originally assigned in order of increasing right ascension within each constellation, but due to the effects of precession they are now slightly out of order in some places. This method of designating stars first appeared in a preliminary version of John Flamsteed's Historia Coelestis Britannica, published by Edmond Halley and Isaac Newton in 1712 without Flamsteed's approval. The final version of Flamsteed's catalogue, published in 1725 after his death, omitted the numerical designations altogether. The numbers now in use were assigned by the French astronomer Joseph Jérôme de Lalande and appeared in his 1783 almanac, Éphémérides des mouvemens célestes, which contained a revised edition of Flamsteed's catalogue. Lalande noted in his introduction that he got the idea from the unofficial 1712 edition. Flamsteed designations gained popularity throughout the eighteenth century and are now commonly used when no Bayer designation exists. Where a Bayer designation with a Greek letter does exist for a star, it is usually used in preference to the Flamsteed designation. (Flamsteed numbers are generally preferred to Bayer designations with Roman letters.) Examples of well-known stars that are usually referred to by their Flamsteed numbers include 51 Pegasi and 61 Cygni. Flamsteed designations are often used instead of the Bayer designation if the latter contains an extra attached number; for example, "55 Cancri" is more common than "Rho1 Cancri". There are examples of stars, such as 10 Ursae Majoris in Lynx, bearing Flamsteed designations for constellations in which they do not lie, just as there are for Bayer designations, because of the compromises that had to be made when the modern constellation boundaries were drawn up. Flamsteed's catalogue covered only the stars visible from Great Britain, and therefore stars of the far southern constellations have no Flamsteed numbers. Some stars, such as the nearby star 82 Eridani, were named in a major southern-hemisphere catalogue, the Uranometria Argentina by Benjamin Gould; these are Gould numbers, rather than Flamsteed numbers, and should be differentiated with a G, as in 82 G. Eridani. Except for a handful of cases, Gould numbers are not in common use. Similarly, Flamsteed-like designations assigned by other astronomers (for example, Hevelius) are no longer in general use. (A well-known exception is the globular cluster 47 Tucanae, from Bode's catalogue.) Of the stars entered in Flamsteed's catalogue, 84 are errors that proved not to exist in the sky; all of them except 11 Vulpeculae were nonetheless plotted on his star charts. 11 Vulpeculae was a nova, now known as CK Vulpeculae. Many of these errors were caused by arithmetic mistakes made by Flamsteed. Flamsteed also observed Uranus in 1690 but did not recognise it as a planet, entering it into his catalogue as a star called "34 Tauri".
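The numbering rule described above is simple to state: within each constellation, rank the stars by increasing right ascension and number them from 1. A minimal Python sketch of that rule (the star data is hypothetical, purely for illustration, and a modern re-sort would not exactly reproduce the historical numbers, since precession has shuffled the order in places):

    def assign_flamsteed_numbers(stars):
        """stars: iterable of (name, constellation_genitive, ra_hours) tuples.
        Returns a dict mapping each name to a Flamsteed-style designation."""
        by_constellation = {}
        for name, constellation, ra in stars:
            by_constellation.setdefault(constellation, []).append((ra, name))
        designations = {}
        for constellation, members in by_constellation.items():
            # Number stars 1, 2, 3, ... in order of increasing right ascension.
            for number, (_, name) in enumerate(sorted(members), start=1):
                designations[name] = f"{number} {constellation}"
        return designations

    example = [("Star A", "Fornacis", 3.2), ("Star B", "Fornacis", 2.1), ("Star C", "Doradus", 4.5)]
    print(assign_flamsteed_numbers(example))
    # {'Star B': '1 Fornacis', 'Star A': '2 Fornacis', 'Star C': '1 Doradus'}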
List of constellations using Flamsteed star designations

There are 52 constellations that primarily use Flamsteed designations; their stars are listed in the appropriate list for each constellation. In addition, several stars in Puppis, and a small number of stars in Centaurus and Lupus, have been given Flamsteed designations.

See also

Stellar designations and names
Table of stars with Flamsteed designations

References

External links

Flamsteed numbers – where they really came from
Ian Ridpath's Star Tales
Flamsteed designation
[ "Astronomy" ]
917
[ "Astronomical catalogues", "Astronomical objects", "Works about astronomy" ]
11,144
https://en.wikipedia.org/wiki/Fresco
Fresco (plural frescos or frescoes) is a technique of mural painting executed upon freshly laid ("wet") lime plaster. Water is used as the vehicle for the dry-powder pigment to merge with the plaster, and with the setting of the plaster, the painting becomes an integral part of the wall. The word fresco is derived from the Italian adjective fresco, meaning "fresh", and may thus be contrasted with fresco-secco or secco mural painting techniques, which are applied to dried plaster to supplement painting in fresco. The fresco technique has been employed since antiquity and is closely associated with Italian Renaissance painting. The word fresco is commonly and inaccurately used in English to refer to any wall painting regardless of the plaster technology or binding medium. This contributes, in part, to a misconception that the most geographically and temporally common wall-painting technology was painting into wet lime plaster. Even in apparently buon fresco technology, the use of supplementary organic materials was widespread, if underrecognized.

Technology

Buon fresco pigment is mixed with room-temperature water and is used on a thin layer of wet, fresh plaster, called the intonaco (after the Italian word for plaster). Because of the chemical makeup of the plaster, a binder is not required: the pigment mixed solely with the water sinks into the intonaco, which itself becomes the medium holding the pigment. The pigment is absorbed by the wet plaster; after a number of hours, the plaster dries in reaction to air, and it is this chemical reaction which fixes the pigment particles in the plaster. The chemical processes are as follows:

calcination of limestone in a lime kiln: CaCO3 → CaO + CO2
slaking of quicklime: CaO + H2O → Ca(OH)2
setting of the lime plaster: Ca(OH)2 + CO2 → CaCO3 + H2O

In painting buon fresco, a rough underlayer called the arriccio is added to the whole area to be painted and allowed to dry for some days. Many artists sketched their compositions on this underlayer, which would never be seen, in a red pigment called sinopia, a name also used to refer to these under-paintings. Later, new techniques for transferring paper drawings to the wall were developed. The main lines of a drawing made on paper were pricked over with a point, the paper held against the wall, and a bag of soot (spolvero) banged on them to produce black dots along the lines. If the painting was to be done over an existing fresco, the surface would be roughened to provide better adhesion. On the day of painting, the intonaco, a thinner, smooth layer of fine plaster, was added to the amount of wall that was expected to be completed that day, sometimes matching the contours of the figures or the landscape, but more often just starting from the top of the composition. This area is called the giornata ("day's work"), and the different day stages can usually be seen in a large fresco as faint seams that separate one from the next. Buon frescoes are difficult to create because of the deadline associated with the drying plaster. Generally, a layer of plaster will require ten to twelve hours to dry; ideally, an artist would begin to paint after one hour and continue until two hours before the drying time—giving seven to nine hours' working time. Once a giornata is dried, no more buon fresco can be done, and the unpainted intonaco must be removed with a tool before starting again the next day. If mistakes have been made, it may also be necessary to remove the whole intonaco for that area—or to change them later, a secco. An indispensable component of this process is the carbonatation of the lime, which fixes the colour in the plaster, ensuring the durability of the fresco for future generations.
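The setting reaction above is also what locks the pigment into the wall, and a little stoichiometry shows how much carbon dioxide a curing plaster takes up in the process. A back-of-envelope sketch in Python (purely illustrative, using standard molar masses; real plaster carbonates slowly from the surface inward and rarely carbonates completely):

    # Setting of lime plaster: Ca(OH)2 + CO2 -> CaCO3 + H2O
    M_SLAKED_LIME = 74.09   # molar mass of Ca(OH)2 in g/mol
    M_CO2 = 44.01           # molar mass of CO2 in g/mol
    M_CALCITE = 100.09      # molar mass of CaCO3 in g/mol

    lime_kg = 1.0                                  # 1 kg of slaked lime in the plaster
    moles = lime_kg * 1000 / M_SLAKED_LIME         # about 13.5 mol
    co2_kg = moles * M_CO2 / 1000                  # CO2 absorbed from the air
    calcite_kg = moles * M_CALCITE / 1000          # calcite that binds the pigment
    print(f"absorbs {co2_kg:.2f} kg CO2, forms {calcite_kg:.2f} kg CaCO3")
    # about 0.59 kg of CO2 absorbed and 1.35 kg of calcite formed per kg of slaked lime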
A technique used in the popular frescoes of Michelangelo and Raphael was to scrape indentations into certain areas of the plaster while still wet to increase the illusion of depth and to accent certain areas over others. The eyes of the figures in the School of Athens are sunken in using this technique, which causes them to seem deeper and more pensive. Michelangelo used this technique as part of his trademark "outlining" of the central figures within his frescoes. In a wall-sized fresco, there may be ten to twenty or even more giornate, or separate areas of plaster. After five centuries, the giornate, which were originally nearly invisible, have sometimes become visible, and in many large-scale frescoes these divisions may be seen from the ground. Additionally, the border between giornate was often covered by an a secco painting, which has since fallen off. One of the first painters in the post-classical period to use this technique was the Isaac Master (or Master of the Isaac fresco, a name used to refer to the unknown master of a particular painting) in the Upper Basilica of Saint Francis in Assisi. A person who creates frescoes is called a frescoist.

Other types of wall painting

A secco or fresco-secco painting is done on dry plaster (secco meaning "dry" in Italian). The pigments thus require a binding medium, such as egg (tempera), glue or oil, to attach the pigment to the wall. It is important to distinguish between a secco work done on top of buon fresco, which according to most authorities was in fact standard from the Middle Ages onwards, and work done entirely a secco on a blank wall. Generally, buon fresco works are more durable than any a secco work added on top of them, because a secco work lasts better on a roughened plaster surface, whilst true fresco should have a smooth one. The additional a secco work would be done to make changes, and sometimes to add small details, but also because not all colours can be achieved in true fresco: only some pigments work chemically in the very alkaline environment of fresh lime-based plaster. Blue was a particular problem, and skies and blue robes were often added a secco, because neither azurite blue nor lapis lazuli, the only two blue pigments then available, works well in wet fresco. It has also become increasingly clear, thanks to modern analytical techniques, that even in the early Italian Renaissance painters quite frequently employed a secco techniques so as to allow the use of a broader range of pigments. In most early examples this work has now entirely vanished, but a whole painting done a secco on a surface roughened to give a key for the paint may survive very well, although damp is more threatening to it than to buon fresco. A third type, called mezzo-fresco, is painted on nearly dry intonaco—firm enough not to take a thumb-print, says the sixteenth-century author Ignazio Danti—so that the pigment penetrates only slightly into the plaster. By the end of the sixteenth century this had largely displaced buon fresco, and was used by painters such as Giovanni Battista Tiepolo or Michelangelo. This technique had, in reduced form, the advantages of a secco work.
The three key advantages of work done entirely a secco were that it was quicker, mistakes could be corrected, and the colours varied less from when applied to when fully dry—in wet fresco there was a considerable change. For wholly a secco work, the intonaco is laid with a rougher finish, allowed to dry completely, and then usually given a key by rubbing with sand. The painter then proceeds much as he or she would on a canvas or wood panel.

History

Egypt and ancient Near East

The first known Egyptian fresco was found in Tomb 100 at Hierakonpolis, and dated to . Several of the themes and designs visible in the fresco are otherwise known from other Naqada II objects, such as the Gebel el-Arak Knife: it shows the scene of a "Master of Animals" (a man fighting against two lions), individual fighting scenes, and Egyptian and foreign boats. Ancient Egyptians painted many tombs and houses, but those wall paintings are not frescoes. An old fresco from Mesopotamia is the Investiture of Zimri-Lim (modern Syria), dating from the early 18th century BC.

Aegean civilizations

The oldest frescoes done in the buon fresco method date from the first half of the second millennium BCE, during the Bronze Age, and are found among the Aegean civilizations, more precisely in Minoan art from the island of Crete and other islands of the Aegean Sea. The most famous of these, the Bull-Leaping Fresco, depicts a sacred ceremony in which individuals jump over the backs of large bulls. The oldest surviving Minoan frescoes are found on the island of Santorini (classically known as Thera), dated to the Neo-Palatial period. While some similar frescoes have been found in other locations around the Mediterranean basin, particularly in Egypt and Morocco, their origins are subject to speculation. Some art historians believe that fresco artists from Crete may have been sent to various locations as part of a trade exchange, a possibility which raises to the fore the importance of this art form within the society of the times. The most common form of fresco was Egyptian wall paintings in tombs, usually using the a secco technique.

Classical antiquity

Frescoes were also painted in ancient Greece, but few of these works have survived. In southern Italy, at Paestum, which was a Greek colony of Magna Graecia, a tomb containing frescoes dating back to 470 BC, the so-called Tomb of the Diver, was discovered in June 1968. These frescoes depict scenes of the life and society of ancient Greece and constitute valuable historical testimony. One shows a group of men reclining at a symposium, while another shows a young man diving into the sea. Etruscan frescoes, dating from the 4th century BC, have been found in the Tomb of Orcus near Tarquinia, Italy. The richly decorated Thracian frescoes of the Tomb of Kazanlak date back to the 4th century BC; the tomb is a UNESCO-protected World Heritage Site. Roman wall paintings, such as those at the magnificent Villa dei Misteri (1st century BC) in the ruins of Pompeii, and others at Herculaneum, were completed in buon fresco. Roman (Christian) frescoes from the 1st to 2nd centuries AD were found in catacombs beneath Rome, and Byzantine icons were also found in Cyprus, Crete, Ephesus, Cappadocia, and Antioch. Roman frescoes were done by the artist painting the artwork on the still-damp plaster of the wall, so that the painting is part of the wall, actually colored plaster. A historical collection of ancient Christian frescoes can also be found in the Churches of Göreme.
India

Thanks to a large number of ancient rock-cut cave temples, valuable ancient and early medieval frescoes have been preserved in more than 20 locations in India. The frescoes on the ceilings and walls of the Ajanta Caves were painted between and are the oldest known frescoes in India. They depict the Jataka tales, stories of the Buddha's life in former existences as a Bodhisattva. The narrative episodes are depicted one after another, although not in a linear order. Their identification has been a core area of research on the subject since the time of the site's rediscovery in 1819. Other locations with valuable preserved ancient and early medieval frescoes include the Bagh Caves, Ellora Caves, Sittanavasal, Armamalai Cave, the Badami Cave Temples, and other sites. Frescoes have been made in several techniques, including the tempera technique. The later Chola paintings were discovered in 1931 within the circumambulatory passage of the Brihadisvara Temple in India and are the first Chola specimens discovered. Researchers have discovered the technique used in these frescoes: a smooth batter of limestone mixture was applied over the stones, which took two to three days to set, and within that short span such large paintings were painted with natural organic pigments. During the Nayak period the Chola paintings were painted over. The Chola frescoes lying underneath express an ardent spirit of Saivism. They are probably contemporaneous with the completion of the temple by Rajaraja Cholan the Great. Frescoes in the Dogra/Pahari style of painting exist in their unique form at the Sheesh Mahal of Ramnagar (105 km from Jammu and 35 km west of Udhampur). Scenes from the epics Mahabharat and Ramayan, along with portraits of local lords, form the subject matter of these wall paintings. The Rang Mahal of Chamba (Himachal Pradesh) is another site of historic Dogri frescoes, with wall paintings depicting scenes of Draupadi Cheer Haran and Radha-Krishna Leela; these are preserved at the National Museum in New Delhi, in a chamber called the Chamba Rang Mahal. During the Mughal era, frescoes were used to decorate interior walls and the insides of the ceilings of domes.

Sri Lanka

The Sigiriya Frescoes are found at Sigiriya in Sri Lanka, painted during the reign of King Kashyapa I (ruled 477–495 AD). The generally accepted view is that they are portrayals of women of the royal court of the king, depicted as celestial nymphs showering flowers upon the humans below. They bear some resemblance to the Gupta style of painting found in the Ajanta Caves in India. They are, however, far more enlivened and colorful, and uniquely Sri Lankan in character. While some scholars contend that these frescoes are the only surviving secular art from antiquity found in Sri Lanka today, others argue that they are Buddhist in nature (potentially representing goddesses from Tusita heaven). The painting technique used on the Sigiriya paintings is "fresco lustro". It varies slightly from the pure fresco technique in that it also contains a mild binding agent or glue. This gives the painting added durability, as clearly demonstrated by the fact that they have survived, exposed to the elements, for over 1,500 years. Located in a small sheltered depression a hundred meters above the ground, only 19 survive today. Ancient references, however, refer to the existence of as many as five hundred of these frescoes.
Middle Ages

The late Medieval period and the Renaissance saw the most prominent use of fresco, particularly in Italy, where most churches and many government buildings still feature fresco decoration. This change coincided with the reevaluation of murals in the liturgy. Romanesque churches in Catalonia were richly painted in the 12th and 13th centuries, with both decorative and educational—for the illiterate faithful—roles, as can be seen in the MNAC in Barcelona, which keeps a large collection of Catalan Romanesque art. In Denmark too, church wall paintings or kalkmalerier were widely used in the Middle Ages (first Romanesque, then Gothic) and can be seen in some 600 Danish churches as well as in churches in the south of Sweden, which was Danish at the time. One of the rare examples of Islamic fresco painting can be seen in Qasr Amra, the desert palace of the Umayyads in 8th-century Jordan.

Early modern Europe

Fresco painting continued into the Baroque in southern Europe, for churches and especially palaces. Gianbattista Tiepolo was arguably the last major exponent of this tradition, with huge schemes for palaces in Madrid and in Würzburg in Germany. Northern Romania (the historical region of Moldavia) boasts about a dozen painted monasteries, completely covered with frescoes inside and out, that date from the last quarter of the 15th century to the second quarter of the 16th century. The most remarkable are the monastic foundations at Voroneţ (1487), Arbore (1503), Humor (1530), and Moldoviţa (1532). Suceviţa, dating from 1600, represents a late return to the style developed some 70 years earlier. The tradition of painted churches continued into the 19th century in other parts of Romania, although never to the same extent. Henri Clément Serveau produced several frescoes, including a three-by-six-meter painting for the Lycée de Meaux, where he was once a student. He directed the École de fresques at , and decorated the Pavillon du Tourisme at the 1937 (Paris), Pavillon de la Ville de Paris; now at the Musée d'Art Moderne de la Ville de Paris. In 1954 he realized a fresco for the Cité Ouvrière du Laboratoire Débat, Garches. He also executed mural decorations for the Plan des anciennes enceintes de Paris in the Musée Carnavalet. The Foujita chapel in Reims, completed in 1966, is an example of modern frescoes, its interior being painted with religious scenes by the School of Paris painter Tsuguharu Foujita. In 1996, it was designated an historic monument by the French government.

Mexican muralism

The famous Mexican artists José Clemente Orozco, Fernando Leal, David Siqueiros, and Diego Rivera renewed the art of fresco painting in the 20th century. Orozco, Siqueiros, Rivera, and his wife Frida Kahlo contributed more to the history of Mexican fine arts and to the reputation of Mexican art in general than anybody else. Channeling pre-Columbian Mexican artworks, including the true frescoes at Teotihuacan, Orozco, Siqueiros, Rivera, and Fernando Leal established the art movement known as Mexican Muralism.

Contemporary

There have been comparatively few frescoes created since the 1960s, but there are some significant exceptions. The American artist Brice Marden's monochrome works, first shown in 1966 at Bykert Gallery, New York, were inspired by frescoes and by "watching masons plastering stucco walls." While Marden employed the imagistic effects of fresco, David Novros was developing a 50-year practice around the technique. Novros is an American painter and a muralist of geometric abstraction.
In 1968 Donald Judd commissioned Novros to create a work at 101 Spring Street, New York, NY, soon after he had purchased the building. Novros used medieval techniques to create the mural by "first preparing a full-scale cartoon, which he transferred to the wet plaster using the traditional pouncing technique," the act of passing powdered pigment onto the plaster through tiny perforations in a cartoon. The surface unity of the fresco was important to Novros in that the pigment he used bonded with the drying plaster, becoming part of the wall rather than a surface coating. This site-specific work was Novros's first true fresco, which was restored by the artist in 2013. The American painter James Hyde first presented frescoes in New York at the Esther Rand Gallery, Tompkins Square Park, in 1985. At that time Hyde was using the true fresco technique on small panels made of cast concrete arranged on the wall. Throughout the next decade Hyde experimented with multiple rigid supports for the fresco plaster, including composite board and plate glass. In 1991, at John Good Gallery in New York City, Hyde debuted true fresco applied to an enormous block of Styrofoam. Holland Cotter of the New York Times described the work as "objectifying some of the individual elements that have made modern paintings paintings." While Hyde's work "ranges from paintings on photographic prints to large-scale installations, photography, and abstract furniture design", his frescoes on Styrofoam have been a significant form of his work since the 1980s. The frescoes have been shown throughout Europe and the United States. In Artforum, David Pagel wrote, "like ruins from some future archaeological dig, Hyde's nonrepresentational frescoes on large chunks of Styrofoam give suggestive shape to the fleeting landscape of the present." Over its long history, practitioners of fresco have always taken a careful methodological approach; Hyde's frescoes, by contrast, are done improvisationally. The contemporary disposability of the Styrofoam structure contrasts with the permanence of the classical fresco technique. In 1993, Hyde mounted four automobile-sized frescoes on Styrofoam suspended from a brick wall. Progressive Insurance commissioned this site-specific work for the monumental 80-foot atrium in its headquarters in Cleveland, Ohio.

Selected examples of frescoes

Ancient and Early Medieval

Ancient Aegean frescoes
Etruscan tomb frescoes
Frescoes of Pompeii
Frescoes from the Roman catacombs (see also Early Christian art and architecture)
Castelseprio

North Macedonia

Church of Saint Panteleimon, Gorno Nerezi
Church "Holy Mother of God Perivleptos", Ohrid
Church of St. George, Staro Nagoričane

Bulgaria

Alexander Nevsky Cathedral, Sofia
Bachkovo Monastery
Boyana Church
Church of St. George, Sofia
Rila Monastery
Rock-hewn Churches of Ivanovo
Roman Tomb (Silistra)
Thracian Tomb of Kazanlak
Thracian tomb of Aleksandrovo
Transfiguration Monastery

Colombia

Santiago Martinez Delgado frescoed a mural in the Colombian Congress Building, and also in the Colombian National Building.
Czechia

Rotunda of Saint Catherine in Znojmo

France

Saint-Esprit, Paris
Palais des Papes, Avignon, 14th-century frescoes by Matteo Giovanetti
Abbey Church of Saint-Savin-sur-Gartempe, Romanesque frescoes
Albi cathedral, Renaissance frescoes
Val-de-Grâce, Paris, Baroque fresco by Pierre Mignard on the cupola

Italy

Late Medieval–Quattrocento panels (including Giotto(?), Lorenzetti, Martini and others) in the upper and lower Basilica of San Francesco d'Assisi
Giotto, Cappella degli Scrovegni (Arena Chapel), Padua
Camposanto, Pisa
Masaccio, Brancacci Chapel, Santa Maria del Carmine, Florence
Ambrogio Lorenzetti, Palazzo Pubblico, Siena
Piero della Francesca, Chiesa di San Francesco, Arezzo
Ghirlandaio, Cappella Tornabuoni, Santa Maria Novella, Florence
The Last Supper, Leonardo da Vinci, Milan (technically a tempera on plaster and stone, not a true fresco)
Sistine Chapel wall series: Botticelli, Perugino, Rosselli, Signorelli, and Ghirlandaio
Luca Signorelli, Chapel of San Brizio, Duomo, Orvieto

High Renaissance

Michelangelo, Sistine Chapel ceiling
Raphael, Raphael Rooms
Raphael, Villa Farnesina
Giulio Romano's Palazzo del Tè, Mantua
Mantegna, Camera degli Sposi, Palazzo Ducale, Mantua
The dome of the Florence Cathedral
The Loves of the Gods, Annibale Carracci, Palazzo Farnese, Rome
Allegory of Divine Providence and Barberini Power, Pietro da Cortona, Palazzo Barberini
Ceilings, Giovanni Battista Tiepolo, (New Residenz) Würzburg, (Royal Palace) Madrid, (Villa Pisani) Stra, and others; wall scenes (Villa Valmarana and Palazzo Labia)
Nave ceiling, Andrea Pozzo, Sant'Ignazio, Rome

Mexico

Fresco cycle of The Miracles of the Virgin of Guadalupe by Fernando Leal, at the Basilica of Guadalupe, Mexico City
Fresco cycle of Bolivar's Epic by Fernando Leal, at the Colegio de San Ildefonso, Mexico City
Note: a fresco cycle is a series of frescoes done about a particular subject.

The Netherlands

Château St. Gerlach

Serbian Medieval

Visoki Dečani
Gračanica monastery
Studenica monastery
Mileševa monastery

United States

Prometheus in Pomona College's Frary Dining Hall. Painted in 1930 by José Clemente Orozco, it is the first example of a modern Mexican fresco mural in the U.S.
St. Ann Arts and Cultural Center in Woonsocket, RI. Home of the largest collection of fresco paintings in North America.

Conservation of frescoes

The climate and environment of Venice have proved to be a problem for frescoes and other works of art in the city for centuries. The city is built on a lagoon in northern Italy. The humidity and the rise of water over the centuries have created a phenomenon known as rising damp: as the lagoon water rises and seeps into the foundation of a building, the water is absorbed and rises up through the walls, often causing damage to frescoes. Venetians have thus become quite adept at conservation methods for frescoes. The mold Aspergillus versicolor can grow after flooding and consume nutrients from frescoes. The following is the process that was used when rescuing frescoes in La Fenice, a Venetian opera house, but the same process can be used for similarly damaged frescoes. First, a protective and supporting bandage of cotton gauze and polyvinyl alcohol is applied. Difficult sections are removed with soft brushes and localized vacuuming. The other areas that are easier to remove (because they had been damaged by less water) are removed with a paper-pulp compress saturated with bicarbonate of ammonia solution, followed by deionized water.
These sections are strengthened and reattached, then cleansed with base-exchange resin compresses, and the wall and pictorial layer are strengthened with barium hydrate. The cracks and detachments are stopped with lime putty and injected with an epoxy resin loaded with micronized silica.

See also

Church frescos in Denmark
Church frescos in Sweden
Gambier Parry process
Haveli
Kandyan period frescoes

References

External links

Museum of Ancient Inventions: Roman-Style Fresco, Italy, 50 AD
Sigiriya Frescoes, The Mary B. Wheeler Collection, University of Pennsylvania Library
Fresco
[ "Chemistry", "Engineering" ]
5,268
[ "Building engineering", "Coatings", "Plastering" ]